2553 mac address should be a dladm link property
--- old/usr/src/uts/common/io/mac/mac.c
+++ new/usr/src/uts/common/io/mac/mac.c
1 1 /*
2 2 * CDDL HEADER START
3 3 *
4 4 * The contents of this file are subject to the terms of the
5 5 * Common Development and Distribution License (the "License").
6 6 * You may not use this file except in compliance with the License.
7 7 *
8 8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 9 * or http://www.opensolaris.org/os/licensing.
10 10 * See the License for the specific language governing permissions
11 11 * and limitations under the License.
12 12 *
13 13 * When distributing Covered Code, include this CDDL HEADER in each
14 14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 15 * If applicable, add the following below this CDDL HEADER, with the
16 16 * fields enclosed by brackets "[]" replaced with your own identifying
17 17 * information: Portions Copyright [yyyy] [name of copyright owner]
18 18 *
19 19 * CDDL HEADER END
20 20 */
21 21
22 22 /*
23 23 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
24 24 */
25 25
26 26 /*
27 27 * MAC Services Module
28 28 *
29 29 * The GLDv3 framework locking - The MAC layer
30 30 * --------------------------------------------
31 31 *
32 32 * The MAC layer is central to the GLD framework and can provide the locking
33 33 * framework needed for itself and for the use of MAC clients. MAC end points
34 34 * are fairly disjoint and don't share a lot of state. So a coarse grained
35 35 * multi-threading scheme is to single thread all create/modify/delete or set
36 36 * type of control operations on a per mac end point while allowing data
37 37 * threads to run concurrently.
38 38 *
39 39 * Control operations (set) that modify a mac end point are always serialized on
40 40 * a per mac end point basis. We have at most 1 such thread per mac end point
41 41 * at a time.
42 42 *
43 43 * All other operations that are not serialized are essentially multi-threaded.
44 44 * For example, a control operation (get) like getting statistics, which may not
45 45 * care about reading values atomically, or data threads sending or receiving
46 46 * data. Mostly these types of operations don't modify the control state. Any
47 47 * state these operations care about is protected using traditional locks.
48 48 *
49 49 * The perimeter only serializes serial operations. It does not imply there
50 50 * aren't any other concurrent operations. However a serialized operation may
51 51 * sometimes need to make sure it is the only thread. In this case it needs
52 52 * to use reference counting mechanisms to cv_wait until any current data
53 53 * threads are done.
54 54 *
55 55 * The mac layer itself does not hold any locks across a call to another layer.
56 56 * The perimeter is however held across a down call to the driver to make the
57 57 * whole control operation atomic with respect to other control operations.
58 58 * Also the data path and get type control operations may proceed concurrently.
59 59 * These operations synchronize with the single serial operation on a given mac
60 60 * end point using regular locks. The perimeter ensures that conflicting
61 61 * operations like say a mac_multicast_add and a mac_multicast_remove on the
62 62 * same mac end point don't interfere with each other and also ensures that the
63 63 * changes in the mac layer and the call to the underlying driver to say add a
64 64 * multicast address are done atomically without interference from a thread
65 65 * trying to delete the same address.
66 66 *
67 67 * For example, consider
68 68 * mac_multicst_add()
69 69 * {
70 70 * mac_perimeter_enter(); serialize all control operations
71 71 *
72 72 * grab list lock protect against access by data threads
73 73 * add to list
74 74 * drop list lock
75 75 *
76 76 * call driver's mi_multicst
77 77 *
78 78 * mac_perimeter_exit();
79 79 * }
80 80 *
81 81 * To lessen the number of serialization locks and simplify the lock hierarchy,
82 82 * we serialize all the control operations on a per mac end point basis by using a
83 83 * single serialization lock called the perimeter. We allow recursive entry into
84 84 * the perimeter to facilitate use of this mechanism by both the mac client and
85 85 * the MAC layer itself.
86 86 *
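 * As an illustration of the recursive entry (a minimal pseudocode
 * sketch, not a verbatim trace of this file):
 *
 * mac_perim_enter_by_mh(mh, &mph); owner = curthread, ocnt = 1
 * mac_unicast_add(...); re-enters: ocnt 1 -> 2 -> 1
 * mac_perim_exit(mph); ocnt = 0, waiters signalled
 *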
87 87 * MAC client means an entity that does an operation on a mac handle
88 88 * obtained from a mac_open/mac_client_open. Similarly MAC driver means
89 89 * an entity that does an operation on a mac handle obtained from a
90 90 * mac_register. An entity could be both client and driver but on different
91 91 * handles, e.g. aggr, and should only make the corresponding mac interface calls
92 92 * i.e. mac driver interface or mac client interface as appropriate for that
93 93 * mac handle.
94 94 *
95 95 * General rules.
96 96 * -------------
97 97 *
98 98 * R1. The lock order of upcall threads is naturally opposite to downcall
99 99 * threads. Hence upcalls must not hold any locks across layers for fear of
100 100 * recursive lock enter and lock order violation. This applies to all layers.
101 101 *
102 102 * R2. The perimeter is just another lock. Since it is held in the down
103 103 * direction, acquiring the perimeter in an upcall is prohibited as it would
104 104 * cause a deadlock. This applies to all layers.
105 105 *
106 106 * Note that upcalls that need to grab the mac perimeter (for example
107 107 * mac_notify upcalls) can still achieve that by posting the request to a
108 108 * thread, which can then grab all the required perimeters and locks in the
109 109 * right global order. Note that in the above example the mac layer itself
110 110 * won't grab the mac perimeter in the mac_notify upcall, instead the upcall
111 111 * to the client must do that. Please see the aggr code for an example.
112 112 *
113 113 * MAC client rules
114 114 * ----------------
115 115 *
116 116 * R3. A MAC client may use the MAC provided perimeter facility to serialize
117 117 * control operations on a per mac end point basis. It does this by acquiring
118 118 * and holding the perimeter across a sequence of calls to the mac layer.
119 119 * This ensures atomicity across the entire block of mac calls. In this
120 120 * model the MAC client must not hold any client locks across the calls to
121 121 * the mac layer. This model is the preferred solution.
122 122 *
123 123 * R4. However if a MAC client has a lot of global state across all mac end
124 124 * points, the per mac end point serialization may not be sufficient. In this
125 125 * case the client may choose to use global locks or use its own serialization.
126 126 * To avoid deadlocks, these client layer locks held across the mac calls
127 127 * in the control path must never be acquired by the data path for the reason
128 128 * mentioned below.
129 129 *
130 130 * (Assume that a control operation that holds a client lock blocks in the
131 131 * mac layer waiting for upcall reference counts to drop to zero. If an upcall
132 132 * data thread that holds this reference count tries to acquire the same
133 133 * client lock subsequently, it will deadlock.)
134 134 *
135 135 * A MAC client may follow either the R3 model or the R4 model, but can't
136 136 * mix both. In the former, the hierarchy is Perim -> client locks, but in
137 137 * the latter it is client locks -> Perim.
138 138 *
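 * As an illustration (a pseudocode sketch; mcl_lock is a hypothetical
 * client lock, not something defined in this file). Under R3:
 *
 * mac_perim_enter_by_mh(mh, &mph);
 * mutex_enter(&mcl_lock); ... mutex_exit(&mcl_lock);
 * mac_multicast_add(...);
 * mac_perim_exit(mph);
 *
 * Under R4 the client lock comes first and the perimeter is only
 * entered internally by the mac call itself:
 *
 * mutex_enter(&mcl_lock);
 * mac_multicast_add(...);
 * mutex_exit(&mcl_lock);
 *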
139 139 * R5. MAC clients must make MAC calls (excluding data calls) in a cv_wait'able
140 140 * context since they may block while trying to acquire the perimeter.
141 141 * In addition some calls may block waiting for upcall refcnts to come down to
142 142 * zero.
143 143 *
144 144 * R6. MAC clients must make sure that they are single threaded and all threads
145 145 * from the top (in particular data threads) have finished before calling
146 146 * mac_client_close. The MAC framework does not track the number of client
147 147 * threads using the mac client handle. Also mac clients must make sure
148 148 * they have undone all the control operations before calling mac_client_close.
149 149 * For example mac_unicast_remove/mac_multicast_remove to undo the corresponding
150 150 * mac_unicast_add/mac_multicast_add.
151 151 *
152 152 * MAC framework rules
153 153 * -------------------
154 154 *
155 155 * R7. The mac layer itself must not hold any mac layer locks (except the mac
156 156 * perimeter) across a call to any other layer from the mac layer. The call to
157 157 * any other layer could be via mi_* entry points, classifier entry points into
158 158 * the driver or via upcall pointers into layers above. The mac perimeter may
159 159 * be acquired or held only in the down direction, e.g. when calling into
160 160 * a mi_* driver entry point to provide atomicity of the operation.
161 161 *
162 162 * R8. Since it is not guaranteed (see R14) that drivers won't hold locks across
163 163 * mac driver interfaces, the MAC layer must provide a cut out for control
164 164 * interfaces like upcall notifications and start them in a separate thread.
165 165 *
166 166 * R9. Note that locking order also implies a plumbing order. For example
167 167 * VNICs are allowed to be created over aggrs, but not vice-versa. An attempt
168 168 * to plumb in any other order must be failed at mac_open time, otherwise it
169 169 * could lead to deadlocks due to inverse locking order.
170 170 *
171 171 * R10. MAC driver interfaces must not block since the driver could call them
172 172 * in interrupt context.
173 173 *
174 174 * R11. Walkers must preferably not hold any locks while calling walker
175 175 * callbacks. Instead these can operate on reference counts. In simple
176 176 * callbacks it may be ok to hold a lock and call the callbacks, but this is
177 177 * harder to maintain in the general case of arbitrary callbacks.
178 178 *
179 179 * R12. The MAC layer must protect upcall notification callbacks using reference
180 180 * counts rather than holding locks across the callbacks.
181 181 *
182 182 * R13. Given the variety of drivers, it is preferable if the MAC layer can make
183 183 * sure that any pointers (such as mac ring pointers) it passes to the driver
184 184 * remain valid until mac unregister time. Currently the mac layer achieves
185 185 * this by using generation numbers for rings and freeing the mac rings only
186 186 * at unregister time. The MAC layer must provide a layer of indirection and
187 187 * must not expose underlying driver rings or driver data structures/pointers
188 188 * directly to MAC clients.
189 189 *
190 190 * MAC driver rules
191 191 * ----------------
192 192 *
193 193 * R14. It would be preferable if MAC drivers don't hold any locks across any
194 194 * mac call. However at a minimum they must not hold any locks across data
195 195 * upcalls. They must also make sure that all references to mac data structures
196 196 * are cleaned up and that it is single threaded at mac_unregister time.
197 197 *
198 198 * R15. MAC driver interfaces don't block and so the action may be done
199 199 * asynchronously in a separate thread as for example handling notifications.
200 200 * The driver must not assume that the action is complete when the call
201 201 * returns.
202 202 *
203 203 * R16. Drivers must maintain a generation number per Rx ring, and pass it
204 204 * back to mac_rx_ring(). They are expected to increment the generation
205 205 * number whenever the ring's stop routine is invoked.
206 206 * See comments in mac_rx_ring().
207 207 *
208 208 * R17. Similarly mi_stop is another synchronization point and the driver must
209 209 * ensure that all upcalls are done and there won't be any future upcall
210 210 * before returning from mi_stop.
211 211 *
212 212 * R18. The driver may assume that all set/modify control operations via
213 213 * the mi_* entry points are single threaded on a per mac end point.
214 214 *
215 215 * Lock and Perimeter hierarchy scenarios
216 216 * ---------------------------------------
217 217 *
218 218 * i_mac_impl_lock -> mi_rw_lock -> srs_lock -> s_ring_lock[i_mac_tx_srs_notify]
219 219 *
220 220 * ft_lock -> fe_lock [mac_flow_lookup]
221 221 *
222 222 * mi_rw_lock -> fe_lock [mac_bcast_send]
223 223 *
224 224 * srs_lock -> mac_bw_lock [mac_rx_srs_drain_bw]
225 225 *
226 226 * cpu_lock -> mac_srs_g_lock -> srs_lock -> s_ring_lock [mac_walk_srs_and_bind]
227 227 *
228 228 * i_dls_devnet_lock -> mac layer locks [dls_devnet_rename]
229 229 *
230 230 * Perimeters are ordered P1 -> P2 -> P3 from top to bottom in order of mac
231 231 * client to driver. In the case of clients that explicitly use the mac provided
232 232 * perimeter mechanism for their serialization, the hierarchy is
233 233 * Perimeter -> mac layer locks, since the client never holds any locks across
234 234 * the mac calls. In the case of clients that use their own locks the hierarchy
235 235 * is Client locks -> Mac Perim -> Mac layer locks. The client never explicitly
236 236 * calls mac_perim_enter/exit in this case.
237 237 *
238 238 * Subflow creation rules
239 239 * ---------------------------
240 240 * o If a user-specified cpulist is present on both the underlying link and its
241 241 * flows, each flow's cpulist must be a subset of the underlying link's.
242 242 * o If a user-specified fanout mode is present on both link and flow, the
243 243 * subflow fanout count has to be less than or equal to that of the
244 244 * underlying link. The cpu-bindings for the subflows will be a subset of
245 245 * those of the underlying link.
246 246 * o If no cpulist is specified on either the underlying link or the flow, the
247 247 * underlying link relies on a MAC tunable to provide out-of-the-box fanout.
248 248 * The subflow will have no cpulist (the subflow will be unbound).
249 249 * o If no cpulist is specified on the underlying link, a subflow can
250 250 * carry either a user-specified cpulist or fanout count. The cpu-bindings
251 251 * for the subflow need not adhere to the restriction that they be a subset
252 252 * of the underlying link's.
253 253 * o If the underlying link carries either a user-specified
254 254 * cpulist or fanout mode and the subflow specifies neither, the subflow will
255 255 * be created unbound.
256 256 * o While creating unbound subflows, bandwidth mode changes attempt to
257 257 * figure out a suitable fanout count. In such cases the fanout count will
258 258 * override the unbound cpu-binding behavior.
259 259 * o In addition to this, while cycling between flow and link properties, we
260 260 * impose a restriction that if a link property has a subflow with
261 261 * user-specified attributes, we will not allow changing the link property.
262 262 * The administrator needs to reset all the user specified properties for the
263 263 * subflows before attempting a link property change.
264 264 * Some of the above rules can be overridden by specifying additional command
265 265 * line options while creating or modifying link or subflow properties.
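 *
 * For example (illustrative only, assuming a link named net0):
 *
 * # dladm set-linkprop -p cpus=0,1,2,3 net0
 *
 * A subflow created over net0 with a user-specified cpulist must then
 * bind to a subset of CPUs 0-3.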
266 266 */
267 267
268 268 #include <sys/types.h>
269 269 #include <sys/conf.h>
270 270 #include <sys/id_space.h>
271 271 #include <sys/esunddi.h>
272 272 #include <sys/stat.h>
273 273 #include <sys/mkdev.h>
274 274 #include <sys/stream.h>
275 275 #include <sys/strsun.h>
276 276 #include <sys/strsubr.h>
277 277 #include <sys/dlpi.h>
278 278 #include <sys/list.h>
279 279 #include <sys/modhash.h>
280 280 #include <sys/mac_provider.h>
281 281 #include <sys/mac_client_impl.h>
282 282 #include <sys/mac_soft_ring.h>
283 283 #include <sys/mac_stat.h>
284 284 #include <sys/mac_impl.h>
285 285 #include <sys/mac.h>
286 286 #include <sys/dls.h>
287 287 #include <sys/dld.h>
288 288 #include <sys/modctl.h>
289 289 #include <sys/fs/dv_node.h>
290 290 #include <sys/thread.h>
291 291 #include <sys/proc.h>
292 292 #include <sys/callb.h>
293 293 #include <sys/cpuvar.h>
294 294 #include <sys/atomic.h>
295 295 #include <sys/bitmap.h>
296 296 #include <sys/sdt.h>
297 297 #include <sys/mac_flow.h>
298 298 #include <sys/ddi_intr_impl.h>
299 299 #include <sys/disp.h>
300 300 #include <sys/sdt.h>
301 301 #include <sys/vnic.h>
302 302 #include <sys/vnic_impl.h>
303 303 #include <sys/vlan.h>
304 304 #include <inet/ip.h>
305 305 #include <inet/ip6.h>
306 306 #include <sys/exacct.h>
307 307 #include <sys/exacct_impl.h>
308 308 #include <inet/nd.h>
309 309 #include <sys/ethernet.h>
310 310 #include <sys/pool.h>
311 311 #include <sys/pool_pset.h>
312 312 #include <sys/cpupart.h>
313 313 #include <inet/wifi_ioctl.h>
314 314 #include <net/wpa.h>
315 315
316 316 #define IMPL_HASHSZ 67 /* prime */
317 317
318 318 kmem_cache_t *i_mac_impl_cachep;
319 319 mod_hash_t *i_mac_impl_hash;
320 320 krwlock_t i_mac_impl_lock;
321 321 uint_t i_mac_impl_count;
322 322 static kmem_cache_t *mac_ring_cache;
323 323 static id_space_t *minor_ids;
324 324 static uint32_t minor_count;
325 325 static pool_event_cb_t mac_pool_event_reg;
326 326
327 327 /*
328 328 * Logging stuff. Perhaps mac_logging_interval could be broken into
329 329 * mac_flow_log_interval and mac_link_log_interval if we want to be
330 330 * able to schedule them differently.
331 331 */
332 332 uint_t mac_logging_interval;
333 333 boolean_t mac_flow_log_enable;
334 334 boolean_t mac_link_log_enable;
335 335 timeout_id_t mac_logging_timer;
336 336
337 337 /* for debugging, see MAC_DBG_PRT() in mac_impl.h */
338 338 int mac_dbg = 0;
339 339
340 340 #define MACTYPE_KMODDIR "mac"
341 341 #define MACTYPE_HASHSZ 67
342 342 static mod_hash_t *i_mactype_hash;
343 343 /*
344 344 * i_mactype_lock synchronizes threads that obtain references to mactype_t
345 345 * structures through i_mactype_getplugin().
346 346 */
347 347 static kmutex_t i_mactype_lock;
348 348
349 349 /*
350 350 * mac_tx_percpu_cnt
351 351 *
352 352 * Number of per cpu locks per mac_client_impl_t. Used by the transmit side
353 353 * in mac_tx to reduce lock contention. This is sized at boot time in mac_init.
354 354 * mac_tx_percpu_cnt_max is settable in /etc/system and must be a power of 2.
355 355 * Per cpu locks may be disabled by setting mac_tx_percpu_cnt_max to 1.
356 356 */
357 357 int mac_tx_percpu_cnt;
358 358 int mac_tx_percpu_cnt_max = 128;
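
/*
 * For example (a worked sketch): on a system that boots with 6 CPUs,
 * mac_init() rounds mac_tx_percpu_cnt up to 8 via (1 << highbit(6 - 1))
 * and then decrements it to 7, i.e. 2**3 - 1, so that the value can
 * double as a cheap mask when a transmit thread picks a per-cpu lock.
 */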
359 359
360 360 /*
361 361 * Call back functions for the bridge module. These are guaranteed to be valid
362 362 * when holding a reference on a link or when holding mip->mi_bridge_lock and
363 363 * mi_bridge_link is non-NULL.
364 364 */
365 365 mac_bridge_tx_t mac_bridge_tx_cb;
366 366 mac_bridge_rx_t mac_bridge_rx_cb;
367 367 mac_bridge_ref_t mac_bridge_ref_cb;
368 368 mac_bridge_ls_t mac_bridge_ls_cb;
369 369
370 370 static int i_mac_constructor(void *, void *, int);
371 371 static void i_mac_destructor(void *, void *);
372 372 static int i_mac_ring_ctor(void *, void *, int);
373 373 static void i_mac_ring_dtor(void *, void *);
374 374 static mblk_t *mac_rx_classify(mac_impl_t *, mac_resource_handle_t, mblk_t *);
375 375 void mac_tx_client_flush(mac_client_impl_t *);
376 376 void mac_tx_client_block(mac_client_impl_t *);
377 377 static void mac_rx_ring_quiesce(mac_ring_t *, uint_t);
378 378 static int mac_start_group_and_rings(mac_group_t *);
379 379 static void mac_stop_group_and_rings(mac_group_t *);
380 380 static void mac_pool_event_cb(pool_event_t, int, void *);
381 381
382 382 typedef struct netinfo_s {
383 383 list_node_t ni_link;
384 384 void *ni_record;
385 385 int ni_size;
386 386 int ni_type;
387 387 } netinfo_t;
388 388
389 389 /*
390 390 * Module initialization functions.
391 391 */
392 392
393 393 void
394 394 mac_init(void)
395 395 {
396 396 mac_tx_percpu_cnt = ((boot_max_ncpus == -1) ? max_ncpus :
397 397 boot_max_ncpus);
398 398
399 399 /* Upper bound is mac_tx_percpu_cnt_max */
400 400 if (mac_tx_percpu_cnt > mac_tx_percpu_cnt_max)
401 401 mac_tx_percpu_cnt = mac_tx_percpu_cnt_max;
402 402
403 403 if (mac_tx_percpu_cnt < 1) {
404 404 /* Someone set mac_tx_percpu_cnt_max to 0 or less */
405 405 mac_tx_percpu_cnt = 1;
406 406 }
407 407
408 408 ASSERT(mac_tx_percpu_cnt >= 1);
409 409 mac_tx_percpu_cnt = (1 << highbit(mac_tx_percpu_cnt - 1));
410 410 /*
411 411 * Make it of the form 2**N - 1 in the range
412 412 * [0 .. mac_tx_percpu_cnt_max - 1]
413 413 */
414 414 mac_tx_percpu_cnt--;
415 415
416 416 i_mac_impl_cachep = kmem_cache_create("mac_impl_cache",
417 417 sizeof (mac_impl_t), 0, i_mac_constructor, i_mac_destructor,
418 418 NULL, NULL, NULL, 0);
419 419 ASSERT(i_mac_impl_cachep != NULL);
420 420
421 421 mac_ring_cache = kmem_cache_create("mac_ring_cache",
422 422 sizeof (mac_ring_t), 0, i_mac_ring_ctor, i_mac_ring_dtor, NULL,
423 423 NULL, NULL, 0);
424 424 ASSERT(mac_ring_cache != NULL);
425 425
426 426 i_mac_impl_hash = mod_hash_create_extended("mac_impl_hash",
427 427 IMPL_HASHSZ, mod_hash_null_keydtor, mod_hash_null_valdtor,
428 428 mod_hash_bystr, NULL, mod_hash_strkey_cmp, KM_SLEEP);
429 429 rw_init(&i_mac_impl_lock, NULL, RW_DEFAULT, NULL);
430 430
431 431 mac_flow_init();
432 432 mac_soft_ring_init();
433 433 mac_bcast_init();
434 434 mac_client_init();
435 435
436 436 i_mac_impl_count = 0;
437 437
438 438 i_mactype_hash = mod_hash_create_extended("mactype_hash",
439 439 MACTYPE_HASHSZ,
440 440 mod_hash_null_keydtor, mod_hash_null_valdtor,
441 441 mod_hash_bystr, NULL, mod_hash_strkey_cmp, KM_SLEEP);
442 442
443 443 /*
444 444 * Allocate an id space to manage minor numbers. The range of the
445 445 * space will be from MAC_MAX_MINOR+1 to MAC_PRIVATE_MINOR-1. This
446 446 * leaves half of the 32-bit minors available for driver private use.
447 447 */
448 448 minor_ids = id_space_create("mac_minor_ids", MAC_MAX_MINOR+1,
449 449 MAC_PRIVATE_MINOR-1);
450 450 ASSERT(minor_ids != NULL);
451 451 minor_count = 0;
452 452
453 453 /* Let's default to 20 seconds */
454 454 mac_logging_interval = 20;
455 455 mac_flow_log_enable = B_FALSE;
456 456 mac_link_log_enable = B_FALSE;
457 457 mac_logging_timer = 0;
458 458
459 459 /* Register to be notified of noteworthy pools events */
460 460 mac_pool_event_reg.pec_func = mac_pool_event_cb;
461 461 mac_pool_event_reg.pec_arg = NULL;
462 462 pool_event_cb_register(&mac_pool_event_reg);
463 463 }
464 464
465 465 int
466 466 mac_fini(void)
467 467 {
468 468
469 469 if (i_mac_impl_count > 0 || minor_count > 0)
470 470 return (EBUSY);
471 471
472 472 pool_event_cb_unregister(&mac_pool_event_reg);
473 473
474 474 id_space_destroy(minor_ids);
475 475 mac_flow_fini();
476 476
477 477 mod_hash_destroy_hash(i_mac_impl_hash);
478 478 rw_destroy(&i_mac_impl_lock);
479 479
480 480 mac_client_fini();
481 481 kmem_cache_destroy(mac_ring_cache);
482 482
483 483 mod_hash_destroy_hash(i_mactype_hash);
484 484 mac_soft_ring_finish();
485 485
486 486
487 487 return (0);
488 488 }
489 489
490 490 /*
491 491 * Initialize a GLDv3 driver's device ops. A driver that manages its own ops
492 492 * (e.g. softmac) may pass in a NULL ops argument.
493 493 */
494 494 void
495 495 mac_init_ops(struct dev_ops *ops, const char *name)
496 496 {
497 497 major_t major = ddi_name_to_major((char *)name);
498 498
499 499 /*
500 500 * By returning on error below, we are not letting the driver continue
501 501 * in an undefined context. The mac_register() function will fail if
502 502 * DN_GLDV3_DRIVER isn't set.
503 503 */
504 504 if (major == DDI_MAJOR_T_NONE)
505 505 return;
506 506 LOCK_DEV_OPS(&devnamesp[major].dn_lock);
507 507 devnamesp[major].dn_flags |= (DN_GLDV3_DRIVER | DN_NETWORK_DRIVER);
508 508 UNLOCK_DEV_OPS(&devnamesp[major].dn_lock);
509 509 if (ops != NULL)
510 510 dld_init_ops(ops, name);
511 511 }
512 512
513 513 void
514 514 mac_fini_ops(struct dev_ops *ops)
515 515 {
516 516 dld_fini_ops(ops);
517 517 }
518 518
519 519 /*ARGSUSED*/
520 520 static int
521 521 i_mac_constructor(void *buf, void *arg, int kmflag)
522 522 {
523 523 mac_impl_t *mip = buf;
524 524
525 525 bzero(buf, sizeof (mac_impl_t));
526 526
527 527 mip->mi_linkstate = LINK_STATE_UNKNOWN;
528 528
529 529 rw_init(&mip->mi_rw_lock, NULL, RW_DRIVER, NULL);
530 530 mutex_init(&mip->mi_notify_lock, NULL, MUTEX_DRIVER, NULL);
531 531 mutex_init(&mip->mi_promisc_lock, NULL, MUTEX_DRIVER, NULL);
532 532 mutex_init(&mip->mi_ring_lock, NULL, MUTEX_DEFAULT, NULL);
533 533
534 534 mip->mi_notify_cb_info.mcbi_lockp = &mip->mi_notify_lock;
535 535 cv_init(&mip->mi_notify_cb_info.mcbi_cv, NULL, CV_DRIVER, NULL);
536 536 mip->mi_promisc_cb_info.mcbi_lockp = &mip->mi_promisc_lock;
537 537 cv_init(&mip->mi_promisc_cb_info.mcbi_cv, NULL, CV_DRIVER, NULL);
538 538
539 539 mutex_init(&mip->mi_bridge_lock, NULL, MUTEX_DEFAULT, NULL);
540 540
541 541 return (0);
542 542 }
543 543
544 544 /*ARGSUSED*/
545 545 static void
546 546 i_mac_destructor(void *buf, void *arg)
547 547 {
548 548 mac_impl_t *mip = buf;
549 549 mac_cb_info_t *mcbi;
550 550
551 551 ASSERT(mip->mi_ref == 0);
552 552 ASSERT(mip->mi_active == 0);
553 553 ASSERT(mip->mi_linkstate == LINK_STATE_UNKNOWN);
554 554 ASSERT(mip->mi_devpromisc == 0);
555 555 ASSERT(mip->mi_ksp == NULL);
556 556 ASSERT(mip->mi_kstat_count == 0);
557 557 ASSERT(mip->mi_nclients == 0);
558 558 ASSERT(mip->mi_nactiveclients == 0);
559 559 ASSERT(mip->mi_single_active_client == NULL);
560 560 ASSERT(mip->mi_state_flags == 0);
561 561 ASSERT(mip->mi_factory_addr == NULL);
562 562 ASSERT(mip->mi_factory_addr_num == 0);
563 563 ASSERT(mip->mi_default_tx_ring == NULL);
564 564
565 565 mcbi = &mip->mi_notify_cb_info;
566 566 ASSERT(mcbi->mcbi_del_cnt == 0 && mcbi->mcbi_walker_cnt == 0);
567 567 ASSERT(mip->mi_notify_bits == 0);
568 568 ASSERT(mip->mi_notify_thread == NULL);
569 569 ASSERT(mcbi->mcbi_lockp == &mip->mi_notify_lock);
570 570 mcbi->mcbi_lockp = NULL;
571 571
572 572 mcbi = &mip->mi_promisc_cb_info;
573 573 ASSERT(mcbi->mcbi_del_cnt == 0 && mip->mi_promisc_list == NULL);
574 574 ASSERT(mip->mi_promisc_list == NULL);
575 575 ASSERT(mcbi->mcbi_lockp == &mip->mi_promisc_lock);
576 576 mcbi->mcbi_lockp = NULL;
577 577
578 578 ASSERT(mip->mi_bcast_ngrps == 0 && mip->mi_bcast_grp == NULL);
579 579 ASSERT(mip->mi_perim_owner == NULL && mip->mi_perim_ocnt == 0);
580 580
581 581 rw_destroy(&mip->mi_rw_lock);
582 582
583 583 mutex_destroy(&mip->mi_promisc_lock);
584 584 cv_destroy(&mip->mi_promisc_cb_info.mcbi_cv);
585 585 mutex_destroy(&mip->mi_notify_lock);
586 586 cv_destroy(&mip->mi_notify_cb_info.mcbi_cv);
587 587 mutex_destroy(&mip->mi_ring_lock);
588 588
589 589 ASSERT(mip->mi_bridge_link == NULL);
590 590 }
591 591
592 592 /* ARGSUSED */
593 593 static int
594 594 i_mac_ring_ctor(void *buf, void *arg, int kmflag)
595 595 {
596 596 mac_ring_t *ring = (mac_ring_t *)buf;
597 597
598 598 bzero(ring, sizeof (mac_ring_t));
599 599 cv_init(&ring->mr_cv, NULL, CV_DEFAULT, NULL);
600 600 mutex_init(&ring->mr_lock, NULL, MUTEX_DEFAULT, NULL);
601 601 ring->mr_state = MR_FREE;
602 602 return (0);
603 603 }
604 604
605 605 /* ARGSUSED */
606 606 static void
607 607 i_mac_ring_dtor(void *buf, void *arg)
608 608 {
609 609 mac_ring_t *ring = (mac_ring_t *)buf;
610 610
611 611 cv_destroy(&ring->mr_cv);
612 612 mutex_destroy(&ring->mr_lock);
613 613 }
614 614
615 615 /*
616 616 * Common functions to do mac callback addition and deletion. Currently this is
617 617 * used by promisc callbacks and notify callbacks. List addition and deletion
618 618 * need to take care of list walkers. List walkers in general, can't hold list
619 619 * locks and make upcall callbacks due to potential lock order and recursive
620 620 * reentry issues. Instead list walkers increment the list walker count to mark
621 621 * the presence of a walker thread. Addition can be carefully done to ensure
622 622 * that the list walker always sees either the old list or the new list.
623 623 * However the deletion can't be done while a walker is active; instead the
624 624 * deleting thread simply marks the entry as logically deleted. The last walker
625 625 * physically deletes and frees up the logically deleted entries when the walk
626 626 * is complete.
627 627 */
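/*
 * A minimal sketch of the walker pattern described above (pseudocode;
 * the exact walker bookkeeping lives with the callers, e.g. the
 * promisc dispatch code):
 *
 * mutex_enter(mcbi->mcbi_lockp);
 * mcbi->mcbi_walker_cnt++;
 * mutex_exit(mcbi->mcbi_lockp);
 * for each mcb on the list
 * if (!(mcb->mcb_flags & MCB_CONDEMNED))
 * make the upcall
 * mutex_enter(mcbi->mcbi_lockp);
 * if (--mcbi->mcbi_walker_cnt == 0 && mcbi->mcbi_del_cnt != 0)
 * rmlist = mac_callback_walker_cleanup(mcbi, mcb_head);
 * mutex_exit(mcbi->mcbi_lockp);
 * mac_callback_free(rmlist);
 */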
628 628 void
629 629 mac_callback_add(mac_cb_info_t *mcbi, mac_cb_t **mcb_head,
630 630 mac_cb_t *mcb_elem)
631 631 {
632 632 mac_cb_t *p;
633 633 mac_cb_t **pp;
634 634
635 635 /* Verify it is not already in the list */
636 636 for (pp = mcb_head; (p = *pp) != NULL; pp = &p->mcb_nextp) {
637 637 if (p == mcb_elem)
638 638 break;
639 639 }
640 640 VERIFY(p == NULL);
641 641
642 642 /*
643 643 * Add it to the head of the callback list. The membar ensures that
644 644 * the following list pointer manipulations reach global visibility
645 645 * in exactly the program order below.
646 646 */
647 647 ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));
648 648
649 649 mcb_elem->mcb_nextp = *mcb_head;
650 650 membar_producer();
651 651 *mcb_head = mcb_elem;
652 652 }
653 653
654 654 /*
655 655 * Mark the entry as logically deleted. If there aren't any walkers, unlink
656 656 * it from the list. In either case return the corresponding status.
657 657 */
658 658 boolean_t
659 659 mac_callback_remove(mac_cb_info_t *mcbi, mac_cb_t **mcb_head,
660 660 mac_cb_t *mcb_elem)
661 661 {
662 662 mac_cb_t *p;
663 663 mac_cb_t **pp;
664 664
665 665 ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));
666 666 /*
667 667 * Search the callback list for the entry to be removed
668 668 */
669 669 for (pp = mcb_head; (p = *pp) != NULL; pp = &p->mcb_nextp) {
670 670 if (p == mcb_elem)
671 671 break;
672 672 }
673 673 VERIFY(p != NULL);
674 674
675 675 /*
676 676 * If there are walkers, just mark it as deleted and the last walker
677 677 * will remove it from the list and free it.
678 678 */
679 679 if (mcbi->mcbi_walker_cnt != 0) {
680 680 p->mcb_flags |= MCB_CONDEMNED;
681 681 mcbi->mcbi_del_cnt++;
682 682 return (B_FALSE);
683 683 }
684 684
685 685 ASSERT(mcbi->mcbi_del_cnt == 0);
686 686 *pp = p->mcb_nextp;
687 687 p->mcb_nextp = NULL;
688 688 return (B_TRUE);
689 689 }
690 690
691 691 /*
692 692 * Wait for all pending callback removals to be completed
693 693 */
694 694 void
695 695 mac_callback_remove_wait(mac_cb_info_t *mcbi)
696 696 {
697 697 ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));
698 698 while (mcbi->mcbi_del_cnt != 0) {
699 699 DTRACE_PROBE1(need_wait, mac_cb_info_t *, mcbi);
700 700 cv_wait(&mcbi->mcbi_cv, mcbi->mcbi_lockp);
701 701 }
702 702 }
703 703
704 704 /*
705 705 * The last mac callback walker does the cleanup. Walk the list and unlink
706 706 * all the logically deleted entries and construct a temporary list of
707 707 * removed entries. Return the list of removed entries to the caller.
708 708 */
709 709 mac_cb_t *
710 710 mac_callback_walker_cleanup(mac_cb_info_t *mcbi, mac_cb_t **mcb_head)
711 711 {
712 712 mac_cb_t *p;
713 713 mac_cb_t **pp;
714 714 mac_cb_t *rmlist = NULL; /* List of removed elements */
715 715 int cnt = 0;
716 716
717 717 ASSERT(MUTEX_HELD(mcbi->mcbi_lockp));
718 718 ASSERT(mcbi->mcbi_del_cnt != 0 && mcbi->mcbi_walker_cnt == 0);
719 719
720 720 pp = mcb_head;
721 721 while (*pp != NULL) {
722 722 if ((*pp)->mcb_flags & MCB_CONDEMNED) {
723 723 p = *pp;
724 724 *pp = p->mcb_nextp;
725 725 p->mcb_nextp = rmlist;
726 726 rmlist = p;
727 727 cnt++;
728 728 continue;
729 729 }
730 730 pp = &(*pp)->mcb_nextp;
731 731 }
732 732
733 733 ASSERT(mcbi->mcbi_del_cnt == cnt);
734 734 mcbi->mcbi_del_cnt = 0;
735 735 return (rmlist);
736 736 }
737 737
738 738 boolean_t
739 739 mac_callback_lookup(mac_cb_t **mcb_headp, mac_cb_t *mcb_elem)
740 740 {
741 741 mac_cb_t *mcb;
742 742
743 743 /* Search the list for the given element */
744 744 for (mcb = *mcb_headp; mcb != NULL; mcb = mcb->mcb_nextp) {
745 745 if (mcb == mcb_elem)
746 746 return (B_TRUE);
747 747 }
748 748
749 749 return (B_FALSE);
750 750 }
751 751
752 752 boolean_t
753 753 mac_callback_find(mac_cb_info_t *mcbi, mac_cb_t **mcb_headp, mac_cb_t *mcb_elem)
754 754 {
755 755 boolean_t found;
756 756
757 757 mutex_enter(mcbi->mcbi_lockp);
758 758 found = mac_callback_lookup(mcb_headp, mcb_elem);
759 759 mutex_exit(mcbi->mcbi_lockp);
760 760
761 761 return (found);
762 762 }
763 763
764 764 /* Free the list of removed callbacks */
765 765 void
766 766 mac_callback_free(mac_cb_t *rmlist)
767 767 {
768 768 mac_cb_t *mcb;
769 769 mac_cb_t *mcb_next;
770 770
771 771 for (mcb = rmlist; mcb != NULL; mcb = mcb_next) {
772 772 mcb_next = mcb->mcb_nextp;
773 773 kmem_free(mcb->mcb_objp, mcb->mcb_objsize);
774 774 }
775 775 }
776 776
777 777 /*
778 778 * The promisc callbacks are in 2 lists, one off the 'mip' and another off the
779 779 * 'mcip' threaded by mpi_mi_link and mpi_mci_link respectively. However there
780 780 * is only a single shared total walker count, and an entry can't be physically
781 781 * unlinked if a walker is active on either list. The last walker does this
782 782 * cleanup of logically deleted entries.
783 783 */
784 784 void
785 785 i_mac_promisc_walker_cleanup(mac_impl_t *mip)
786 786 {
787 787 mac_cb_t *rmlist;
788 788 mac_cb_t *mcb;
789 789 mac_cb_t *mcb_next;
790 790 mac_promisc_impl_t *mpip;
791 791
792 792 /*
793 793 * Construct a temporary list of deleted callbacks by walking
794 794 * the mi_promisc_list. Then for each entry in the temporary list,
795 795 * remove it from the mci_promisc_list and free the entry.
796 796 */
797 797 rmlist = mac_callback_walker_cleanup(&mip->mi_promisc_cb_info,
798 798 &mip->mi_promisc_list);
799 799
800 800 for (mcb = rmlist; mcb != NULL; mcb = mcb_next) {
801 801 mcb_next = mcb->mcb_nextp;
802 802 mpip = (mac_promisc_impl_t *)mcb->mcb_objp;
803 803 VERIFY(mac_callback_remove(&mip->mi_promisc_cb_info,
804 804 &mpip->mpi_mcip->mci_promisc_list, &mpip->mpi_mci_link));
805 805 mcb->mcb_flags = 0;
806 806 mcb->mcb_nextp = NULL;
807 807 kmem_cache_free(mac_promisc_impl_cache, mpip);
808 808 }
809 809 }
810 810
811 811 void
812 812 i_mac_notify(mac_impl_t *mip, mac_notify_type_t type)
813 813 {
814 814 mac_cb_info_t *mcbi;
815 815
816 816 /*
817 817 * Signal the notify thread even after mi_ref has become zero and
818 818 * mi_disabled is set. The synchronization with the notify thread
819 819 * happens in mac_unregister and that implies the driver must make
820 820 * sure it is single-threaded (with respect to mac calls) and that
821 821 * all pending mac calls have returned before it calls mac_unregister
822 822 * all pending mac calls have returned before it calls mac_unregister.
823 823 rw_enter(&i_mac_impl_lock, RW_READER);
824 824 if (mip->mi_state_flags & MIS_DISABLED)
825 825 goto exit;
826 826
827 827 /*
828 828 * Guard against incorrect notifications. (Running a newer
829 829 * mac client against an older implementation?)
830 830 */
831 831 if (type >= MAC_NNOTE)
832 832 goto exit;
833 833
834 834 mcbi = &mip->mi_notify_cb_info;
835 835 mutex_enter(mcbi->mcbi_lockp);
836 836 mip->mi_notify_bits |= (1 << type);
837 837 cv_broadcast(&mcbi->mcbi_cv);
838 838 mutex_exit(mcbi->mcbi_lockp);
839 839
840 840 exit:
841 841 rw_exit(&i_mac_impl_lock);
842 842 }
843 843
844 844 /*
845 845 * Mac serialization primitives. Please see the block comment at the
846 846 * top of the file.
847 847 */
848 848 void
849 849 i_mac_perim_enter(mac_impl_t *mip)
850 850 {
851 851 mac_client_impl_t *mcip;
852 852
853 853 if (mip->mi_state_flags & MIS_IS_VNIC) {
854 854 /*
855 855 * This is a VNIC. Use the lower mac since that is what
856 856 * we want to serialize on.
857 857 */
858 858 mcip = mac_vnic_lower(mip);
859 859 mip = mcip->mci_mip;
860 860 }
861 861
862 862 mutex_enter(&mip->mi_perim_lock);
863 863 if (mip->mi_perim_owner == curthread) {
864 864 mip->mi_perim_ocnt++;
865 865 mutex_exit(&mip->mi_perim_lock);
866 866 return;
867 867 }
868 868
869 869 while (mip->mi_perim_owner != NULL)
870 870 cv_wait(&mip->mi_perim_cv, &mip->mi_perim_lock);
871 871
872 872 mip->mi_perim_owner = curthread;
873 873 ASSERT(mip->mi_perim_ocnt == 0);
874 874 mip->mi_perim_ocnt++;
875 875 #ifdef DEBUG
876 876 mip->mi_perim_stack_depth = getpcstack(mip->mi_perim_stack,
877 877 MAC_PERIM_STACK_DEPTH);
878 878 #endif
879 879 mutex_exit(&mip->mi_perim_lock);
880 880 }
881 881
882 882 int
883 883 i_mac_perim_enter_nowait(mac_impl_t *mip)
884 884 {
885 885 /*
886 886 * The vnic is a special case, since the serialization is done based
887 887 * on the lower mac. If the lower mac is busy, it does not imply the
888 888 * vnic can't be unregistered. But in the case of other drivers,
889 889 * a busy perimeter or open mac handles implies that the mac is busy
890 890 * and can't be unregistered.
891 891 */
892 892 if (mip->mi_state_flags & MIS_IS_VNIC) {
893 893 i_mac_perim_enter(mip);
894 894 return (0);
895 895 }
896 896
897 897 mutex_enter(&mip->mi_perim_lock);
898 898 if (mip->mi_perim_owner != NULL) {
899 899 mutex_exit(&mip->mi_perim_lock);
900 900 return (EBUSY);
901 901 }
902 902 ASSERT(mip->mi_perim_ocnt == 0);
903 903 mip->mi_perim_owner = curthread;
904 904 mip->mi_perim_ocnt++;
905 905 mutex_exit(&mip->mi_perim_lock);
906 906
907 907 return (0);
908 908 }
909 909
910 910 void
911 911 i_mac_perim_exit(mac_impl_t *mip)
912 912 {
913 913 mac_client_impl_t *mcip;
914 914
915 915 if (mip->mi_state_flags & MIS_IS_VNIC) {
916 916 /*
917 917 * This is a VNIC. Use the lower mac since that is what
918 918 * we want to serialize on.
919 919 */
920 920 mcip = mac_vnic_lower(mip);
921 921 mip = mcip->mci_mip;
922 922 }
923 923
924 924 ASSERT(mip->mi_perim_owner == curthread && mip->mi_perim_ocnt != 0);
925 925
926 926 mutex_enter(&mip->mi_perim_lock);
927 927 if (--mip->mi_perim_ocnt == 0) {
928 928 mip->mi_perim_owner = NULL;
929 929 cv_signal(&mip->mi_perim_cv);
930 930 }
931 931 mutex_exit(&mip->mi_perim_lock);
932 932 }
933 933
934 934 /*
935 935 * Returns whether the current thread holds the mac perimeter. Used in making
936 936 * assertions.
937 937 */
938 938 boolean_t
939 939 mac_perim_held(mac_handle_t mh)
940 940 {
941 941 mac_impl_t *mip = (mac_impl_t *)mh;
942 942 mac_client_impl_t *mcip;
943 943
944 944 if (mip->mi_state_flags & MIS_IS_VNIC) {
945 945 /*
946 946 * This is a VNIC. Use the lower mac since that is what
947 947 * we want to serialize on.
948 948 */
949 949 mcip = mac_vnic_lower(mip);
950 950 mip = mcip->mci_mip;
951 951 }
952 952 return (mip->mi_perim_owner == curthread);
953 953 }
954 954
955 955 /*
956 956 * mac client interfaces to enter the mac perimeter of a mac end point, given
957 957 * its mac handle, macname, or linkid.
958 958 */
959 959 void
960 960 mac_perim_enter_by_mh(mac_handle_t mh, mac_perim_handle_t *mphp)
961 961 {
962 962 mac_impl_t *mip = (mac_impl_t *)mh;
963 963
964 964 i_mac_perim_enter(mip);
965 965 /*
966 966 * The mac_perim_handle_t returned encodes the 'mip' and whether a
967 967 * mac_open has been done internally while entering the perimeter.
968 968 * This information is used in mac_perim_exit
969 969 */
970 970 MAC_ENCODE_MPH(*mphp, mip, 0);
971 971 }
972 972
973 973 int
974 974 mac_perim_enter_by_macname(const char *name, mac_perim_handle_t *mphp)
975 975 {
976 976 int err;
977 977 mac_handle_t mh;
978 978
979 979 if ((err = mac_open(name, &mh)) != 0)
980 980 return (err);
981 981
982 982 mac_perim_enter_by_mh(mh, mphp);
983 983 MAC_ENCODE_MPH(*mphp, mh, 1);
984 984 return (0);
985 985 }
986 986
987 987 int
988 988 mac_perim_enter_by_linkid(datalink_id_t linkid, mac_perim_handle_t *mphp)
989 989 {
990 990 int err;
991 991 mac_handle_t mh;
992 992
993 993 if ((err = mac_open_by_linkid(linkid, &mh)) != 0)
994 994 return (err);
995 995
996 996 mac_perim_enter_by_mh(mh, mphp);
997 997 MAC_ENCODE_MPH(*mphp, mh, 1);
998 998 return (0);
999 999 }
1000 1000
1001 1001 void
1002 1002 mac_perim_exit(mac_perim_handle_t mph)
1003 1003 {
1004 1004 mac_impl_t *mip;
1005 1005 boolean_t need_close;
1006 1006
1007 1007 MAC_DECODE_MPH(mph, mip, need_close);
1008 1008 i_mac_perim_exit(mip);
1009 1009 if (need_close)
1010 1010 mac_close((mac_handle_t)mip);
1011 1011 }
1012 1012
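/*
 * Typical client usage of the perimeter interfaces above (a sketch):
 *
 * mac_perim_handle_t mph;
 *
 * mac_perim_enter_by_mh(mh, &mph);
 * ... a sequence of mac control calls made atomic as a block ...
 * mac_perim_exit(mph);
 *
 * The by-macname and by-linkid variants additionally mac_open the end
 * point on entry and mac_close it in mac_perim_exit.
 */
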
1013 1013 int
1014 1014 mac_hold(const char *macname, mac_impl_t **pmip)
1015 1015 {
1016 1016 mac_impl_t *mip;
1017 1017 int err;
1018 1018
1019 1019 /*
1020 1020 * Check the device name length to make sure it won't overflow our
1021 1021 * buffer.
1022 1022 */
1023 1023 if (strlen(macname) >= MAXNAMELEN)
1024 1024 return (EINVAL);
1025 1025
1026 1026 /*
1027 1027 * Look up its entry in the global hash table.
1028 1028 */
1029 1029 rw_enter(&i_mac_impl_lock, RW_WRITER);
1030 1030 err = mod_hash_find(i_mac_impl_hash, (mod_hash_key_t)macname,
1031 1031 (mod_hash_val_t *)&mip);
1032 1032
1033 1033 if (err != 0) {
1034 1034 rw_exit(&i_mac_impl_lock);
1035 1035 return (ENOENT);
1036 1036 }
1037 1037
1038 1038 if (mip->mi_state_flags & MIS_DISABLED) {
1039 1039 rw_exit(&i_mac_impl_lock);
1040 1040 return (ENOENT);
1041 1041 }
1042 1042
1043 1043 if (mip->mi_state_flags & MIS_EXCLUSIVE_HELD) {
1044 1044 rw_exit(&i_mac_impl_lock);
1045 1045 return (EBUSY);
1046 1046 }
1047 1047
1048 1048 mip->mi_ref++;
1049 1049 rw_exit(&i_mac_impl_lock);
1050 1050
1051 1051 *pmip = mip;
1052 1052 return (0);
1053 1053 }
1054 1054
1055 1055 void
1056 1056 mac_rele(mac_impl_t *mip)
1057 1057 {
1058 1058 rw_enter(&i_mac_impl_lock, RW_WRITER);
1059 1059 ASSERT(mip->mi_ref != 0);
1060 1060 if (--mip->mi_ref == 0) {
1061 1061 ASSERT(mip->mi_nactiveclients == 0 &&
1062 1062 !(mip->mi_state_flags & MIS_EXCLUSIVE));
1063 1063 }
1064 1064 rw_exit(&i_mac_impl_lock);
1065 1065 }
1066 1066
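/*
 * Illustrative mac_hold/mac_rele pairing (a sketch):
 *
 * mac_impl_t *mip;
 *
 * if (mac_hold(macname, &mip) == 0) {
 * ... use mip; mi_ref keeps it from going away ...
 * mac_rele(mip);
 * }
 */
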
1067 1067 /*
1068 1068 * Private GLDv3 function to start a MAC instance.
1069 1069 */
1070 1070 int
1071 1071 mac_start(mac_handle_t mh)
1072 1072 {
1073 1073 mac_impl_t *mip = (mac_impl_t *)mh;
1074 1074 int err = 0;
1075 1075 mac_group_t *defgrp;
1076 1076
1077 1077 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
1078 1078 ASSERT(mip->mi_start != NULL);
1079 1079
1080 1080 /*
1081 1081 * Check whether the device is already started.
1082 1082 */
1083 1083 if (mip->mi_active++ == 0) {
1084 1084 mac_ring_t *ring = NULL;
1085 1085
1086 1086 /*
1087 1087 * Start the device.
1088 1088 */
1089 1089 err = mip->mi_start(mip->mi_driver);
1090 1090 if (err != 0) {
1091 1091 mip->mi_active--;
1092 1092 return (err);
1093 1093 }
1094 1094
1095 1095 /*
1096 1096 * Start the default tx ring.
1097 1097 */
1098 1098 if (mip->mi_default_tx_ring != NULL) {
1099 1099
1100 1100 ring = (mac_ring_t *)mip->mi_default_tx_ring;
1101 1101 if (ring->mr_state != MR_INUSE) {
1102 1102 err = mac_start_ring(ring);
1103 1103 if (err != 0) {
1104 1104 mip->mi_active--;
1105 1105 return (err);
1106 1106 }
1107 1107 }
1108 1108 }
1109 1109
1110 1110 if ((defgrp = MAC_DEFAULT_RX_GROUP(mip)) != NULL) {
1111 1111 /*
1112 1112 * Start the default ring, since it will be needed
1113 1113 * to receive broadcast and multicast traffic for
1114 1114 * both primary and non-primary MAC clients.
1115 1115 */
1116 1116 ASSERT(defgrp->mrg_state == MAC_GROUP_STATE_REGISTERED);
1117 1117 err = mac_start_group_and_rings(defgrp);
1118 1118 if (err != 0) {
1119 1119 mip->mi_active--;
1120 1120 if ((ring != NULL) &&
1121 1121 (ring->mr_state == MR_INUSE))
1122 1122 mac_stop_ring(ring);
1123 1123 return (err);
1124 1124 }
1125 1125 mac_set_group_state(defgrp, MAC_GROUP_STATE_SHARED);
1126 1126 }
1127 1127 }
1128 1128
1129 1129 return (err);
1130 1130 }
1131 1131
1132 1132 /*
1133 1133 * Private GLDv3 function to stop a MAC instance.
1134 1134 */
1135 1135 void
1136 1136 mac_stop(mac_handle_t mh)
1137 1137 {
1138 1138 mac_impl_t *mip = (mac_impl_t *)mh;
1139 1139 mac_group_t *grp;
1140 1140
1141 1141 ASSERT(mip->mi_stop != NULL);
1142 1142 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
1143 1143
1144 1144 /*
1145 1145 * Check whether the device is still needed.
1146 1146 */
1147 1147 ASSERT(mip->mi_active != 0);
1148 1148 if (--mip->mi_active == 0) {
1149 1149 if ((grp = MAC_DEFAULT_RX_GROUP(mip)) != NULL) {
1150 1150 /*
1151 1151 * There should be no more active clients since the
1152 1152 * MAC is being stopped. Stop the default RX group
1153 1153 * and transition it back to registered state.
1154 1154 *
1155 1155 * When clients are torn down, the groups
1156 1156 * are released via mac_release_rx_group which
1157 1157 * knows that the default group is always in
1158 1158 * started mode since broadcast uses it. So
1159 1159 * we can assert that there are no clients
1160 1160 * (since mac_bcast_add doesn't register itself
1161 1161 * as a client) and the group is in SHARED state.
1162 1162 */
1163 1163 ASSERT(grp->mrg_state == MAC_GROUP_STATE_SHARED);
1164 1164 ASSERT(MAC_GROUP_NO_CLIENT(grp) &&
1165 1165 mip->mi_nactiveclients == 0);
1166 1166 mac_stop_group_and_rings(grp);
1167 1167 mac_set_group_state(grp, MAC_GROUP_STATE_REGISTERED);
1168 1168 }
1169 1169
1170 1170 if (mip->mi_default_tx_ring != NULL) {
1171 1171 mac_ring_t *ring;
1172 1172
1173 1173 ring = (mac_ring_t *)mip->mi_default_tx_ring;
1174 1174 if (ring->mr_state == MR_INUSE) {
1175 1175 mac_stop_ring(ring);
1176 1176 ring->mr_flag = 0;
1177 1177 }
1178 1178 }
1179 1179
1180 1180 /*
1181 1181 * Stop the device.
1182 1182 */
1183 1183 mip->mi_stop(mip->mi_driver);
1184 1184 }
1185 1185 }
1186 1186
1187 1187 int
1188 1188 i_mac_promisc_set(mac_impl_t *mip, boolean_t on)
1189 1189 {
1190 1190 int err = 0;
1191 1191
1192 1192 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
1193 1193 ASSERT(mip->mi_setpromisc != NULL);
1194 1194
1195 1195 if (on) {
1196 1196 /*
1197 1197 * Enable promiscuous mode on the device if not yet enabled.
1198 1198 */
1199 1199 if (mip->mi_devpromisc++ == 0) {
1200 1200 err = mip->mi_setpromisc(mip->mi_driver, B_TRUE);
1201 1201 if (err != 0) {
1202 1202 mip->mi_devpromisc--;
1203 1203 return (err);
1204 1204 }
1205 1205 i_mac_notify(mip, MAC_NOTE_DEVPROMISC);
1206 1206 }
1207 1207 } else {
1208 1208 if (mip->mi_devpromisc == 0)
1209 1209 return (EPROTO);
1210 1210
1211 1211 /*
1212 1212 * Disable promiscuous mode on the device if this is the last
1213 1213 * enabling.
1214 1214 */
1215 1215 if (--mip->mi_devpromisc == 0) {
1216 1216 err = mip->mi_setpromisc(mip->mi_driver, B_FALSE);
1217 1217 if (err != 0) {
1218 1218 mip->mi_devpromisc++;
1219 1219 return (err);
1220 1220 }
1221 1221 i_mac_notify(mip, MAC_NOTE_DEVPROMISC);
1222 1222 }
1223 1223 }
1224 1224
1225 1225 return (0);
1226 1226 }
1227 1227
1228 1228 /*
1229 1229 * The promiscuity state can change any time. If the caller needs to take
1230 1230 * actions that are atomic with the promiscuity state, then the caller needs
1231 1231 * to bracket the entire sequence with mac_perim_enter/exit.
1232 1232 */
1233 1233 boolean_t
1234 1234 mac_promisc_get(mac_handle_t mh)
1235 1235 {
1236 1236 mac_impl_t *mip = (mac_impl_t *)mh;
1237 1237
1238 1238 /*
1239 1239 * Return the current promiscuity.
1240 1240 */
1241 1241 return (mip->mi_devpromisc != 0);
1242 1242 }
1243 1243
1244 1244 /*
1245 1245 * Invoked at MAC instance attach time to initialize the list
1246 1246 * of factory MAC addresses supported by a MAC instance. This function
1247 1247 * builds a local cache in the mac_impl_t for the MAC addresses
1248 1248 * supported by the underlying hardware. The MAC clients themselves
1249 1249 * use the mac_addr_factory*() functions to query and reserve
1250 1250 * factory MAC addresses.
1251 1251 */
1252 1252 void
1253 1253 mac_addr_factory_init(mac_impl_t *mip)
1254 1254 {
1255 1255 mac_capab_multifactaddr_t capab;
1256 1256 uint8_t *addr;
1257 1257 int i;
1258 1258
1259 1259 /*
1260 1260 * First round to see how many factory MAC addresses are available.
1261 1261 */
1262 1262 bzero(&capab, sizeof (capab));
1263 1263 if (!i_mac_capab_get((mac_handle_t)mip, MAC_CAPAB_MULTIFACTADDR,
1264 1264 &capab) || (capab.mcm_naddr == 0)) {
1265 1265 /*
1266 1266 * The MAC instance doesn't support multiple factory
1267 1267 * MAC addresses, we're done here.
1268 1268 */
1269 1269 return;
1270 1270 }
1271 1271
1272 1272 /*
1273 1273 * Allocate the space and get all the factory addresses.
1274 1274 */
1275 1275 addr = kmem_alloc(capab.mcm_naddr * MAXMACADDRLEN, KM_SLEEP);
1276 1276 capab.mcm_getaddr(mip->mi_driver, capab.mcm_naddr, addr);
1277 1277
1278 1278 mip->mi_factory_addr_num = capab.mcm_naddr;
1279 1279 mip->mi_factory_addr = kmem_zalloc(mip->mi_factory_addr_num *
1280 1280 sizeof (mac_factory_addr_t), KM_SLEEP);
1281 1281
1282 1282 for (i = 0; i < capab.mcm_naddr; i++) {
1283 1283 bcopy(addr + i * MAXMACADDRLEN,
1284 1284 mip->mi_factory_addr[i].mfa_addr,
1285 1285 mip->mi_type->mt_addr_length);
1286 1286 mip->mi_factory_addr[i].mfa_in_use = B_FALSE;
1287 1287 }
1288 1288
1289 1289 kmem_free(addr, capab.mcm_naddr * MAXMACADDRLEN);
1290 1290 }
1291 1291
1292 1292 void
1293 1293 mac_addr_factory_fini(mac_impl_t *mip)
1294 1294 {
1295 1295 if (mip->mi_factory_addr == NULL) {
1296 1296 ASSERT(mip->mi_factory_addr_num == 0);
1297 1297 return;
1298 1298 }
1299 1299
1300 1300 kmem_free(mip->mi_factory_addr, mip->mi_factory_addr_num *
1301 1301 sizeof (mac_factory_addr_t));
1302 1302
1303 1303 mip->mi_factory_addr = NULL;
1304 1304 mip->mi_factory_addr_num = 0;
1305 1305 }
1306 1306
1307 1307 /*
1308 1308 * Reserve a factory MAC address. If *slot is set to -1, the function
1309 1309 * attempts to reserve any of the available factory MAC addresses and
1310 1310 * returns the reserved slot id. If no slots are available, the function
1311 1311 * returns ENOSPC. If *slot is not set to -1, the function reserves
1312 1312 * the specified slot if it is available, or returns EBUSY if the slot
1313 1313 * is already used. Returns ENOTSUP if the underlying MAC does not
1314 1314 * support multiple factory addresses. If the slot number is not -1 but
1315 1315 * is invalid, returns EINVAL.
1316 1316 */
1317 1317 int
1318 1318 mac_addr_factory_reserve(mac_client_handle_t mch, int *slot)
1319 1319 {
1320 1320 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
1321 1321 mac_impl_t *mip = mcip->mci_mip;
1322 1322 int i, ret = 0;
1323 1323
1324 1324 i_mac_perim_enter(mip);
1325 1325 /*
1326 1326 * Protect against concurrent readers that may need a self-consistent
1327 1327 * view of the factory addresses
1328 1328 */
1329 1329 rw_enter(&mip->mi_rw_lock, RW_WRITER);
1330 1330
1331 1331 if (mip->mi_factory_addr_num == 0) {
1332 1332 ret = ENOTSUP;
1333 1333 goto bail;
1334 1334 }
1335 1335
1336 1336 if (*slot != -1) {
1337 1337 /* check the specified slot */
1338 1338 if (*slot < 1 || *slot > mip->mi_factory_addr_num) {
1339 1339 ret = EINVAL;
1340 1340 goto bail;
1341 1341 }
1342 1342 if (mip->mi_factory_addr[*slot-1].mfa_in_use) {
1343 1343 ret = EBUSY;
1344 1344 goto bail;
1345 1345 }
1346 1346 } else {
1347 1347 /* pick the next available slot */
1348 1348 for (i = 0; i < mip->mi_factory_addr_num; i++) {
1349 1349 if (!mip->mi_factory_addr[i].mfa_in_use)
1350 1350 break;
1351 1351 }
1352 1352
1353 1353 if (i == mip->mi_factory_addr_num) {
1354 1354 ret = ENOSPC;
1355 1355 goto bail;
1356 1356 }
1357 1357 *slot = i+1;
1358 1358 }
1359 1359
1360 1360 mip->mi_factory_addr[*slot-1].mfa_in_use = B_TRUE;
1361 1361 mip->mi_factory_addr[*slot-1].mfa_client = mcip;
1362 1362
1363 1363 bail:
1364 1364 rw_exit(&mip->mi_rw_lock);
1365 1365 i_mac_perim_exit(mip);
1366 1366 return (ret);
1367 1367 }
1368 1368
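/*
 * Illustrative use of the reservation interface above (a sketch):
 *
 * int slot = -1; -1 means "any available slot"
 *
 * if (mac_addr_factory_reserve(mch, &slot) == 0) {
 * slot now holds the reserved 1-based slot id
 * ...
 * mac_addr_factory_release(mch, slot);
 * }
 */
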
1369 1369 /*
1370 1370 * Release the specified factory MAC address slot.
1371 1371 */
1372 1372 void
1373 1373 mac_addr_factory_release(mac_client_handle_t mch, uint_t slot)
1374 1374 {
1375 1375 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
1376 1376 mac_impl_t *mip = mcip->mci_mip;
1377 1377
1378 1378 i_mac_perim_enter(mip);
1379 1379 /*
1380 1380 * Protect against concurrent readers that may need a self-consistent
1381 1381 * view of the factory addresses
1382 1382 */
1383 1383 rw_enter(&mip->mi_rw_lock, RW_WRITER);
1384 1384
1385 1385 ASSERT(slot > 0 && slot <= mip->mi_factory_addr_num);
1386 1386 ASSERT(mip->mi_factory_addr[slot-1].mfa_in_use);
1387 1387
1388 1388 mip->mi_factory_addr[slot-1].mfa_in_use = B_FALSE;
1389 1389
1390 1390 rw_exit(&mip->mi_rw_lock);
1391 1391 i_mac_perim_exit(mip);
1392 1392 }
1393 1393
1394 1394 /*
1395 1395 * Stores in mac_addr the value of the specified MAC address and in
1396 1396 * addr_len its length; the slot number must be valid for the MAC.
1397 1397 * The caller must provide a client_name string of at least MAXNAMELEN bytes.
1398 1398 */
1399 1399 void
1400 1400 mac_addr_factory_value(mac_handle_t mh, int slot, uchar_t *mac_addr,
1401 1401 uint_t *addr_len, char *client_name, boolean_t *in_use_arg)
1402 1402 {
1403 1403 mac_impl_t *mip = (mac_impl_t *)mh;
1404 1404 boolean_t in_use;
1405 1405
1406 1406 ASSERT(slot > 0 && slot <= mip->mi_factory_addr_num);
1407 1407
1408 1408 /*
1409 1409 * Readers need to hold mi_rw_lock. Writers need to hold mac perimeter
1410 1410 * and mi_rw_lock
1411 1411 */
1412 1412 rw_enter(&mip->mi_rw_lock, RW_READER);
1413 1413 bcopy(mip->mi_factory_addr[slot-1].mfa_addr, mac_addr, MAXMACADDRLEN);
1414 1414 *addr_len = mip->mi_type->mt_addr_length;
1415 1415 in_use = mip->mi_factory_addr[slot-1].mfa_in_use;
1416 1416 if (in_use && client_name != NULL) {
1417 1417 bcopy(mip->mi_factory_addr[slot-1].mfa_client->mci_name,
1418 1418 client_name, MAXNAMELEN);
1419 1419 }
1420 1420 if (in_use_arg != NULL)
1421 1421 *in_use_arg = in_use;
1422 1422 rw_exit(&mip->mi_rw_lock);
1423 1423 }
1424 1424
1425 1425 /*
1426 1426 * Returns the number of factory MAC addresses (in addition to the
1427 1427 * primary MAC address), 0 if the underlying MAC doesn't support
1428 1428 * that feature.
1429 1429 */
1430 1430 uint_t
1431 1431 mac_addr_factory_num(mac_handle_t mh)
1432 1432 {
1433 1433 mac_impl_t *mip = (mac_impl_t *)mh;
1434 1434
1435 1435 return (mip->mi_factory_addr_num);
1436 1436 }
1437 1437
1438 1438
1439 1439 void
1440 1440 mac_rx_group_unmark(mac_group_t *grp, uint_t flag)
1441 1441 {
1442 1442 mac_ring_t *ring;
1443 1443
1444 1444 for (ring = grp->mrg_rings; ring != NULL; ring = ring->mr_next)
1445 1445 ring->mr_flag &= ~flag;
1446 1446 }
1447 1447
1448 1448 /*
1449 1449 * The following mac_hwrings_xxx() functions are private mac client functions
1450 1450 * used by the aggr driver to access and control the underlying HW Rx group
1451 1451 * and rings. In this case, the aggr driver has exclusive control of the
1452 1452 * underlying HW Rx group/rings; it calls the following functions to
1453 1453 * start/stop the HW Rx rings, disable/enable polling, add/remove MAC
1454 1454 * addresses, or set up the Rx callback.
1455 1455 */
1456 1456 /* ARGSUSED */
1457 1457 static void
1458 1458 mac_hwrings_rx_process(void *arg, mac_resource_handle_t srs,
1459 1459 mblk_t *mp_chain, boolean_t loopback)
1460 1460 {
1461 1461 mac_soft_ring_set_t *mac_srs = (mac_soft_ring_set_t *)srs;
1462 1462 mac_srs_rx_t *srs_rx = &mac_srs->srs_rx;
1463 1463 mac_direct_rx_t proc;
1464 1464 void *arg1;
1465 1465 mac_resource_handle_t arg2;
1466 1466
1467 1467 proc = srs_rx->sr_func;
1468 1468 arg1 = srs_rx->sr_arg1;
1469 1469 arg2 = mac_srs->srs_mrh;
1470 1470
1471 1471 proc(arg1, arg2, mp_chain, NULL);
1472 1472 }
1473 1473
1474 1474 /*
1475 1475 * This function is called to get the list of HW rings that are reserved by
1476 1476 * an exclusive mac client.
1477 1477 *
1478 1478 * Return value: the number of HW rings.
1479 1479 */
1480 1480 int
1481 1481 mac_hwrings_get(mac_client_handle_t mch, mac_group_handle_t *hwgh,
1482 1482 mac_ring_handle_t *hwrh, mac_ring_type_t rtype)
1483 1483 {
1484 1484 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
1485 1485 flow_entry_t *flent = mcip->mci_flent;
1486 1486 mac_group_t *grp;
1487 1487 mac_ring_t *ring;
1488 1488 int cnt = 0;
1489 1489
1490 1490 if (rtype == MAC_RING_TYPE_RX) {
1491 1491 grp = flent->fe_rx_ring_group;
1492 1492 } else if (rtype == MAC_RING_TYPE_TX) {
1493 1493 grp = flent->fe_tx_ring_group;
1494 1494 } else {
1495 1495 ASSERT(B_FALSE);
1496 1496 return (-1);
1497 1497 }
1498 1498 /*
1499 1499 * If the mac client did not reserve any RX group, return directly.
1500 1500 * This is probably because the underlying MAC does not support
1501 1501 * any groups.
1502 1502 */
1503 1503 if (hwgh != NULL)
1504 1504 *hwgh = NULL;
1505 1505 if (grp == NULL)
1506 1506 return (0);
1507 1507 /*
1508 1508 * This group must be reserved by this mac client.
1509 1509 */
1510 1510 ASSERT((grp->mrg_state == MAC_GROUP_STATE_RESERVED) &&
1511 1511 (mcip == MAC_GROUP_ONLY_CLIENT(grp)));
1512 1512
1513 1513 for (ring = grp->mrg_rings; ring != NULL; ring = ring->mr_next, cnt++) {
1514 1514 ASSERT(cnt < MAX_RINGS_PER_GROUP);
1515 1515 hwrh[cnt] = (mac_ring_handle_t)ring;
1516 1516 }
1517 1517 if (hwgh != NULL)
1518 1518 *hwgh = (mac_group_handle_t)grp;
1519 1519
1520 1520 return (cnt);
1521 1521 }
1522 1522
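/*
 * Illustrative caller of mac_hwrings_get() (a sketch of how a client
 * such as aggr might use it, not a quote of the aggr code):
 *
 * mac_ring_handle_t hwrh[MAX_RINGS_PER_GROUP];
 * mac_group_handle_t hwgh;
 * int i, cnt;
 *
 * cnt = mac_hwrings_get(mch, &hwgh, hwrh, MAC_RING_TYPE_RX);
 * for (i = 0; i < cnt; i++)
 * (void) mac_hwring_start(hwrh[i]);
 */
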
1523 1523 /*
1524 1524 * This function is called to get info about Tx/Rx rings.
1525 1525 *
1526 1526 * Return value: returns uint_t which will have various bits set
1527 1527 * that indicates different properties of the ring.
1528 1528 * that indicate different properties of the ring.
1529 1529 uint_t
1530 1530 mac_hwring_getinfo(mac_ring_handle_t rh)
1531 1531 {
1532 1532 mac_ring_t *ring = (mac_ring_t *)rh;
1533 1533 mac_ring_info_t *info = &ring->mr_info;
1534 1534
1535 1535 return (info->mri_flags);
1536 1536 }
1537 1537
1538 1538 /*
1539 1539 * Export ddi interrupt handles from the HW ring to the pseudo ring and
1540 1540  * set up the Rx callback of the mac client that exclusively controls
1541 1541  * the HW ring.
1542 1542 */
1543 1543 void
1544 1544 mac_hwring_setup(mac_ring_handle_t hwrh, mac_resource_handle_t prh,
1545 1545 mac_ring_handle_t pseudo_rh)
1546 1546 {
1547 1547 mac_ring_t *hw_ring = (mac_ring_t *)hwrh;
1548 1548 mac_ring_t *pseudo_ring;
1549 1549 mac_soft_ring_set_t *mac_srs = hw_ring->mr_srs;
1550 1550
1551 1551 if (pseudo_rh != NULL) {
1552 1552 pseudo_ring = (mac_ring_t *)pseudo_rh;
1553 1553 /* Export the ddi handles to pseudo ring */
1554 1554 pseudo_ring->mr_info.mri_intr.mi_ddi_handle =
1555 1555 hw_ring->mr_info.mri_intr.mi_ddi_handle;
1556 1556 pseudo_ring->mr_info.mri_intr.mi_ddi_shared =
1557 1557 hw_ring->mr_info.mri_intr.mi_ddi_shared;
1558 1558 /*
1559 1559 * Save a pointer to pseudo ring in the hw ring. If
1560 1560 * interrupt handle changes, the hw ring will be
1561 1561 * notified of the change (see mac_ring_intr_set())
1562 1562 * and the appropriate change has to be made to
1563 1563 * the pseudo ring that has exported the ddi handle.
1564 1564 */
1565 1565 hw_ring->mr_prh = pseudo_rh;
1566 1566 }
1567 1567
1568 1568 if (hw_ring->mr_type == MAC_RING_TYPE_RX) {
1569 1569 ASSERT(!(mac_srs->srs_type & SRST_TX));
1570 1570 mac_srs->srs_mrh = prh;
1571 1571 mac_srs->srs_rx.sr_lower_proc = mac_hwrings_rx_process;
1572 1572 }
1573 1573 }
1574 1574
1575 1575 void
1576 1576 mac_hwring_teardown(mac_ring_handle_t hwrh)
1577 1577 {
1578 1578 mac_ring_t *hw_ring = (mac_ring_t *)hwrh;
1579 1579 mac_soft_ring_set_t *mac_srs;
1580 1580
1581 1581 if (hw_ring == NULL)
1582 1582 return;
1583 1583 hw_ring->mr_prh = NULL;
1584 1584 if (hw_ring->mr_type == MAC_RING_TYPE_RX) {
1585 1585 mac_srs = hw_ring->mr_srs;
1586 1586 ASSERT(!(mac_srs->srs_type & SRST_TX));
1587 1587 mac_srs->srs_rx.sr_lower_proc = mac_rx_srs_process;
1588 1588 mac_srs->srs_mrh = NULL;
1589 1589 }
1590 1590 }
1591 1591
1592 1592 int
1593 1593 mac_hwring_disable_intr(mac_ring_handle_t rh)
1594 1594 {
1595 1595 mac_ring_t *rr_ring = (mac_ring_t *)rh;
1596 1596 mac_intr_t *intr = &rr_ring->mr_info.mri_intr;
1597 1597
1598 1598 return (intr->mi_disable(intr->mi_handle));
1599 1599 }
1600 1600
1601 1601 int
1602 1602 mac_hwring_enable_intr(mac_ring_handle_t rh)
1603 1603 {
1604 1604 mac_ring_t *rr_ring = (mac_ring_t *)rh;
1605 1605 mac_intr_t *intr = &rr_ring->mr_info.mri_intr;
1606 1606
1607 1607 return (intr->mi_enable(intr->mi_handle));
1608 1608 }
1609 1609
1610 1610 int
1611 1611 mac_hwring_start(mac_ring_handle_t rh)
1612 1612 {
1613 1613 mac_ring_t *rr_ring = (mac_ring_t *)rh;
1614 1614
1615 1615 MAC_RING_UNMARK(rr_ring, MR_QUIESCE);
1616 1616 return (0);
1617 1617 }
1618 1618
1619 1619 void
1620 1620 mac_hwring_stop(mac_ring_handle_t rh)
1621 1621 {
1622 1622 mac_ring_t *rr_ring = (mac_ring_t *)rh;
1623 1623
1624 1624 mac_rx_ring_quiesce(rr_ring, MR_QUIESCE);
1625 1625 }
1626 1626
1627 1627 mblk_t *
1628 1628 mac_hwring_poll(mac_ring_handle_t rh, int bytes_to_pickup)
1629 1629 {
1630 1630 mac_ring_t *rr_ring = (mac_ring_t *)rh;
1631 1631 mac_ring_info_t *info = &rr_ring->mr_info;
1632 1632
1633 1633 return (info->mri_poll(info->mri_driver, bytes_to_pickup));
1634 1634 }
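
/*
 * Illustrative sketch, not part of this change: the poll-mode sequence
 * these entry points enable. The exclusive client turns the HW
 * interrupt off, drains packets through the poll entry point, then
 * re-enables the interrupt. The byte budget shown is an arbitrary
 * example value.
 */
static mblk_t *
hwring_poll_once_sketch(mac_ring_handle_t rh)
{
	mblk_t	*chain;

	(void) mac_hwring_disable_intr(rh);
	chain = mac_hwring_poll(rh, 128 * 1024);
	(void) mac_hwring_enable_intr(rh);
	return (chain);
}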
1635 1635
1636 1636 /*
1637 1637 * Send packets through a selected tx ring.
1638 1638 */
1639 1639 mblk_t *
1640 1640 mac_hwring_tx(mac_ring_handle_t rh, mblk_t *mp)
1641 1641 {
1642 1642 mac_ring_t *ring = (mac_ring_t *)rh;
1643 1643 mac_ring_info_t *info = &ring->mr_info;
1644 1644
1645 1645 ASSERT(ring->mr_type == MAC_RING_TYPE_TX &&
1646 1646 ring->mr_state >= MR_INUSE);
1647 1647 return (info->mri_tx(info->mri_driver, mp));
1648 1648 }
1649 1649
1650 1650 /*
1651 1651 * Query stats for a particular rx/tx ring
1652 1652 */
1653 1653 int
1654 1654 mac_hwring_getstat(mac_ring_handle_t rh, uint_t stat, uint64_t *val)
1655 1655 {
1656 1656 mac_ring_t *ring = (mac_ring_t *)rh;
1657 1657 mac_ring_info_t *info = &ring->mr_info;
1658 1658
1659 1659 return (info->mri_stat(info->mri_driver, stat, val));
1660 1660 }
1661 1661
1662 1662 /*
1663 1663 * Private function that is only used by aggr to send packets through
1664 1664 * a port/Tx ring. Since aggr exposes a pseudo Tx ring even for ports
1665 1665  * that do not expose Tx rings, the aggr_ring_tx() entry point needs
1666 1666  * access to the mac_impl_t to send packets through the m_tx() entry
1667 1667  * point. It accomplishes this by calling mac_hwring_send_priv().
1668 1668 */
1669 1669 mblk_t *
1670 1670 mac_hwring_send_priv(mac_client_handle_t mch, mac_ring_handle_t rh, mblk_t *mp)
1671 1671 {
1672 1672 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
1673 1673 mac_impl_t *mip = mcip->mci_mip;
1674 1674
1675 1675 MAC_TX(mip, rh, mp, mcip);
1676 1676 return (mp);
1677 1677 }
1678 1678
1679 1679 int
1680 1680 mac_hwgroup_addmac(mac_group_handle_t gh, const uint8_t *addr)
1681 1681 {
1682 1682 mac_group_t *group = (mac_group_t *)gh;
1683 1683
1684 1684 return (mac_group_addmac(group, addr));
1685 1685 }
1686 1686
1687 1687 int
1688 1688 mac_hwgroup_remmac(mac_group_handle_t gh, const uint8_t *addr)
1689 1689 {
1690 1690 mac_group_t *group = (mac_group_t *)gh;
1691 1691
1692 1692 return (mac_group_remmac(group, addr));
1693 1693 }
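
/*
 * Illustrative sketch, not part of this change: replacing a unicast
 * address on a HW group the caller controls exclusively, as aggr does
 * when a port's MAC address changes. The `old_addr' and `new_addr'
 * arguments are assumed inputs.
 */
static int
hwgroup_swap_mac_sketch(mac_group_handle_t gh, const uint8_t *old_addr,
    const uint8_t *new_addr)
{
	int	err;

	if ((err = mac_hwgroup_remmac(gh, old_addr)) != 0)
		return (err);
	return (mac_hwgroup_addmac(gh, new_addr));
}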
1694 1694
1695 1695 /*
1696 1696 * Set the RX group to be shared/reserved. Note that the group must be
1697 1697 * started/stopped outside of this function.
1698 1698 */
1699 1699 void
1700 1700 mac_set_group_state(mac_group_t *grp, mac_group_state_t state)
1701 1701 {
1702 1702 /*
1703 1703 * If there is no change in the group state, just return.
1704 1704 */
1705 1705 if (grp->mrg_state == state)
1706 1706 return;
1707 1707
1708 1708 switch (state) {
1709 1709 case MAC_GROUP_STATE_RESERVED:
1710 1710 /*
1711 1711 * Successfully reserved the group.
1712 1712 *
1713 1713 * Given that there is an exclusive client controlling this
1714 1714 * group, we enable the group level polling when available,
1715 1715 		 * group, we enable the group level polling when available,
1716 1716 		 * so that SRSs get to turn on/off individual rings they're
1717 1717 */
1718 1718 ASSERT(MAC_PERIM_HELD(grp->mrg_mh));
1719 1719
1720 1720 if (grp->mrg_type == MAC_RING_TYPE_RX &&
1721 1721 GROUP_INTR_DISABLE_FUNC(grp) != NULL) {
1722 1722 GROUP_INTR_DISABLE_FUNC(grp)(GROUP_INTR_HANDLE(grp));
1723 1723 }
1724 1724 break;
1725 1725
1726 1726 case MAC_GROUP_STATE_SHARED:
1727 1727 /*
1728 1728 * Set all rings of this group to software classified.
1729 1729 * If the group has an overriding interrupt, then re-enable it.
1730 1730 */
1731 1731 ASSERT(MAC_PERIM_HELD(grp->mrg_mh));
1732 1732
1733 1733 if (grp->mrg_type == MAC_RING_TYPE_RX &&
1734 1734 GROUP_INTR_ENABLE_FUNC(grp) != NULL) {
1735 1735 GROUP_INTR_ENABLE_FUNC(grp)(GROUP_INTR_HANDLE(grp));
1736 1736 }
1737 1737 /* The ring is not available for reservations any more */
1738 1738 break;
1739 1739
1740 1740 case MAC_GROUP_STATE_REGISTERED:
1741 1741 /* Also callable from mac_register, perim is not held */
1742 1742 break;
1743 1743
1744 1744 default:
1745 1745 ASSERT(B_FALSE);
1746 1746 break;
1747 1747 }
1748 1748
1749 1749 grp->mrg_state = state;
1750 1750 }
1751 1751
1752 1752 /*
1753 1753 * Quiesce future hardware classified packets for the specified Rx ring
1754 1754 */
1755 1755 static void
1756 1756 mac_rx_ring_quiesce(mac_ring_t *rx_ring, uint_t ring_flag)
1757 1757 {
1758 1758 ASSERT(rx_ring->mr_classify_type == MAC_HW_CLASSIFIER);
1759 1759 ASSERT(ring_flag == MR_CONDEMNED || ring_flag == MR_QUIESCE);
1760 1760
1761 1761 mutex_enter(&rx_ring->mr_lock);
1762 1762 rx_ring->mr_flag |= ring_flag;
1763 1763 while (rx_ring->mr_refcnt != 0)
1764 1764 cv_wait(&rx_ring->mr_cv, &rx_ring->mr_lock);
1765 1765 mutex_exit(&rx_ring->mr_lock);
1766 1766 }
1767 1767
1768 1768 /*
1769 1769  * Please see mac_tx for details about the per-CPU locking scheme.
1770 1770 */
1771 1771 static void
1772 1772 mac_tx_lock_all(mac_client_impl_t *mcip)
1773 1773 {
1774 1774 int i;
1775 1775
1776 1776 for (i = 0; i <= mac_tx_percpu_cnt; i++)
1777 1777 mutex_enter(&mcip->mci_tx_pcpu[i].pcpu_tx_lock);
1778 1778 }
1779 1779
1780 1780 static void
1781 1781 mac_tx_unlock_all(mac_client_impl_t *mcip)
1782 1782 {
1783 1783 int i;
1784 1784
1785 1785 for (i = mac_tx_percpu_cnt; i >= 0; i--)
1786 1786 mutex_exit(&mcip->mci_tx_pcpu[i].pcpu_tx_lock);
1787 1787 }
1788 1788
1789 1789 static void
1790 1790 mac_tx_unlock_allbutzero(mac_client_impl_t *mcip)
1791 1791 {
1792 1792 int i;
1793 1793
1794 1794 for (i = mac_tx_percpu_cnt; i > 0; i--)
1795 1795 mutex_exit(&mcip->mci_tx_pcpu[i].pcpu_tx_lock);
1796 1796 }
1797 1797
1798 1798 static int
1799 1799 mac_tx_sum_refcnt(mac_client_impl_t *mcip)
1800 1800 {
1801 1801 int i;
1802 1802 int refcnt = 0;
1803 1803
1804 1804 for (i = 0; i <= mac_tx_percpu_cnt; i++)
1805 1805 refcnt += mcip->mci_tx_pcpu[i].pcpu_tx_refcnt;
1806 1806
1807 1807 return (refcnt);
1808 1808 }
1809 1809
1810 1810 /*
1811 1811 * Stop future Tx packets coming down from the client in preparation for
1812 1812 * quiescing the Tx side. This is needed for dynamic reclaim and reassignment
1813 1813  * of rings between clients.
1814 1814 */
1815 1815 void
1816 1816 mac_tx_client_block(mac_client_impl_t *mcip)
1817 1817 {
1818 1818 mac_tx_lock_all(mcip);
1819 1819 mcip->mci_tx_flag |= MCI_TX_QUIESCE;
1820 1820 while (mac_tx_sum_refcnt(mcip) != 0) {
1821 1821 mac_tx_unlock_allbutzero(mcip);
1822 1822 cv_wait(&mcip->mci_tx_cv, &mcip->mci_tx_pcpu[0].pcpu_tx_lock);
1823 1823 mutex_exit(&mcip->mci_tx_pcpu[0].pcpu_tx_lock);
1824 1824 mac_tx_lock_all(mcip);
1825 1825 }
1826 1826 mac_tx_unlock_all(mcip);
1827 1827 }
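
/*
 * Illustrative sketch, not part of this change: the data-path side of
 * the per-CPU scheme that mac_tx_client_block() synchronizes with. A
 * transmit thread grabs one per-CPU lock, bails out if the client is
 * quiescing, and otherwise holds a per-CPU reference across the send;
 * the release side signals the blocker once the quiesce flag is set.
 * How the per-CPU index is chosen is left out here.
 */
static boolean_t
tx_hold_sketch(mac_client_impl_t *mcip, int idx)
{
	mutex_enter(&mcip->mci_tx_pcpu[idx].pcpu_tx_lock);
	if (mcip->mci_tx_flag & MCI_TX_QUIESCE) {
		mutex_exit(&mcip->mci_tx_pcpu[idx].pcpu_tx_lock);
		return (B_FALSE);	/* client is blocked */
	}
	mcip->mci_tx_pcpu[idx].pcpu_tx_refcnt++;
	mutex_exit(&mcip->mci_tx_pcpu[idx].pcpu_tx_lock);
	return (B_TRUE);
}

static void
tx_rele_sketch(mac_client_impl_t *mcip, int idx)
{
	mutex_enter(&mcip->mci_tx_pcpu[idx].pcpu_tx_lock);
	if (--mcip->mci_tx_pcpu[idx].pcpu_tx_refcnt == 0 &&
	    (mcip->mci_tx_flag & MCI_TX_QUIESCE))
		cv_signal(&mcip->mci_tx_cv);
	mutex_exit(&mcip->mci_tx_pcpu[idx].pcpu_tx_lock);
}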
1828 1828
1829 1829 void
1830 1830 mac_tx_client_unblock(mac_client_impl_t *mcip)
1831 1831 {
1832 1832 mac_tx_lock_all(mcip);
1833 1833 mcip->mci_tx_flag &= ~MCI_TX_QUIESCE;
1834 1834 mac_tx_unlock_all(mcip);
1835 1835 /*
1836 1836 * We may fail to disable flow control for the last MAC_NOTE_TX
1837 1837 * notification because the MAC client is quiesced. Send the
1838 1838 * notification again.
1839 1839 */
1840 1840 i_mac_notify(mcip->mci_mip, MAC_NOTE_TX);
1841 1841 }
1842 1842
1843 1843 /*
1844 1844 * Wait for an SRS to quiesce. The SRS worker will signal us when the
1845 1845 * quiesce is done.
1846 1846 */
1847 1847 static void
1848 1848 mac_srs_quiesce_wait(mac_soft_ring_set_t *srs, uint_t srs_flag)
1849 1849 {
1850 1850 mutex_enter(&srs->srs_lock);
1851 1851 while (!(srs->srs_state & srs_flag))
1852 1852 cv_wait(&srs->srs_quiesce_done_cv, &srs->srs_lock);
1853 1853 mutex_exit(&srs->srs_lock);
1854 1854 }
1855 1855
1856 1856 /*
1857 1857 * Quiescing an Rx SRS is achieved by the following sequence. The protocol
1858 1858 * works bottom up by cutting off packet flow from the bottommost point in the
1859 1859 * mac, then the SRS, and then the soft rings. There are 2 use cases of this
1860 1860  * mechanism. One is a temporary quiesce of the SRS, such as while changing
1861 1861 * the Rx callbacks. Another use case is Rx SRS teardown. In the former case
1862 1862 * the QUIESCE prefix/suffix is used and in the latter the CONDEMNED is used
1863 1863 * for the SRS and MR flags. In the former case the threads pause waiting for
1864 1864 * a restart, while in the latter case the threads exit. The Tx SRS teardown
1865 1865 * is also mostly similar to the above.
1866 1866 *
1867 1867 * 1. Stop future hardware classified packets at the lowest level in the mac.
1868 1868 * Remove any hardware classification rule (CONDEMNED case) and mark the
1869 1869 * rings as CONDEMNED or QUIESCE as appropriate. This prevents the mr_refcnt
1870 1870 * from increasing. Upcalls from the driver that come through hardware
1871 1871 * classification will be dropped in mac_rx from now on. Then we wait for
1872 1872 * the mr_refcnt to drop to zero. When the mr_refcnt reaches zero we are
1873 1873 * sure there aren't any upcall threads from the driver through hardware
1874 1874 * classification. In the case of SRS teardown we also remove the
1875 1875 * classification rule in the driver.
1876 1876 *
1877 1877 * 2. Stop future software classified packets by marking the flow entry with
1878 1878 * FE_QUIESCE or FE_CONDEMNED as appropriate which prevents the refcnt from
1879 1879 * increasing. We also remove the flow entry from the table in the latter
1880 1880 * case. Then wait for the fe_refcnt to reach an appropriate quiescent value
1881 1881 * that indicates there aren't any active threads using that flow entry.
1882 1882 *
1883 1883 * 3. Quiesce the SRS and softrings by signaling the SRS. The SRS poll thread,
1884 1884 * SRS worker thread, and the soft ring threads are quiesced in sequence
1885 1885 * with the SRS worker thread serving as a master controller. This
1886 1886  *    mechanism is explained in mac_srs_worker_quiesce().
1887 1887 *
1888 1888 * The restart mechanism to reactivate the SRS and softrings is explained
1889 1889 * in mac_srs_worker_restart(). Here we just signal the SRS worker to start the
1890 1890 * restart sequence.
1891 1891 */
1892 1892 void
1893 1893 mac_rx_srs_quiesce(mac_soft_ring_set_t *srs, uint_t srs_quiesce_flag)
1894 1894 {
1895 1895 flow_entry_t *flent = srs->srs_flent;
1896 1896 uint_t mr_flag, srs_done_flag;
1897 1897
1898 1898 ASSERT(MAC_PERIM_HELD((mac_handle_t)FLENT_TO_MIP(flent)));
1899 1899 ASSERT(!(srs->srs_type & SRST_TX));
1900 1900
1901 1901 if (srs_quiesce_flag == SRS_CONDEMNED) {
1902 1902 mr_flag = MR_CONDEMNED;
1903 1903 srs_done_flag = SRS_CONDEMNED_DONE;
1904 1904 if (srs->srs_type & SRST_CLIENT_POLL_ENABLED)
1905 1905 mac_srs_client_poll_disable(srs->srs_mcip, srs);
1906 1906 } else {
1907 1907 ASSERT(srs_quiesce_flag == SRS_QUIESCE);
1908 1908 mr_flag = MR_QUIESCE;
1909 1909 srs_done_flag = SRS_QUIESCE_DONE;
1910 1910 if (srs->srs_type & SRST_CLIENT_POLL_ENABLED)
1911 1911 mac_srs_client_poll_quiesce(srs->srs_mcip, srs);
1912 1912 }
1913 1913
1914 1914 if (srs->srs_ring != NULL) {
1915 1915 mac_rx_ring_quiesce(srs->srs_ring, mr_flag);
1916 1916 } else {
1917 1917 /*
1918 1918 * SRS is driven by software classification. In case
1919 1919 * of CONDEMNED, the top level teardown functions will
1920 1920 * deal with flow removal.
1921 1921 */
1922 1922 if (srs_quiesce_flag != SRS_CONDEMNED) {
1923 1923 FLOW_MARK(flent, FE_QUIESCE);
1924 1924 mac_flow_wait(flent, FLOW_DRIVER_UPCALL);
1925 1925 }
1926 1926 }
1927 1927
1928 1928 /*
1929 1929 * Signal the SRS to quiesce itself, and then cv_wait for the
1930 1930 * SRS quiesce to complete. The SRS worker thread will wake us
1931 1931 * up when the quiesce is complete
1932 1932 */
1933 1933 mac_srs_signal(srs, srs_quiesce_flag);
1934 1934 mac_srs_quiesce_wait(srs, srs_done_flag);
1935 1935 }
1936 1936
1937 1937 /*
1938 1938 * Remove an SRS.
1939 1939 */
1940 1940 void
1941 1941 mac_rx_srs_remove(mac_soft_ring_set_t *srs)
1942 1942 {
1943 1943 flow_entry_t *flent = srs->srs_flent;
1944 1944 int i;
1945 1945
1946 1946 mac_rx_srs_quiesce(srs, SRS_CONDEMNED);
1947 1947 /*
1948 1948 * Locate and remove our entry in the fe_rx_srs[] array, and
1949 1949 * adjust the fe_rx_srs array entries and array count by
1950 1950 * moving the last entry into the vacated spot.
1951 1951 */
1952 1952 mutex_enter(&flent->fe_lock);
1953 1953 for (i = 0; i < flent->fe_rx_srs_cnt; i++) {
1954 1954 if (flent->fe_rx_srs[i] == srs)
1955 1955 break;
1956 1956 }
1957 1957
1958 1958 ASSERT(i != 0 && i < flent->fe_rx_srs_cnt);
1959 1959 if (i != flent->fe_rx_srs_cnt - 1) {
1960 1960 flent->fe_rx_srs[i] =
1961 1961 flent->fe_rx_srs[flent->fe_rx_srs_cnt - 1];
1962 1962 i = flent->fe_rx_srs_cnt - 1;
1963 1963 }
1964 1964
1965 1965 flent->fe_rx_srs[i] = NULL;
1966 1966 flent->fe_rx_srs_cnt--;
1967 1967 mutex_exit(&flent->fe_lock);
1968 1968
1969 1969 mac_srs_free(srs);
1970 1970 }
1971 1971
1972 1972 static void
1973 1973 mac_srs_clear_flag(mac_soft_ring_set_t *srs, uint_t flag)
1974 1974 {
1975 1975 mutex_enter(&srs->srs_lock);
1976 1976 srs->srs_state &= ~flag;
1977 1977 mutex_exit(&srs->srs_lock);
1978 1978 }
1979 1979
1980 1980 void
1981 1981 mac_rx_srs_restart(mac_soft_ring_set_t *srs)
1982 1982 {
1983 1983 flow_entry_t *flent = srs->srs_flent;
1984 1984 mac_ring_t *mr;
1985 1985
1986 1986 ASSERT(MAC_PERIM_HELD((mac_handle_t)FLENT_TO_MIP(flent)));
1987 1987 ASSERT((srs->srs_type & SRST_TX) == 0);
1988 1988
1989 1989 /*
1990 1990 	 * This handles a change in the number of SRSs between the quiesce
1991 1991 	 * and restart operations of a flow.
1992 1992 */
1993 1993 if (!SRS_QUIESCED(srs))
1994 1994 return;
1995 1995
1996 1996 /*
1997 1997 	 * Signal the SRS to restart itself and wait for the restart to
1998 1998 	 * complete. Note that we only restart the SRS if it is not marked
1999 1999 	 * as permanently quiesced.
2000 2000 */
2001 2001 if (!SRS_QUIESCED_PERMANENT(srs)) {
2002 2002 mac_srs_signal(srs, SRS_RESTART);
2003 2003 mac_srs_quiesce_wait(srs, SRS_RESTART_DONE);
2004 2004 mac_srs_clear_flag(srs, SRS_RESTART_DONE);
2005 2005
2006 2006 mac_srs_client_poll_restart(srs->srs_mcip, srs);
2007 2007 }
2008 2008
2009 2009 /* Finally clear the flags to let the packets in */
2010 2010 mr = srs->srs_ring;
2011 2011 if (mr != NULL) {
2012 2012 MAC_RING_UNMARK(mr, MR_QUIESCE);
2013 2013 /* In case the ring was stopped, safely restart it */
2014 2014 if (mr->mr_state != MR_INUSE)
2015 2015 (void) mac_start_ring(mr);
2016 2016 } else {
2017 2017 FLOW_UNMARK(flent, FE_QUIESCE);
2018 2018 }
2019 2019 }
2020 2020
2021 2021 /*
2022 2022 * Temporary quiesce of a flow and associated Rx SRS.
2023 2023 * Please see block comment above mac_rx_classify_flow_rem.
2024 2024 */
2025 2025 /* ARGSUSED */
2026 2026 int
2027 2027 mac_rx_classify_flow_quiesce(flow_entry_t *flent, void *arg)
2028 2028 {
2029 2029 int i;
2030 2030
2031 2031 for (i = 0; i < flent->fe_rx_srs_cnt; i++) {
2032 2032 mac_rx_srs_quiesce((mac_soft_ring_set_t *)flent->fe_rx_srs[i],
2033 2033 SRS_QUIESCE);
2034 2034 }
2035 2035 return (0);
2036 2036 }
2037 2037
2038 2038 /*
2039 2039  * Restart a flow and associated Rx SRS that has been quiesced temporarily.
2040 2040  * Please see the block comment above mac_rx_classify_flow_rem.
2041 2041 */
2042 2042 /* ARGSUSED */
2043 2043 int
2044 2044 mac_rx_classify_flow_restart(flow_entry_t *flent, void *arg)
2045 2045 {
2046 2046 int i;
2047 2047
2048 2048 for (i = 0; i < flent->fe_rx_srs_cnt; i++)
2049 2049 mac_rx_srs_restart((mac_soft_ring_set_t *)flent->fe_rx_srs[i]);
2050 2050
2051 2051 return (0);
2052 2052 }
2053 2053
2054 2054 void
2055 2055 mac_srs_perm_quiesce(mac_client_handle_t mch, boolean_t on)
2056 2056 {
2057 2057 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
2058 2058 flow_entry_t *flent = mcip->mci_flent;
2059 2059 mac_impl_t *mip = mcip->mci_mip;
2060 2060 mac_soft_ring_set_t *mac_srs;
2061 2061 int i;
2062 2062
2063 2063 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
2064 2064
2065 2065 if (flent == NULL)
2066 2066 return;
2067 2067
2068 2068 for (i = 0; i < flent->fe_rx_srs_cnt; i++) {
2069 2069 mac_srs = flent->fe_rx_srs[i];
2070 2070 mutex_enter(&mac_srs->srs_lock);
2071 2071 if (on)
2072 2072 mac_srs->srs_state |= SRS_QUIESCE_PERM;
2073 2073 else
2074 2074 mac_srs->srs_state &= ~SRS_QUIESCE_PERM;
2075 2075 mutex_exit(&mac_srs->srs_lock);
2076 2076 }
2077 2077 }
2078 2078
2079 2079 void
2080 2080 mac_rx_client_quiesce(mac_client_handle_t mch)
2081 2081 {
2082 2082 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
2083 2083 mac_impl_t *mip = mcip->mci_mip;
2084 2084
2085 2085 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
2086 2086
2087 2087 if (MCIP_DATAPATH_SETUP(mcip)) {
2088 2088 (void) mac_rx_classify_flow_quiesce(mcip->mci_flent,
2089 2089 NULL);
2090 2090 (void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
2091 2091 mac_rx_classify_flow_quiesce, NULL);
2092 2092 }
2093 2093 }
2094 2094
2095 2095 void
2096 2096 mac_rx_client_restart(mac_client_handle_t mch)
2097 2097 {
2098 2098 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
2099 2099 mac_impl_t *mip = mcip->mci_mip;
2100 2100
2101 2101 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
2102 2102
2103 2103 if (MCIP_DATAPATH_SETUP(mcip)) {
2104 2104 (void) mac_rx_classify_flow_restart(mcip->mci_flent, NULL);
2105 2105 (void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
2106 2106 mac_rx_classify_flow_restart, NULL);
2107 2107 }
2108 2108 }
2109 2109
2110 2110 /*
2111 2111 * This function only quiesces the Tx SRS and softring worker threads. Callers
2112 2112 * need to make sure that there aren't any mac client threads doing current or
2113 2113 * future transmits in the mac before calling this function.
2114 2114 */
2115 2115 void
2116 2116 mac_tx_srs_quiesce(mac_soft_ring_set_t *srs, uint_t srs_quiesce_flag)
2117 2117 {
2118 2118 mac_client_impl_t *mcip = srs->srs_mcip;
2119 2119
2120 2120 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));
2121 2121
2122 2122 ASSERT(srs->srs_type & SRST_TX);
2123 2123 ASSERT(srs_quiesce_flag == SRS_CONDEMNED ||
2124 2124 srs_quiesce_flag == SRS_QUIESCE);
2125 2125
2126 2126 /*
2127 2127 * Signal the SRS to quiesce itself, and then cv_wait for the
2128 2128 * SRS quiesce to complete. The SRS worker thread will wake us
2129 2129 * up when the quiesce is complete
2130 2130 */
2131 2131 mac_srs_signal(srs, srs_quiesce_flag);
2132 2132 mac_srs_quiesce_wait(srs, srs_quiesce_flag == SRS_QUIESCE ?
2133 2133 SRS_QUIESCE_DONE : SRS_CONDEMNED_DONE);
2134 2134 }
2135 2135
2136 2136 void
2137 2137 mac_tx_srs_restart(mac_soft_ring_set_t *srs)
2138 2138 {
2139 2139 /*
2140 2140 * Resizing the fanout could result in creation of new SRSs.
2141 2141 	 * They may not necessarily be in the quiesced state, in which
2142 2142 	 * case they need not be restarted.
2143 2143 */
2144 2144 if (!SRS_QUIESCED(srs))
2145 2145 return;
2146 2146
2147 2147 mac_srs_signal(srs, SRS_RESTART);
2148 2148 mac_srs_quiesce_wait(srs, SRS_RESTART_DONE);
2149 2149 mac_srs_clear_flag(srs, SRS_RESTART_DONE);
2150 2150 }
2151 2151
2152 2152 /*
2153 2153  * Temporary quiesce of a flow and its associated Tx SRS.
2154 2154  * Please see the block comment above mac_rx_srs_quiesce.
2155 2155 */
2156 2156 /* ARGSUSED */
2157 2157 int
2158 2158 mac_tx_flow_quiesce(flow_entry_t *flent, void *arg)
2159 2159 {
2160 2160 /*
2161 2161 * The fe_tx_srs is null for a subflow on an interface that is
2162 2162 * not plumbed
2163 2163 */
2164 2164 if (flent->fe_tx_srs != NULL)
2165 2165 mac_tx_srs_quiesce(flent->fe_tx_srs, SRS_QUIESCE);
2166 2166 return (0);
2167 2167 }
2168 2168
2169 2169 /* ARGSUSED */
2170 2170 int
2171 2171 mac_tx_flow_restart(flow_entry_t *flent, void *arg)
2172 2172 {
2173 2173 /*
2174 2174 * The fe_tx_srs is null for a subflow on an interface that is
2175 2175 * not plumbed
2176 2176 */
2177 2177 if (flent->fe_tx_srs != NULL)
2178 2178 mac_tx_srs_restart(flent->fe_tx_srs);
2179 2179 return (0);
2180 2180 }
2181 2181
2182 2182 static void
2183 2183 i_mac_tx_client_quiesce(mac_client_handle_t mch, uint_t srs_quiesce_flag)
2184 2184 {
2185 2185 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
2186 2186
2187 2187 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));
2188 2188
2189 2189 mac_tx_client_block(mcip);
2190 2190 if (MCIP_TX_SRS(mcip) != NULL) {
2191 2191 mac_tx_srs_quiesce(MCIP_TX_SRS(mcip), srs_quiesce_flag);
2192 2192 (void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
2193 2193 mac_tx_flow_quiesce, NULL);
2194 2194 }
2195 2195 }
2196 2196
2197 2197 void
2198 2198 mac_tx_client_quiesce(mac_client_handle_t mch)
2199 2199 {
2200 2200 i_mac_tx_client_quiesce(mch, SRS_QUIESCE);
2201 2201 }
2202 2202
2203 2203 void
2204 2204 mac_tx_client_condemn(mac_client_handle_t mch)
2205 2205 {
2206 2206 i_mac_tx_client_quiesce(mch, SRS_CONDEMNED);
2207 2207 }
2208 2208
2209 2209 void
2210 2210 mac_tx_client_restart(mac_client_handle_t mch)
2211 2211 {
2212 2212 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
2213 2213
2214 2214 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));
2215 2215
2216 2216 mac_tx_client_unblock(mcip);
2217 2217 if (MCIP_TX_SRS(mcip) != NULL) {
2218 2218 mac_tx_srs_restart(MCIP_TX_SRS(mcip));
2219 2219 (void) mac_flow_walk_nolock(mcip->mci_subflow_tab,
2220 2220 mac_tx_flow_restart, NULL);
2221 2221 }
2222 2222 }
2223 2223
2224 2224 void
2225 2225 mac_tx_client_flush(mac_client_impl_t *mcip)
2226 2226 {
2227 2227 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));
2228 2228
2229 2229 mac_tx_client_quiesce((mac_client_handle_t)mcip);
2230 2230 mac_tx_client_restart((mac_client_handle_t)mcip);
2231 2231 }
2232 2232
2233 2233 void
2234 2234 mac_client_quiesce(mac_client_impl_t *mcip)
2235 2235 {
2236 2236 mac_rx_client_quiesce((mac_client_handle_t)mcip);
2237 2237 mac_tx_client_quiesce((mac_client_handle_t)mcip);
2238 2238 }
2239 2239
2240 2240 void
2241 2241 mac_client_restart(mac_client_impl_t *mcip)
2242 2242 {
2243 2243 mac_rx_client_restart((mac_client_handle_t)mcip);
2244 2244 mac_tx_client_restart((mac_client_handle_t)mcip);
2245 2245 }
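
/*
 * Illustrative sketch, not part of this change: the quiesce/modify/
 * restart pattern the two helpers above support. The perimeter is held
 * across the whole sequence so no other control operation can
 * intervene; the reconfiguration step stands in for any datapath
 * change.
 */
static void
client_reconfig_sketch(mac_client_impl_t *mcip)
{
	ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));

	mac_client_quiesce(mcip);
	/* ... reassign rings, resize fanout, etc. while quiesced ... */
	mac_client_restart(mcip);
}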
2246 2246
2247 2247 /*
2248 2248 * Allocate a minor number.
2249 2249 */
2250 2250 minor_t
2251 2251 mac_minor_hold(boolean_t sleep)
2252 2252 {
2253 2253 minor_t minor;
2254 2254
2255 2255 /*
2256 2256 * Grab a value from the arena.
2257 2257 */
2258 2258 atomic_add_32(&minor_count, 1);
2259 2259
2260 2260 if (sleep)
2261 2261 minor = (uint_t)id_alloc(minor_ids);
2262 2262 else
2263 2263 minor = (uint_t)id_alloc_nosleep(minor_ids);
2264 2264
2265 2265 if (minor == 0) {
2266 2266 atomic_add_32(&minor_count, -1);
2267 2267 return (0);
2268 2268 }
2269 2269
2270 2270 return (minor);
2271 2271 }
2272 2272
2273 2273 /*
2274 2274 * Release a previously allocated minor number.
2275 2275 */
2276 2276 void
2277 2277 mac_minor_rele(minor_t minor)
2278 2278 {
2279 2279 /*
2280 2280 * Return the value to the arena.
2281 2281 */
2282 2282 id_free(minor_ids, minor);
2283 2283 atomic_add_32(&minor_count, -1);
2284 2284 }
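
/*
 * Illustrative sketch, not part of this change: the typical pairing of
 * mac_minor_hold() and mac_minor_rele() around minor node creation.
 * The node name and the dip are placeholders.
 */
static int
minor_create_sketch(dev_info_t *dip)
{
	minor_t	minor;

	if ((minor = mac_minor_hold(B_TRUE)) == 0)
		return (ENOSPC);

	if (ddi_create_minor_node(dip, "sketch0", S_IFCHR, minor,
	    DDI_NT_NET, 0) != DDI_SUCCESS) {
		mac_minor_rele(minor);
		return (EIO);
	}
	return (0);
}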
2285 2285
2286 2286 uint32_t
2287 2287 mac_no_notification(mac_handle_t mh)
2288 2288 {
2289 2289 mac_impl_t *mip = (mac_impl_t *)mh;
2290 2290
2291 2291 return (((mip->mi_state_flags & MIS_LEGACY) != 0) ?
2292 2292 mip->mi_capab_legacy.ml_unsup_note : 0);
2293 2293 }
2294 2294
2295 2295 /*
2296 2296 * Prevent any new opens of this mac in preparation for unregister
2297 2297 */
2298 2298 int
2299 2299 i_mac_disable(mac_impl_t *mip)
2300 2300 {
2301 2301 mac_client_impl_t *mcip;
2302 2302
2303 2303 rw_enter(&i_mac_impl_lock, RW_WRITER);
2304 2304 if (mip->mi_state_flags & MIS_DISABLED) {
2305 2305 /* Already disabled, return success */
2306 2306 rw_exit(&i_mac_impl_lock);
2307 2307 return (0);
2308 2308 }
2309 2309 /*
2310 2310 	 * See if there are any other references to this mac_t (e.g., VLANs).
2311 2311 	 * If so return failure. If all the other checks below pass, then
2312 2312 	 * set mi_disabled atomically under the i_mac_impl_lock to prevent
2313 2313 	 * any new VLANs from being created or new mac client opens of this
2314 2314 	 * mac end point.
2315 2315 */
2316 2316 if (mip->mi_ref > 0) {
2317 2317 rw_exit(&i_mac_impl_lock);
2318 2318 return (EBUSY);
2319 2319 }
2320 2320
2321 2321 /*
2322 2322 	 * MAC clients must delete all multicast groups they join before
2323 2323 	 * closing. Broadcast groups are reference counted; the last client
2324 2324 	 * to delete the group will wait till the group is physically
2325 2325 	 * deleted. Since all clients have closed this mac end point,
2326 2326 	 * mi_bcast_ngrps must be zero at this point.
2327 2327 */
2328 2328 ASSERT(mip->mi_bcast_ngrps == 0);
2329 2329
2330 2330 /*
2331 2331 * Don't let go of this if it has some flows.
2332 2332 * All other code guarantees no flows are added to a disabled
2333 2333 * mac, therefore it is sufficient to check for the flow table
2334 2334 * only here.
2335 2335 */
2336 2336 mcip = mac_primary_client_handle(mip);
2337 2337 if ((mcip != NULL) && mac_link_has_flows((mac_client_handle_t)mcip)) {
2338 2338 rw_exit(&i_mac_impl_lock);
2339 2339 return (ENOTEMPTY);
2340 2340 }
2341 2341
2342 2342 mip->mi_state_flags |= MIS_DISABLED;
2343 2343 rw_exit(&i_mac_impl_lock);
2344 2344 return (0);
2345 2345 }
2346 2346
2347 2347 int
2348 2348 mac_disable_nowait(mac_handle_t mh)
2349 2349 {
2350 2350 mac_impl_t *mip = (mac_impl_t *)mh;
2351 2351 int err;
2352 2352
2353 2353 if ((err = i_mac_perim_enter_nowait(mip)) != 0)
2354 2354 return (err);
2355 2355 err = i_mac_disable(mip);
2356 2356 i_mac_perim_exit(mip);
2357 2357 return (err);
2358 2358 }
2359 2359
2360 2360 int
2361 2361 mac_disable(mac_handle_t mh)
2362 2362 {
2363 2363 mac_impl_t *mip = (mac_impl_t *)mh;
2364 2364 int err;
2365 2365
2366 2366 i_mac_perim_enter(mip);
2367 2367 err = i_mac_disable(mip);
2368 2368 i_mac_perim_exit(mip);
2369 2369
2370 2370 /*
2371 2371 * Clean up notification thread and wait for it to exit.
2372 2372 */
2373 2373 if (err == 0)
2374 2374 i_mac_notify_exit(mip);
2375 2375
2376 2376 return (err);
2377 2377 }
2378 2378
2379 2379 /*
2380 2380  * Called when the MAC instance has a non-empty flow table, to de-multiplex
2381 2381 * incoming packets to the right flow.
2382 2382 * The MAC's rw lock is assumed held as a READER.
2383 2383 */
2384 2384 /* ARGSUSED */
2385 2385 static mblk_t *
2386 2386 mac_rx_classify(mac_impl_t *mip, mac_resource_handle_t mrh, mblk_t *mp)
2387 2387 {
2388 2388 flow_entry_t *flent = NULL;
2389 2389 uint_t flags = FLOW_INBOUND;
2390 2390 int err;
2391 2391
2392 2392 /*
2393 2393 * If the mac is a port of an aggregation, pass FLOW_IGNORE_VLAN
2394 2394 * to mac_flow_lookup() so that the VLAN packets can be successfully
2395 2395 * passed to the non-VLAN aggregation flows.
2396 2396 *
2397 2397 * Note that there is possibly a race between this and
2398 2398 * mac_unicast_remove/add() and VLAN packets could be incorrectly
2399 2399 * classified to non-VLAN flows of non-aggregation mac clients. These
2400 2400 	 * VLAN packets will then be filtered out by the mac module.
2401 2401 */
2402 2402 if ((mip->mi_state_flags & MIS_EXCLUSIVE) != 0)
2403 2403 flags |= FLOW_IGNORE_VLAN;
2404 2404
2405 2405 err = mac_flow_lookup(mip->mi_flow_tab, mp, flags, &flent);
2406 2406 if (err != 0) {
2407 2407 /* no registered receive function */
2408 2408 return (mp);
2409 2409 } else {
2410 2410 mac_client_impl_t *mcip;
2411 2411
2412 2412 /*
2413 2413 		 * This flent might just be an additional one on the MAC client,
2414 2414 		 * i.e. for classification purposes (different fdesc). However,
2415 2415 		 * the resources, SRS et al., are in the mci_flent, so if
2416 2416 		 * this isn't the mci_flent, we need to get it.
2417 2417 */
2418 2418 if ((mcip = flent->fe_mcip) != NULL &&
2419 2419 mcip->mci_flent != flent) {
2420 2420 FLOW_REFRELE(flent);
2421 2421 flent = mcip->mci_flent;
2422 2422 FLOW_TRY_REFHOLD(flent, err);
2423 2423 if (err != 0)
2424 2424 return (mp);
2425 2425 }
2426 2426 (flent->fe_cb_fn)(flent->fe_cb_arg1, flent->fe_cb_arg2, mp,
2427 2427 B_FALSE);
2428 2428 FLOW_REFRELE(flent);
2429 2429 }
2430 2430 return (NULL);
2431 2431 }
2432 2432
2433 2433 mblk_t *
2434 2434 mac_rx_flow(mac_handle_t mh, mac_resource_handle_t mrh, mblk_t *mp_chain)
2435 2435 {
2436 2436 mac_impl_t *mip = (mac_impl_t *)mh;
2437 2437 mblk_t *bp, *bp1, **bpp, *list = NULL;
2438 2438
2439 2439 /*
2440 2440 * We walk the chain and attempt to classify each packet.
2441 2441 	 * The packets that couldn't be classified will be returned
2442 2442 	 * to the caller.
2443 2443 */
2444 2444 bp = mp_chain;
2445 2445 bpp = &list;
2446 2446 while (bp != NULL) {
2447 2447 bp1 = bp;
2448 2448 bp = bp->b_next;
2449 2449 bp1->b_next = NULL;
2450 2450
2451 2451 if (mac_rx_classify(mip, mrh, bp1) != NULL) {
2452 2452 *bpp = bp1;
2453 2453 bpp = &bp1->b_next;
2454 2454 }
2455 2455 }
2456 2456 return (list);
2457 2457 }
2458 2458
2459 2459 static int
2460 2460 mac_tx_flow_srs_wakeup(flow_entry_t *flent, void *arg)
2461 2461 {
2462 2462 mac_ring_handle_t ring = arg;
2463 2463
2464 2464 if (flent->fe_tx_srs)
2465 2465 mac_tx_srs_wakeup(flent->fe_tx_srs, ring);
2466 2466 return (0);
2467 2467 }
2468 2468
2469 2469 void
2470 2470 i_mac_tx_srs_notify(mac_impl_t *mip, mac_ring_handle_t ring)
2471 2471 {
2472 2472 mac_client_impl_t *cclient;
2473 2473 mac_soft_ring_set_t *mac_srs;
2474 2474
2475 2475 /*
2476 2476 * After grabbing the mi_rw_lock, the list of clients can't change.
2477 2477 	 * If there are any clients, mi_disabled must be B_FALSE and can't
2478 2478 	 * get set since there are clients. If there aren't any clients, we
2479 2479 * don't do anything. In any case the mip has to be valid. The driver
2480 2480 * must make sure that it goes single threaded (with respect to mac
2481 2481 * calls) and wait for all pending mac calls to finish before calling
2482 2482 * mac_unregister.
2483 2483 */
2484 2484 rw_enter(&i_mac_impl_lock, RW_READER);
2485 2485 if (mip->mi_state_flags & MIS_DISABLED) {
2486 2486 rw_exit(&i_mac_impl_lock);
2487 2487 return;
2488 2488 }
2489 2489
2490 2490 /*
2491 2491 * Get MAC tx srs from walking mac_client_handle list.
2492 2492 */
2493 2493 rw_enter(&mip->mi_rw_lock, RW_READER);
2494 2494 for (cclient = mip->mi_clients_list; cclient != NULL;
2495 2495 cclient = cclient->mci_client_next) {
2496 2496 if ((mac_srs = MCIP_TX_SRS(cclient)) != NULL) {
2497 2497 mac_tx_srs_wakeup(mac_srs, ring);
2498 2498 } else {
2499 2499 /*
2500 2500 * Aggr opens underlying ports in exclusive mode
2501 2501 * and registers flow control callbacks using
2502 2502 * mac_tx_client_notify(). When opened in
2503 2503 * exclusive mode, Tx SRS won't be created
2504 2504 * during mac_unicast_add().
2505 2505 */
2506 2506 if (cclient->mci_state_flags & MCIS_EXCLUSIVE) {
2507 2507 mac_tx_invoke_callbacks(cclient,
2508 2508 (mac_tx_cookie_t)ring);
2509 2509 }
2510 2510 }
2511 2511 (void) mac_flow_walk(cclient->mci_subflow_tab,
2512 2512 mac_tx_flow_srs_wakeup, ring);
2513 2513 }
2514 2514 rw_exit(&mip->mi_rw_lock);
2515 2515 rw_exit(&i_mac_impl_lock);
2516 2516 }
2517 2517
2518 2518 /* ARGSUSED */
2519 2519 void
2520 2520 mac_multicast_refresh(mac_handle_t mh, mac_multicst_t refresh, void *arg,
2521 2521 boolean_t add)
2522 2522 {
2523 2523 mac_impl_t *mip = (mac_impl_t *)mh;
2524 2524
2525 2525 i_mac_perim_enter((mac_impl_t *)mh);
2526 2526 /*
2527 2527 * If no specific refresh function was given then default to the
2528 2528 * driver's m_multicst entry point.
2529 2529 */
2530 2530 if (refresh == NULL) {
2531 2531 refresh = mip->mi_multicst;
2532 2532 arg = mip->mi_driver;
2533 2533 }
2534 2534
2535 2535 mac_bcast_refresh(mip, refresh, arg, add);
2536 2536 i_mac_perim_exit((mac_impl_t *)mh);
2537 2537 }
2538 2538
2539 2539 void
2540 2540 mac_promisc_refresh(mac_handle_t mh, mac_setpromisc_t refresh, void *arg)
2541 2541 {
2542 2542 mac_impl_t *mip = (mac_impl_t *)mh;
2543 2543
2544 2544 /*
2545 2545 * If no specific refresh function was given then default to the
2546 2546 * driver's m_promisc entry point.
2547 2547 */
2548 2548 if (refresh == NULL) {
2549 2549 refresh = mip->mi_setpromisc;
2550 2550 arg = mip->mi_driver;
2551 2551 }
2552 2552 ASSERT(refresh != NULL);
2553 2553
2554 2554 /*
2555 2555 * Call the refresh function with the current promiscuity.
2556 2556 */
2557 2557 refresh(arg, (mip->mi_devpromisc != 0));
2558 2558 }
2559 2559
2560 2560 /*
2561 2561  * The mac client requests that the mac not change its margin size to
2562 2562  * be less than the specified value. If "current" is B_TRUE, then the client
2563 2563  * requests the mac not change its margin size to be smaller than the
2564 2564 * current size. Further, return the current margin size value in this case.
2565 2565 *
2566 2566 * We keep every requested size in an ordered list from largest to smallest.
2567 2567 */
2568 2568 int
2569 2569 mac_margin_add(mac_handle_t mh, uint32_t *marginp, boolean_t current)
2570 2570 {
2571 2571 mac_impl_t *mip = (mac_impl_t *)mh;
2572 2572 mac_margin_req_t **pp, *p;
2573 2573 int err = 0;
2574 2574
2575 2575 rw_enter(&(mip->mi_rw_lock), RW_WRITER);
2576 2576 if (current)
2577 2577 *marginp = mip->mi_margin;
2578 2578
2579 2579 /*
2580 2580 * If the current margin value cannot satisfy the margin requested,
2581 2581 * return ENOTSUP directly.
2582 2582 */
2583 2583 if (*marginp > mip->mi_margin) {
2584 2584 err = ENOTSUP;
2585 2585 goto done;
2586 2586 }
2587 2587
2588 2588 /*
2589 2589 * Check whether the given margin is already in the list. If so,
2590 2590 * bump the reference count.
2591 2591 */
2592 2592 for (pp = &mip->mi_mmrp; (p = *pp) != NULL; pp = &p->mmr_nextp) {
2593 2593 if (p->mmr_margin == *marginp) {
2594 2594 /*
2595 2595 * The margin requested is already in the list,
2596 2596 * so just bump the reference count.
2597 2597 */
2598 2598 p->mmr_ref++;
2599 2599 goto done;
2600 2600 }
2601 2601 if (p->mmr_margin < *marginp)
2602 2602 break;
2603 2603 }
2604 2604
2605 2605
2606 2606 p = kmem_zalloc(sizeof (mac_margin_req_t), KM_SLEEP);
2607 2607 p->mmr_margin = *marginp;
2608 2608 p->mmr_ref++;
2609 2609 p->mmr_nextp = *pp;
2610 2610 *pp = p;
2611 2611
2612 2612 done:
2613 2613 rw_exit(&(mip->mi_rw_lock));
2614 2614 return (err);
2615 2615 }
2616 2616
2617 2617 /*
2618 2618 * The mac client requests to cancel its previous mac_margin_add() request.
2619 2619 * We remove the requested margin size from the list.
2620 2620 */
2621 2621 int
2622 2622 mac_margin_remove(mac_handle_t mh, uint32_t margin)
2623 2623 {
2624 2624 mac_impl_t *mip = (mac_impl_t *)mh;
2625 2625 mac_margin_req_t **pp, *p;
2626 2626 int err = 0;
2627 2627
2628 2628 rw_enter(&(mip->mi_rw_lock), RW_WRITER);
2629 2629 /*
2630 2630 * Find the entry in the list for the given margin.
2631 2631 */
2632 2632 for (pp = &(mip->mi_mmrp); (p = *pp) != NULL; pp = &(p->mmr_nextp)) {
2633 2633 if (p->mmr_margin == margin) {
2634 2634 if (--p->mmr_ref == 0)
2635 2635 break;
2636 2636
2637 2637 /*
2638 2638 			 * There is still a reference to this margin so
2639 2639 * there's nothing more to do.
2640 2640 */
2641 2641 goto done;
2642 2642 }
2643 2643 }
2644 2644
2645 2645 /*
2646 2646 * We did not find an entry for the given margin.
2647 2647 */
2648 2648 if (p == NULL) {
2649 2649 err = ENOENT;
2650 2650 goto done;
2651 2651 }
2652 2652
2653 2653 ASSERT(p->mmr_ref == 0);
2654 2654
2655 2655 /*
2656 2656 * Remove it from the list.
2657 2657 */
2658 2658 *pp = p->mmr_nextp;
2659 2659 kmem_free(p, sizeof (mac_margin_req_t));
2660 2660 done:
2661 2661 rw_exit(&(mip->mi_rw_lock));
2662 2662 return (err);
2663 2663 }
2664 2664
2665 2665 boolean_t
2666 2666 mac_margin_update(mac_handle_t mh, uint32_t margin)
2667 2667 {
2668 2668 mac_impl_t *mip = (mac_impl_t *)mh;
2669 2669 uint32_t margin_needed = 0;
2670 2670
2671 2671 rw_enter(&(mip->mi_rw_lock), RW_WRITER);
2672 2672
2673 2673 if (mip->mi_mmrp != NULL)
2674 2674 margin_needed = mip->mi_mmrp->mmr_margin;
2675 2675
2676 2676 if (margin_needed <= margin)
2677 2677 mip->mi_margin = margin;
2678 2678
2679 2679 rw_exit(&(mip->mi_rw_lock));
2680 2680
2681 2681 if (margin_needed <= margin)
2682 2682 i_mac_notify(mip, MAC_NOTE_MARGIN);
2683 2683
2684 2684 return (margin_needed <= margin);
2685 2685 }
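
/*
 * Illustrative sketch, not part of this change: pinning the current
 * margin so it cannot shrink underneath a client, then dropping the
 * request. Because requests are kept sorted from largest to smallest,
 * mac_margin_update() above only has to compare against the list head.
 */
static void
margin_pin_sketch(mac_handle_t mh)
{
	uint32_t	margin;

	/* ask for the current margin and pin it */
	if (mac_margin_add(mh, &margin, B_TRUE) != 0)
		return;
	/* ... rely on at least `margin' bytes of margin here ... */
	(void) mac_margin_remove(mh, margin);
}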
2686 2686
2687 2687 /*
2688 2688 * MAC Type Plugin functions.
2689 2689 */
2690 2690
2691 2691 mactype_t *
2692 2692 mactype_getplugin(const char *pname)
2693 2693 {
2694 2694 mactype_t *mtype = NULL;
2695 2695 boolean_t tried_modload = B_FALSE;
2696 2696
2697 2697 mutex_enter(&i_mactype_lock);
2698 2698
2699 2699 find_registered_mactype:
2700 2700 if (mod_hash_find(i_mactype_hash, (mod_hash_key_t)pname,
2701 2701 (mod_hash_val_t *)&mtype) != 0) {
2702 2702 if (!tried_modload) {
2703 2703 /*
2704 2704 * If the plugin has not yet been loaded, then
2705 2705 * attempt to load it now. If modload() succeeds,
2706 2706 * the plugin should have registered using
2707 2707 * mactype_register(), in which case we can go back
2708 2708 * and attempt to find it again.
2709 2709 */
2710 2710 if (modload(MACTYPE_KMODDIR, (char *)pname) != -1) {
2711 2711 tried_modload = B_TRUE;
2712 2712 goto find_registered_mactype;
2713 2713 }
2714 2714 }
2715 2715 } else {
2716 2716 /*
2717 2717 * Note that there's no danger that the plugin we've loaded
2718 2718 * could be unloaded between the modload() step and the
2719 2719 * reference count bump here, as we're holding
2720 2720 * i_mactype_lock, which mactype_unregister() also holds.
2721 2721 */
2722 2722 atomic_inc_32(&mtype->mt_ref);
2723 2723 }
2724 2724
2725 2725 mutex_exit(&i_mactype_lock);
2726 2726 return (mtype);
2727 2727 }
2728 2728
2729 2729 mactype_register_t *
2730 2730 mactype_alloc(uint_t mactype_version)
2731 2731 {
2732 2732 mactype_register_t *mtrp;
2733 2733
2734 2734 /*
2735 2735 * Make sure there isn't a version mismatch between the plugin and
2736 2736 * the framework. In the future, if multiple versions are
2737 2737 * supported, this check could become more sophisticated.
2738 2738 */
2739 2739 if (mactype_version != MACTYPE_VERSION)
2740 2740 return (NULL);
2741 2741
2742 2742 mtrp = kmem_zalloc(sizeof (mactype_register_t), KM_SLEEP);
2743 2743 mtrp->mtr_version = mactype_version;
2744 2744 return (mtrp);
2745 2745 }
2746 2746
2747 2747 void
2748 2748 mactype_free(mactype_register_t *mtrp)
2749 2749 {
2750 2750 kmem_free(mtrp, sizeof (mactype_register_t));
2751 2751 }
2752 2752
2753 2753 int
2754 2754 mactype_register(mactype_register_t *mtrp)
2755 2755 {
2756 2756 mactype_t *mtp;
2757 2757 mactype_ops_t *ops = mtrp->mtr_ops;
2758 2758
2759 2759 /* Do some sanity checking before we register this MAC type. */
2760 2760 if (mtrp->mtr_ident == NULL || ops == NULL)
2761 2761 return (EINVAL);
2762 2762
2763 2763 /*
2764 2764 * Verify that all mandatory callbacks are set in the ops
2765 2765 * vector.
2766 2766 */
2767 2767 if (ops->mtops_unicst_verify == NULL ||
2768 2768 ops->mtops_multicst_verify == NULL ||
2769 2769 ops->mtops_sap_verify == NULL ||
2770 2770 ops->mtops_header == NULL ||
2771 2771 ops->mtops_header_info == NULL) {
2772 2772 return (EINVAL);
2773 2773 }
2774 2774
2775 2775 mtp = kmem_zalloc(sizeof (*mtp), KM_SLEEP);
2776 2776 mtp->mt_ident = mtrp->mtr_ident;
2777 2777 mtp->mt_ops = *ops;
2778 2778 mtp->mt_type = mtrp->mtr_mactype;
2779 2779 mtp->mt_nativetype = mtrp->mtr_nativetype;
2780 2780 mtp->mt_addr_length = mtrp->mtr_addrlen;
2781 2781 if (mtrp->mtr_brdcst_addr != NULL) {
2782 2782 mtp->mt_brdcst_addr = kmem_alloc(mtrp->mtr_addrlen, KM_SLEEP);
2783 2783 bcopy(mtrp->mtr_brdcst_addr, mtp->mt_brdcst_addr,
2784 2784 mtrp->mtr_addrlen);
2785 2785 }
2786 2786
2787 2787 mtp->mt_stats = mtrp->mtr_stats;
2788 2788 mtp->mt_statcount = mtrp->mtr_statcount;
2789 2789
2790 2790 mtp->mt_mapping = mtrp->mtr_mapping;
2791 2791 mtp->mt_mappingcount = mtrp->mtr_mappingcount;
2792 2792
2793 2793 if (mod_hash_insert(i_mactype_hash,
2794 2794 (mod_hash_key_t)mtp->mt_ident, (mod_hash_val_t)mtp) != 0) {
2795 2795 kmem_free(mtp->mt_brdcst_addr, mtp->mt_addr_length);
2796 2796 kmem_free(mtp, sizeof (*mtp));
2797 2797 return (EEXIST);
2798 2798 }
2799 2799 return (0);
2800 2800 }
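
/*
 * Illustrative sketch, not part of this change: how a MAC-type plugin
 * registers itself with the framework. The identifier, ops vector and
 * broadcast address are placeholders for what a real plugin (e.g. the
 * Ethernet plugin) would supply.
 */
static int
plugin_register_sketch(mactype_ops_t *ops, uint8_t *brdcst, uint_t addrlen)
{
	mactype_register_t	*mtrp;
	int			err;

	if ((mtrp = mactype_alloc(MACTYPE_VERSION)) == NULL)
		return (EINVAL);

	mtrp->mtr_ident = "sketchtype";
	mtrp->mtr_ops = ops;		/* must set the mandatory callbacks */
	mtrp->mtr_mactype = DL_ETHER;
	mtrp->mtr_nativetype = DL_ETHER;
	mtrp->mtr_addrlen = addrlen;
	mtrp->mtr_brdcst_addr = brdcst;

	err = mactype_register(mtrp);
	mactype_free(mtrp);	/* the framework keeps its own copy */
	return (err);
}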
2801 2801
2802 2802 int
2803 2803 mactype_unregister(const char *ident)
2804 2804 {
2805 2805 mactype_t *mtp;
2806 2806 mod_hash_val_t val;
2807 2807 int err;
2808 2808
2809 2809 /*
2810 2810 * Let's not allow MAC drivers to use this plugin while we're
2811 2811 * trying to unregister it. Holding i_mactype_lock also prevents a
2812 2812 * plugin from unregistering while a MAC driver is attempting to
2813 2813  * hold a reference to it in mactype_getplugin().
2814 2814 */
2815 2815 mutex_enter(&i_mactype_lock);
2816 2816
2817 2817 if ((err = mod_hash_find(i_mactype_hash, (mod_hash_key_t)ident,
2818 2818 (mod_hash_val_t *)&mtp)) != 0) {
2819 2819 /* A plugin is trying to unregister, but it never registered. */
2820 2820 err = ENXIO;
2821 2821 goto done;
2822 2822 }
2823 2823
2824 2824 if (mtp->mt_ref != 0) {
2825 2825 err = EBUSY;
2826 2826 goto done;
2827 2827 }
2828 2828
2829 2829 err = mod_hash_remove(i_mactype_hash, (mod_hash_key_t)ident, &val);
2830 2830 ASSERT(err == 0);
2831 2831 if (err != 0) {
2832 2832 /* This should never happen, thus the ASSERT() above. */
2833 2833 err = EINVAL;
2834 2834 goto done;
2835 2835 }
2836 2836 ASSERT(mtp == (mactype_t *)val);
2837 2837
2838 2838 if (mtp->mt_brdcst_addr != NULL)
2839 2839 kmem_free(mtp->mt_brdcst_addr, mtp->mt_addr_length);
2840 2840 kmem_free(mtp, sizeof (mactype_t));
2841 2841 done:
2842 2842 mutex_exit(&i_mactype_lock);
2843 2843 return (err);
2844 2844 }
2845 2845
2846 2846 /*
2847 2847  * Checks the size of the value specified for a property as
2848 2848 * part of a property operation. Returns B_TRUE if the size is
2849 2849 * correct, B_FALSE otherwise.
2850 2850 */
2851 2851 boolean_t
2852 2852 mac_prop_check_size(mac_prop_id_t id, uint_t valsize, boolean_t is_range)
2853 2853 {
2854 2854 uint_t minsize = 0;
2855 2855
2856 2856 if (is_range)
2857 2857 return (valsize >= sizeof (mac_propval_range_t));
2858 2858
2859 2859 switch (id) {
2860 2860 case MAC_PROP_ZONE:
2861 2861 minsize = sizeof (dld_ioc_zid_t);
2862 2862 break;
2863 2863 case MAC_PROP_AUTOPUSH:
2864 2864 if (valsize != 0)
2865 2865 minsize = sizeof (struct dlautopush);
2866 2866 break;
2867 2867 case MAC_PROP_TAGMODE:
2868 2868 minsize = sizeof (link_tagmode_t);
2869 2869 break;
2870 2870 case MAC_PROP_RESOURCE:
2871 2871 case MAC_PROP_RESOURCE_EFF:
2872 2872 minsize = sizeof (mac_resource_props_t);
2873 2873 break;
2874 2874 case MAC_PROP_DUPLEX:
2875 2875 minsize = sizeof (link_duplex_t);
2876 2876 break;
2877 2877 case MAC_PROP_SPEED:
2878 2878 minsize = sizeof (uint64_t);
2879 2879 break;
2880 2880 case MAC_PROP_STATUS:
2881 2881 minsize = sizeof (link_state_t);
2882 2882 break;
2883 2883 case MAC_PROP_AUTONEG:
2884 2884 case MAC_PROP_EN_AUTONEG:
2885 2885 minsize = sizeof (uint8_t);
2886 2886 break;
2887 2887 case MAC_PROP_MTU:
2888 2888 case MAC_PROP_LLIMIT:
2889 2889 case MAC_PROP_LDECAY:
2890 2890 minsize = sizeof (uint32_t);
2891 2891 break;
2892 2892 case MAC_PROP_FLOWCTRL:
2893 2893 minsize = sizeof (link_flowctrl_t);
2894 2894 break;
2895 2895 case MAC_PROP_ADV_10GFDX_CAP:
2896 2896 case MAC_PROP_EN_10GFDX_CAP:
2897 2897 case MAC_PROP_ADV_1000HDX_CAP:
2898 2898 case MAC_PROP_EN_1000HDX_CAP:
2899 2899 case MAC_PROP_ADV_100FDX_CAP:
2900 2900 case MAC_PROP_EN_100FDX_CAP:
2901 2901 case MAC_PROP_ADV_100HDX_CAP:
2902 2902 case MAC_PROP_EN_100HDX_CAP:
2903 2903 case MAC_PROP_ADV_10FDX_CAP:
2904 2904 case MAC_PROP_EN_10FDX_CAP:
2905 2905 case MAC_PROP_ADV_10HDX_CAP:
2906 2906 case MAC_PROP_EN_10HDX_CAP:
2907 2907 case MAC_PROP_ADV_100T4_CAP:
2908 2908 case MAC_PROP_EN_100T4_CAP:
2909 2909 minsize = sizeof (uint8_t);
2910 2910 break;
2911 2911 case MAC_PROP_PVID:
2912 2912 minsize = sizeof (uint16_t);
2913 2913 break;
2914 2914 case MAC_PROP_IPTUN_HOPLIMIT:
2915 2915 minsize = sizeof (uint32_t);
2916 2916 break;
2917 2917 case MAC_PROP_IPTUN_ENCAPLIMIT:
2918 2918 minsize = sizeof (uint32_t);
2919 2919 break;
2920 2920 case MAC_PROP_MAX_TX_RINGS_AVAIL:
2921 2921 case MAC_PROP_MAX_RX_RINGS_AVAIL:
2922 2922 case MAC_PROP_MAX_RXHWCLNT_AVAIL:
2923 2923 case MAC_PROP_MAX_TXHWCLNT_AVAIL:
2924 2924 minsize = sizeof (uint_t);
2925 2925 break;
2926 2926 case MAC_PROP_WL_ESSID:
2927 2927 minsize = sizeof (wl_linkstatus_t);
2928 2928 break;
2929 2929 case MAC_PROP_WL_BSSID:
2930 2930 minsize = sizeof (wl_bssid_t);
2931 2931 break;
2932 2932 case MAC_PROP_WL_BSSTYPE:
2933 2933 minsize = sizeof (wl_bss_type_t);
2934 2934 break;
2935 2935 case MAC_PROP_WL_LINKSTATUS:
2936 2936 minsize = sizeof (wl_linkstatus_t);
2937 2937 break;
2938 2938 case MAC_PROP_WL_DESIRED_RATES:
2939 2939 minsize = sizeof (wl_rates_t);
2940 2940 break;
2941 2941 case MAC_PROP_WL_SUPPORTED_RATES:
2942 2942 minsize = sizeof (wl_rates_t);
2943 2943 break;
2944 2944 case MAC_PROP_WL_AUTH_MODE:
2945 2945 minsize = sizeof (wl_authmode_t);
2946 2946 break;
2947 2947 case MAC_PROP_WL_ENCRYPTION:
2948 2948 minsize = sizeof (wl_encryption_t);
2949 2949 break;
2950 2950 case MAC_PROP_WL_RSSI:
2951 2951 minsize = sizeof (wl_rssi_t);
2952 2952 break;
2953 2953 case MAC_PROP_WL_PHY_CONFIG:
2954 2954 minsize = sizeof (wl_phy_conf_t);
2955 2955 break;
2956 2956 case MAC_PROP_WL_CAPABILITY:
2957 2957 minsize = sizeof (wl_capability_t);
2958 2958 break;
2959 2959 case MAC_PROP_WL_WPA:
2960 2960 minsize = sizeof (wl_wpa_t);
2961 2961 break;
2962 2962 case MAC_PROP_WL_SCANRESULTS:
2963 2963 minsize = sizeof (wl_wpa_ess_t);
2964 2964 break;
2965 2965 case MAC_PROP_WL_POWER_MODE:
2966 2966 minsize = sizeof (wl_ps_mode_t);
2967 2967 break;
2968 2968 case MAC_PROP_WL_RADIO:
2969 2969 minsize = sizeof (wl_radio_t);
2970 2970 break;
2971 2971 case MAC_PROP_WL_ESS_LIST:
2972 2972 minsize = sizeof (wl_ess_list_t);
2973 2973 break;
2974 2974 case MAC_PROP_WL_KEY_TAB:
2975 2975 minsize = sizeof (wl_wep_key_tab_t);
2976 2976 break;
2977 2977 case MAC_PROP_WL_CREATE_IBSS:
2978 2978 minsize = sizeof (wl_create_ibss_t);
2979 2979 break;
2980 2980 case MAC_PROP_WL_SETOPTIE:
2981 2981 minsize = sizeof (wl_wpa_ie_t);
2982 2982 break;
2983 2983 case MAC_PROP_WL_DELKEY:
2984 2984 minsize = sizeof (wl_del_key_t);
2985 2985 break;
2986 2986 case MAC_PROP_WL_KEY:
2987 2987 minsize = sizeof (wl_key_t);
2988 2988 break;
2989 2989 case MAC_PROP_WL_MLME:
2990 2990 minsize = sizeof (wl_mlme_t);
2991 2991 break;
2992 + case MAC_PROP_MACADDRESS:
2993 + minsize = sizeof (mac_addrprop_t);
2992 2994 }
2993 2995
2994 2996 return (valsize >= minsize);
2995 2997 }
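
/*
 * Illustrative sketch, not part of this change: a property ioctl path
 * validating the caller's buffer with mac_prop_check_size() before
 * dispatching the set under the perimeter. All arguments are assumed
 * to come from the ioctl.
 */
static int
setprop_checked_sketch(mac_handle_t mh, mac_prop_id_t id, char *name,
    void *val, uint_t valsize)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	int		err;

	if (!mac_prop_check_size(id, valsize, B_FALSE))
		return (EINVAL);

	i_mac_perim_enter(mip);
	err = mac_set_prop(mh, id, name, val, valsize);
	i_mac_perim_exit(mip);
	return (err);
}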
2996 2998
2997 2999 /*
2998 3000 * mac_set_prop() sets MAC or hardware driver properties:
2999 3001 *
3000 3002 * - MAC-managed properties such as resource properties include maxbw,
3001 3003 * priority, and cpu binding list, as well as the default port VID
3002 3004 * used by bridging. These properties are consumed by the MAC layer
3003 3005 * itself and not passed down to the driver. For resource control
3004 3006 * properties, this function invokes mac_set_resources() which will
3005 3007 * cache the property value in mac_impl_t and may call
3006 3008 * mac_client_set_resource() to update property value of the primary
3007 3009 * mac client, if it exists.
3008 3010 *
3009 3011 * - Properties which act on the hardware and must be passed to the
3010 3012 * driver, such as MTU, through the driver's mc_setprop() entry point.
3011 3013 */
3012 3014 int
3013 3015 mac_set_prop(mac_handle_t mh, mac_prop_id_t id, char *name, void *val,
3014 3016 uint_t valsize)
3015 3017 {
3016 3018 int err = ENOTSUP;
3017 3019 mac_impl_t *mip = (mac_impl_t *)mh;
3018 3020
3019 3021 ASSERT(MAC_PERIM_HELD(mh));
3020 3022
3021 3023 switch (id) {
3022 3024 case MAC_PROP_RESOURCE: {
3023 3025 mac_resource_props_t *mrp;
3024 3026
3025 3027 /* call mac_set_resources() for MAC properties */
3026 3028 ASSERT(valsize >= sizeof (mac_resource_props_t));
3027 3029 mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP);
3028 3030 bcopy(val, mrp, sizeof (*mrp));
3029 3031 err = mac_set_resources(mh, mrp);
3030 3032 kmem_free(mrp, sizeof (*mrp));
3031 3033 break;
3032 3034 }
3033 3035
3034 3036 case MAC_PROP_PVID:
3035 3037 ASSERT(valsize >= sizeof (uint16_t));
3036 3038 if (mip->mi_state_flags & MIS_IS_VNIC)
3037 3039 return (EINVAL);
3038 3040 err = mac_set_pvid(mh, *(uint16_t *)val);
3039 3041 break;
3040 3042
3041 3043 case MAC_PROP_MTU: {
3042 3044 uint32_t mtu;
3043 3045
3044 3046 ASSERT(valsize >= sizeof (uint32_t));
3045 3047 bcopy(val, &mtu, sizeof (mtu));
3046 3048 err = mac_set_mtu(mh, mtu, NULL);
3047 3049 break;
3048 3050 }
3049 3051
3050 3052 case MAC_PROP_LLIMIT:
3051 3053 case MAC_PROP_LDECAY: {
3052 3054 uint32_t learnval;
3053 3055
3054 3056 if (valsize < sizeof (learnval) ||
3055 3057 (mip->mi_state_flags & MIS_IS_VNIC))
3056 3058 return (EINVAL);
3057 3059 bcopy(val, &learnval, sizeof (learnval));
3058 3060 if (learnval == 0 && id == MAC_PROP_LDECAY)
3059 3061 return (EINVAL);
3060 3062 if (id == MAC_PROP_LLIMIT)
3061 3063 mip->mi_llimit = learnval;
3062 3064 else
3063 3065 mip->mi_ldecay = learnval;
3064 3066 err = 0;
3065 3067 break;
3066 3068 }
3067 3069
3070 + case MAC_PROP_MACADDRESS: {
3071 + mac_addrprop_t *addrprop = val;
3072 +
3073 + if (addrprop->ma_len != mip->mi_type->mt_addr_length)
3074 + return (EINVAL);
3075 +
3076 + err = mac_unicast_primary_set(mh, addrprop->ma_addr);
3077 + break;
3078 + }
3079 +
3068 3080 default:
3069 3081 /* For other driver properties, call driver's callback */
3070 3082 if (mip->mi_callbacks->mc_callbacks & MC_SETPROP) {
3071 3083 err = mip->mi_callbacks->mc_setprop(mip->mi_driver,
3072 3084 name, id, valsize, val);
3073 3085 }
3074 3086 }
3075 3087 return (err);
3076 3088 }
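
/*
 * Illustrative sketch, not part of this change: setting the primary
 * MAC address through the new MAC_PROP_MACADDRESS case above. The
 * address and its length are assumed inputs; ma_len must match the
 * MAC type's mt_addr_length (and fit in ma_addr) or mac_set_prop()
 * fails with EINVAL.
 */
static int
set_macaddr_sketch(mac_handle_t mh, const uint8_t *addr, uint_t len)
{
	mac_impl_t	*mip = (mac_impl_t *)mh;
	mac_addrprop_t	ap;
	int		err;

	bzero(&ap, sizeof (ap));
	ap.ma_len = len;
	bcopy(addr, ap.ma_addr, len);

	i_mac_perim_enter(mip);
	err = mac_set_prop(mh, MAC_PROP_MACADDRESS, NULL, &ap, sizeof (ap));
	i_mac_perim_exit(mip);
	return (err);
}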
3077 3089
3078 3090 /*
3079 3091 * mac_get_prop() gets MAC or device driver properties.
3080 3092 *
3081 3093 * If the property is a driver property, mac_get_prop() calls driver's callback
3082 3094 * entry point to get it.
3083 3095  * If the property is a MAC property, mac_get_prop() invokes
3084 3096  * mac_get_resources(), which returns the cached value in mac_impl_t.
3085 3097 */
3086 3098 int
3087 3099 mac_get_prop(mac_handle_t mh, mac_prop_id_t id, char *name, void *val,
3088 3100 uint_t valsize)
3089 3101 {
3090 3102 int err = ENOTSUP;
3091 3103 mac_impl_t *mip = (mac_impl_t *)mh;
3092 3104 uint_t rings;
3093 3105 uint_t vlinks;
3094 3106
3095 3107 bzero(val, valsize);
3096 3108
3097 3109 switch (id) {
3098 3110 case MAC_PROP_RESOURCE: {
3099 3111 mac_resource_props_t *mrp;
3100 3112
3101 3113 /* If mac property, read from cache */
3102 3114 ASSERT(valsize >= sizeof (mac_resource_props_t));
3103 3115 mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP);
3104 3116 mac_get_resources(mh, mrp);
3105 3117 bcopy(mrp, val, sizeof (*mrp));
3106 3118 kmem_free(mrp, sizeof (*mrp));
3107 3119 return (0);
3108 3120 }
3109 3121 case MAC_PROP_RESOURCE_EFF: {
3110 3122 mac_resource_props_t *mrp;
3111 3123
3112 3124 /* If mac effective property, read from client */
3113 3125 ASSERT(valsize >= sizeof (mac_resource_props_t));
3114 3126 mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP);
3115 3127 mac_get_effective_resources(mh, mrp);
3116 3128 bcopy(mrp, val, sizeof (*mrp));
3117 3129 kmem_free(mrp, sizeof (*mrp));
3118 3130 return (0);
3119 3131 }
3120 3132
3121 3133 case MAC_PROP_PVID:
3122 3134 ASSERT(valsize >= sizeof (uint16_t));
3123 3135 if (mip->mi_state_flags & MIS_IS_VNIC)
3124 3136 return (EINVAL);
3125 3137 *(uint16_t *)val = mac_get_pvid(mh);
3126 3138 return (0);
3127 3139
3128 3140 case MAC_PROP_LLIMIT:
3129 3141 case MAC_PROP_LDECAY:
3130 3142 ASSERT(valsize >= sizeof (uint32_t));
3131 3143 if (mip->mi_state_flags & MIS_IS_VNIC)
3132 3144 return (EINVAL);
3133 3145 if (id == MAC_PROP_LLIMIT)
3134 3146 bcopy(&mip->mi_llimit, val, sizeof (mip->mi_llimit));
3135 3147 else
3136 3148 bcopy(&mip->mi_ldecay, val, sizeof (mip->mi_ldecay));
3137 3149 return (0);
3138 3150
3139 3151 case MAC_PROP_MTU: {
3140 3152 uint32_t sdu;
3141 3153
3142 3154 ASSERT(valsize >= sizeof (uint32_t));
3143 3155 mac_sdu_get2(mh, NULL, &sdu, NULL);
3144 3156 bcopy(&sdu, val, sizeof (sdu));
3145 3157
3146 3158 return (0);
3147 3159 }
3148 3160 case MAC_PROP_STATUS: {
3149 3161 link_state_t link_state;
3150 3162
3151 3163 if (valsize < sizeof (link_state))
3152 3164 return (EINVAL);
3153 3165 link_state = mac_link_get(mh);
3154 3166 bcopy(&link_state, val, sizeof (link_state));
3155 3167
3156 3168 return (0);
3157 3169 }
3158 3170
3159 3171 case MAC_PROP_MAX_RX_RINGS_AVAIL:
3160 3172 case MAC_PROP_MAX_TX_RINGS_AVAIL:
3161 3173 ASSERT(valsize >= sizeof (uint_t));
3162 3174 rings = id == MAC_PROP_MAX_RX_RINGS_AVAIL ?
3163 3175 mac_rxavail_get(mh) : mac_txavail_get(mh);
3164 3176 bcopy(&rings, val, sizeof (uint_t));
3165 3177 return (0);
3166 3178
3167 3179 case MAC_PROP_MAX_RXHWCLNT_AVAIL:
3168 3180 case MAC_PROP_MAX_TXHWCLNT_AVAIL:
3169 3181 ASSERT(valsize >= sizeof (uint_t));
3170 3182 vlinks = id == MAC_PROP_MAX_RXHWCLNT_AVAIL ?
3171 3183 mac_rxhwlnksavail_get(mh) : mac_txhwlnksavail_get(mh);
3172 3184 bcopy(&vlinks, val, sizeof (uint_t));
3173 3185 return (0);
3174 3186
3175 3187 case MAC_PROP_RXRINGSRANGE:
3176 3188 case MAC_PROP_TXRINGSRANGE:
3177 3189 /*
3178 3190 * The value for these properties are returned through
3179 3191 * the MAC_PROP_RESOURCE property.
3180 3192 */
3181 3193 return (0);
3182 3194
3195 + case MAC_PROP_MACADDRESS: {
3196 + mac_addrprop_t *addrprop = val;
3197 +
3198 + if (valsize < sizeof (mac_addrprop_t))
3199 + return (EINVAL);
3200 + mac_unicast_primary_get(mh, addrprop->ma_addr);
3201 + addrprop->ma_len = mip->mi_type->mt_addr_length;
3202 + return (0);
3203 + }
3204 +
3183 3205 default:
3184 3206 break;
3185 3207
3186 3208 }
3187 3209
3188 3210 /* If driver property, request from driver */
3189 3211 if (mip->mi_callbacks->mc_callbacks & MC_GETPROP) {
3190 3212 err = mip->mi_callbacks->mc_getprop(mip->mi_driver, name, id,
3191 3213 valsize, val);
3192 3214 }
3193 3215
3194 3216 return (err);
3195 3217 }
3196 3218
3197 3219 /*
3198 3220 * Helper function to initialize the range structure for use in
3199 3221 * mac_get_prop. If the type can be other than uint32, we can
3200 3222 * pass that as an arg.
3201 3223 */
3202 3224 static void
3203 3225 _mac_set_range(mac_propval_range_t *range, uint32_t min, uint32_t max)
3204 3226 {
3205 3227 range->mpr_count = 1;
3206 3228 range->mpr_type = MAC_PROPVAL_UINT32;
3207 3229 range->mpr_range_uint32[0].mpur_min = min;
3208 3230 range->mpr_range_uint32[0].mpur_max = max;
3209 3231 }
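/*
 * Editor's note (illustrative, not part of the webrev): _mac_set_range()
 * always produces a single uint32 range; e.g. _mac_set_range(range, 1500,
 * 9000) yields mpr_count == 1 with mpur_min == 1500 and mpur_max == 9000,
 * which is how the rings and MTU ranges below are reported.
 */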
3210 3232
3211 3233 /*
3212 3234 * Returns information about the specified property, such as default
3213 3235 * values or permissions.
3214 3236 */
3215 3237 int
3216 3238 mac_prop_info(mac_handle_t mh, mac_prop_id_t id, char *name,
3217 3239 void *default_val, uint_t default_size, mac_propval_range_t *range,
3218 3240 uint_t *perm)
3219 3241 {
3220 3242 mac_prop_info_state_t state;
3221 3243 mac_impl_t *mip = (mac_impl_t *)mh;
3222 3244 uint_t max;
3223 3245
3224 3246 /*
3225 3247 * A property is read/write by default unless the driver says
3226 3248 * otherwise.
3227 3249 */
3228 3250 if (perm != NULL)
3229 3251 *perm = MAC_PROP_PERM_RW;
3230 3252
3231 3253 if (default_val != NULL)
3232 3254 bzero(default_val, default_size);
3233 3255
3234 3256 /*
3235 3257 * First, handle framework properties for which we don't need to
3236 3258 * involve the driver.
3237 3259 */
3238 3260 switch (id) {
3239 3261 case MAC_PROP_RESOURCE:
3240 3262 case MAC_PROP_PVID:
3241 3263 case MAC_PROP_LLIMIT:
3242 3264 case MAC_PROP_LDECAY:
3243 3265 return (0);
3244 3266
3245 3267 case MAC_PROP_MAX_RX_RINGS_AVAIL:
3246 3268 case MAC_PROP_MAX_TX_RINGS_AVAIL:
3247 3269 case MAC_PROP_MAX_RXHWCLNT_AVAIL:
3248 3270 case MAC_PROP_MAX_TXHWCLNT_AVAIL:
3249 3271 if (perm != NULL)
3250 3272 *perm = MAC_PROP_PERM_READ;
3251 3273 return (0);
3252 3274
3253 3275 case MAC_PROP_RXRINGSRANGE:
3254 3276 case MAC_PROP_TXRINGSRANGE:
3255 3277 /*
3256 3278  * Currently, we support ranges only for the RX and TX rings
3257 3279  * properties. When we extend this support to maxbw, cpus, and
3258 3280  * priority, we should move this to mac_get_resources.
3259 3281  * There is no default value for RX or TX rings.
3260 3282 */
3261 3283 if ((mip->mi_state_flags & MIS_IS_VNIC) &&
3262 3284 mac_is_vnic_primary(mh)) {
3263 3285 /*
3264 3286 * We don't support setting rings for a VLAN
3265 3287 * data link because it shares its ring with the
3266 3288 * primary MAC client.
3267 3289 */
3268 3290 if (perm != NULL)
3269 3291 *perm = MAC_PROP_PERM_READ;
3270 3292 if (range != NULL)
3271 3293 range->mpr_count = 0;
3272 3294 } else if (range != NULL) {
3273 3295 if (mip->mi_state_flags & MIS_IS_VNIC)
3274 3296 mh = mac_get_lower_mac_handle(mh);
3275 3297 mip = (mac_impl_t *)mh;
3276 3298 if ((id == MAC_PROP_RXRINGSRANGE &&
3277 3299 mip->mi_rx_group_type == MAC_GROUP_TYPE_STATIC) ||
3278 3300 (id == MAC_PROP_TXRINGSRANGE &&
3279 3301 mip->mi_tx_group_type == MAC_GROUP_TYPE_STATIC)) {
3280 3302 if (id == MAC_PROP_RXRINGSRANGE) {
3281 3303 if ((mac_rxhwlnksavail_get(mh) +
3282 3304 mac_rxhwlnksrsvd_get(mh)) <= 1) {
3283 3305 /*
3284 3306 * doesn't support groups or
3285 3307 * rings
3286 3308 */
3287 3309 range->mpr_count = 0;
3288 3310 } else {
3289 3311 /*
3290 3312 * supports specifying groups,
3291 3313 * but not rings
3292 3314 */
3293 3315 _mac_set_range(range, 0, 0);
3294 3316 }
3295 3317 } else {
3296 3318 if ((mac_txhwlnksavail_get(mh) +
3297 3319 mac_txhwlnksrsvd_get(mh)) <= 1) {
3298 3320 /*
3299 3321 * doesn't support groups or
3300 3322 * rings
3301 3323 */
3302 3324 range->mpr_count = 0;
3303 3325 } else {
3304 3326 /*
3305 3327 * supports specifying groups,
3306 3328 * but not rings
3307 3329 */
3308 3330 _mac_set_range(range, 0, 0);
3309 3331 }
3310 3332 }
3311 3333 } else {
3312 3334 max = id == MAC_PROP_RXRINGSRANGE ?
3313 3335 mac_rxavail_get(mh) + mac_rxrsvd_get(mh) :
3314 3336 mac_txavail_get(mh) + mac_txrsvd_get(mh);
3315 3337 if (max <= 1) {
3316 3338 /*
3317 3339 * doesn't support groups or
3318 3340 * rings
3319 3341 */
3320 3342 range->mpr_count = 0;
3321 3343 } else {
3322 3344 /*
3323 3345 * -1 because we have to leave out the
3324 3346 * default ring.
3325 3347 */
3326 3348 _mac_set_range(range, 1, max - 1);
3327 3349 }
3328 3350 }
3329 3351 }
3330 3352 return (0);
3331 3353
3332 3354 case MAC_PROP_STATUS:
3333 3355 if (perm != NULL)
3334 3356 *perm = MAC_PROP_PERM_READ;
3335 3357 return (0);
3358 +
3359 + case MAC_PROP_MACADDRESS: {
3360 + mac_addrprop_t *defaddr = default_val;
3361 +
3362 + if (defaddr != NULL) {
3363 + if (default_size < sizeof (mac_addrprop_t))
3364 + return (EINVAL);
3365 + bcopy(mip->mi_info.mi_unicst_addr, defaddr->ma_addr,
3366 + mip->mi_type->mt_addr_length);
3367 + defaddr->ma_len = mip->mi_type->mt_addr_length;
3368 + }
3369 + return (0);
3370 + }
3336 3371 }
3337 3372
3338 3373 /*
3339 3374 * Get the property info from the driver if it implements the
3340 3375 * property info entry point.
3341 3376 */
3342 3377 bzero(&state, sizeof (state));
3343 3378
3344 3379 if (mip->mi_callbacks->mc_callbacks & MC_PROPINFO) {
3345 3380 state.pr_default = default_val;
3346 3381 state.pr_default_size = default_size;
3347 3382
3348 3383 /*
3349 3384 * The caller specifies the maximum number of ranges
3350 3385  * it can accommodate using mpr_count. We don't touch
3351 3386  * this value until the driver returns from its
3352 3387  * mc_propinfo() callback, and ensure we don't exceed
3353 3388  * this number of ranges as the driver defines the
3354 3389  * supported ranges from its mc_propinfo().
3355 3390 *
3356 3391 * pr_range_cur_count keeps track of how many ranges
3357 3392 * were defined by the driver from its mc_propinfo()
3358 3393 * entry point.
3359 3394 *
3360 3395 * On exit, the user-specified range mpr_count returns
3361 3396 * the number of ranges specified by the driver on
3362 3397 * success, or the number of ranges it wanted to
3363 3398 * define if that number of ranges could not be
3364 3399  * accommodated by the specified range structure. In
3365 3400 * the latter case, the caller will be able to
3366 3401 * allocate a larger range structure, and query the
3367 3402 * property again.
3368 3403 */
3369 3404 state.pr_range_cur_count = 0;
3370 3405 state.pr_range = range;
3371 3406
3372 3407 mip->mi_callbacks->mc_propinfo(mip->mi_driver, name, id,
3373 3408 (mac_prop_info_handle_t)&state);
3374 3409
3375 3410 if (state.pr_flags & MAC_PROP_INFO_RANGE)
3376 3411 range->mpr_count = state.pr_range_cur_count;
3377 3412
3378 3413 /*
3379 3414 * The operation could fail if the buffer supplied by
3380 3415 * the user was too small for the range or default
3381 3416 * value of the property.
3382 3417 */
3383 3418 if (state.pr_errno != 0)
3384 3419 return (state.pr_errno);
3385 3420
3386 3421 if (perm != NULL && state.pr_flags & MAC_PROP_INFO_PERM)
3387 3422 *perm = state.pr_perm;
3388 3423 }
3389 3424
3390 3425 /*
3391 3426 * The MAC layer may want to provide default values or allowed
3392 3427 * ranges for properties if the driver does not provide a
3393 3428  * property info entry point, or if that entry point exists but
3394 3429  * did not provide a default value or allowed ranges for
3395 3430 * that property.
3396 3431 */
3397 3432 switch (id) {
3398 3433 case MAC_PROP_MTU: {
3399 3434 uint32_t sdu;
3400 3435
3401 3436 mac_sdu_get2(mh, NULL, &sdu, NULL);
3402 3437
3403 3438 if (range != NULL && !(state.pr_flags &
3404 3439 MAC_PROP_INFO_RANGE)) {
3405 3440 /* MTU range */
3406 3441 _mac_set_range(range, sdu, sdu);
3407 3442 }
3408 3443
3409 3444 if (default_val != NULL && !(state.pr_flags &
3410 3445 MAC_PROP_INFO_DEFAULT)) {
3411 3446 if (mip->mi_info.mi_media == DL_ETHER)
3412 3447 sdu = ETHERMTU;
3413 3448 /* default MTU value */
3414 3449 bcopy(&sdu, default_val, sizeof (sdu));
3415 3450 }
3416 3451 }
3417 3452 }
3418 3453
3419 3454 return (0);
3420 3455 }
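/*
 * Editor's illustrative sketch (not part of the webrev): the shape of a
 * driver mc_propinfo() entry point as consumed above. The function name
 * and the MTU bounds are hypothetical; the mac_prop_info_set_*() helpers
 * are the standard GLDv3 interfaces that fill in state.pr_default,
 * state.pr_range and state.pr_perm.
 */
static void
example_m_propinfo(void *arg, const char *name, mac_prop_id_t id,
    mac_prop_info_handle_t prh)
{
	switch (id) {
	case MAC_PROP_MTU:
		/* recorded through pr_range/pr_range_cur_count above */
		mac_prop_info_set_range_uint32(prh, 1500, 9000);
		break;
	case MAC_PROP_DUPLEX:
		/* merged into *perm when MAC_PROP_INFO_PERM is set */
		mac_prop_info_set_perm(prh, MAC_PROP_PERM_READ);
		break;
	default:
		break;
	}
}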
3421 3456
3422 3457 int
3423 3458 mac_fastpath_disable(mac_handle_t mh)
3424 3459 {
3425 3460 mac_impl_t *mip = (mac_impl_t *)mh;
3426 3461
3427 3462 if ((mip->mi_state_flags & MIS_LEGACY) == 0)
3428 3463 return (0);
3429 3464
3430 3465 return (mip->mi_capab_legacy.ml_fastpath_disable(mip->mi_driver));
3431 3466 }
3432 3467
3433 3468 void
3434 3469 mac_fastpath_enable(mac_handle_t mh)
3435 3470 {
3436 3471 mac_impl_t *mip = (mac_impl_t *)mh;
3437 3472
3438 3473 if ((mip->mi_state_flags & MIS_LEGACY) == 0)
3439 3474 return;
3440 3475
3441 3476 mip->mi_capab_legacy.ml_fastpath_enable(mip->mi_driver);
3442 3477 }
3443 3478
3444 3479 void
3445 3480 mac_register_priv_prop(mac_impl_t *mip, char **priv_props)
3446 3481 {
3447 3482 uint_t nprops, i;
3448 3483
3449 3484 if (priv_props == NULL)
3450 3485 return;
3451 3486
3452 3487 nprops = 0;
3453 3488 while (priv_props[nprops] != NULL)
3454 3489 nprops++;
3455 3490 if (nprops == 0)
3456 3491 return;
3457 3492
3458 3493
3459 3494 mip->mi_priv_prop = kmem_zalloc(nprops * sizeof (char *), KM_SLEEP);
3460 3495
3461 3496 for (i = 0; i < nprops; i++) {
3462 3497 mip->mi_priv_prop[i] = kmem_zalloc(MAXLINKPROPNAME, KM_SLEEP);
3463 3498 (void) strlcpy(mip->mi_priv_prop[i], priv_props[i],
3464 3499 MAXLINKPROPNAME);
3465 3500 }
3466 3501
3467 3502 mip->mi_priv_prop_count = nprops;
3468 3503 }
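/*
 * Editor's illustrative sketch (not part of the webrev): a driver hands a
 * NULL-terminated list of private property names to the framework through
 * the m_priv_props field of its mac_register_t, which is what
 * mac_register_priv_prop() above copies. The names are hypothetical.
 */
static char *example_priv_props[] = {
	"_tx_copy_thresh",
	"_intr_throttling",
	NULL			/* the list must be NULL-terminated */
};
/* at attach time: mregp->m_priv_props = example_priv_props; */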
3469 3504
3470 3505 void
3471 3506 mac_unregister_priv_prop(mac_impl_t *mip)
3472 3507 {
3473 3508 uint_t i;
3474 3509
3475 3510 if (mip->mi_priv_prop_count == 0) {
3476 3511 ASSERT(mip->mi_priv_prop == NULL);
3477 3512 return;
3478 3513 }
3479 3514
3480 3515 for (i = 0; i < mip->mi_priv_prop_count; i++)
3481 3516 kmem_free(mip->mi_priv_prop[i], MAXLINKPROPNAME);
3482 3517 kmem_free(mip->mi_priv_prop, mip->mi_priv_prop_count *
3483 3518 sizeof (char *));
3484 3519
3485 3520 mip->mi_priv_prop = NULL;
3486 3521 mip->mi_priv_prop_count = 0;
3487 3522 }
3488 3523
3489 3524 /*
3490 3525  * mac_ring_t 'mr' macros. Some rogue drivers may access the ring structure
3491 3526  * (by invoking mac_rx()) even after processing mac_stop_ring(). In such
3492 3527  * cases, if MAC frees the ring structure after mac_stop_ring(), any
3493 3528  * illegal access to the ring structure coming from the driver will panic
3494 3529  * the system. In order to protect the system from such inadvertent access,
3495 3530  * we maintain a cache of rings in the mac_impl_t after they get freed up.
3496 3531  * When packets are received on freed-up rings, MAC (through the generation
3497 3532 * count mechanism) will drop such packets.
3498 3533 */
3499 3534 static mac_ring_t *
3500 3535 mac_ring_alloc(mac_impl_t *mip)
3501 3536 {
3502 3537 mac_ring_t *ring;
3503 3538
3504 3539 mutex_enter(&mip->mi_ring_lock);
3505 3540 if (mip->mi_ring_freelist != NULL) {
3506 3541 ring = mip->mi_ring_freelist;
3507 3542 mip->mi_ring_freelist = ring->mr_next;
3508 3543 bzero(ring, sizeof (mac_ring_t));
3509 3544 mutex_exit(&mip->mi_ring_lock);
3510 3545 } else {
3511 3546 mutex_exit(&mip->mi_ring_lock);
3512 3547 ring = kmem_cache_alloc(mac_ring_cache, KM_SLEEP);
3513 3548 }
3514 3549 ASSERT((ring != NULL) && (ring->mr_state == MR_FREE));
3515 3550 return (ring);
3516 3551 }
3517 3552
3518 3553 static void
3519 3554 mac_ring_free(mac_impl_t *mip, mac_ring_t *ring)
3520 3555 {
3521 3556 ASSERT(ring->mr_state == MR_FREE);
3522 3557
3523 3558 mutex_enter(&mip->mi_ring_lock);
3524 3559 ring->mr_state = MR_FREE;
3525 3560 ring->mr_flag = 0;
3526 3561 ring->mr_next = mip->mi_ring_freelist;
3527 3562 ring->mr_mip = NULL;
3528 3563 mip->mi_ring_freelist = ring;
3529 3564 mac_ring_stat_delete(ring);
3530 3565 mutex_exit(&mip->mi_ring_lock);
3531 3566 }
3532 3567
3533 3568 static void
3534 3569 mac_ring_freeall(mac_impl_t *mip)
3535 3570 {
3536 3571 mac_ring_t *ring_next;
3537 3572 mutex_enter(&mip->mi_ring_lock);
3538 3573 mac_ring_t *ring = mip->mi_ring_freelist;
3539 3574 while (ring != NULL) {
3540 3575 ring_next = ring->mr_next;
3541 3576 kmem_cache_free(mac_ring_cache, ring);
3542 3577 ring = ring_next;
3543 3578 }
3544 3579 mip->mi_ring_freelist = NULL;
3545 3580 mutex_exit(&mip->mi_ring_lock);
3546 3581 }
3547 3582
3548 3583 int
3549 3584 mac_start_ring(mac_ring_t *ring)
3550 3585 {
3551 3586 int rv = 0;
3552 3587
3553 3588 ASSERT(ring->mr_state == MR_FREE);
3554 3589
3555 3590 if (ring->mr_start != NULL) {
3556 3591 rv = ring->mr_start(ring->mr_driver, ring->mr_gen_num);
3557 3592 if (rv != 0)
3558 3593 return (rv);
3559 3594 }
3560 3595
3561 3596 ring->mr_state = MR_INUSE;
3562 3597 return (rv);
3563 3598 }
3564 3599
3565 3600 void
3566 3601 mac_stop_ring(mac_ring_t *ring)
3567 3602 {
3568 3603 ASSERT(ring->mr_state == MR_INUSE);
3569 3604
3570 3605 if (ring->mr_stop != NULL)
3571 3606 ring->mr_stop(ring->mr_driver);
3572 3607
3573 3608 ring->mr_state = MR_FREE;
3574 3609
3575 3610 /*
3576 3611 * Increment the ring generation number for this ring.
3577 3612 */
3578 3613 ring->mr_gen_num++;
3579 3614 }
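/*
 * Editor's note (illustrative, not part of the webrev): mr_gen_num is the
 * generation count referred to in the ring-cache comment above. A packet
 * indicated by the driver with a stale generation number after
 * mac_stop_ring() can be recognized and dropped instead of touching
 * reused ring state.
 */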
3580 3615
3581 3616 int
3582 3617 mac_start_group(mac_group_t *group)
3583 3618 {
3584 3619 int rv = 0;
3585 3620
3586 3621 if (group->mrg_start != NULL)
3587 3622 rv = group->mrg_start(group->mrg_driver);
3588 3623
3589 3624 return (rv);
3590 3625 }
3591 3626
3592 3627 void
3593 3628 mac_stop_group(mac_group_t *group)
3594 3629 {
3595 3630 if (group->mrg_stop != NULL)
3596 3631 group->mrg_stop(group->mrg_driver);
3597 3632 }
3598 3633
3599 3634 /*
3600 3635 * Called from mac_start() on the default Rx group. Broadcast and multicast
3601 3636 * packets are received only on the default group. Hence the default group
3602 3637 * needs to be up even if the primary client is not up, for the other groups
3603 3638 * to be functional. We do this by calling this function at mac_start time
3604 3639 * itself. However the broadcast packets that are received can't make their
3605 3640 * way beyond mac_rx until a mac client creates a broadcast flow.
3606 3641 */
3607 3642 static int
3608 3643 mac_start_group_and_rings(mac_group_t *group)
3609 3644 {
3610 3645 mac_ring_t *ring;
3611 3646 int rv = 0;
3612 3647
3613 3648 ASSERT(group->mrg_state == MAC_GROUP_STATE_REGISTERED);
3614 3649 if ((rv = mac_start_group(group)) != 0)
3615 3650 return (rv);
3616 3651
3617 3652 for (ring = group->mrg_rings; ring != NULL; ring = ring->mr_next) {
3618 3653 ASSERT(ring->mr_state == MR_FREE);
3619 3654 if ((rv = mac_start_ring(ring)) != 0)
3620 3655 goto error;
3621 3656 ring->mr_classify_type = MAC_SW_CLASSIFIER;
3622 3657 }
3623 3658 return (0);
3624 3659
3625 3660 error:
3626 3661 mac_stop_group_and_rings(group);
3627 3662 return (rv);
3628 3663 }
3629 3664
3630 3665 /* Called from mac_stop on the default Rx group */
3631 3666 static void
3632 3667 mac_stop_group_and_rings(mac_group_t *group)
3633 3668 {
3634 3669 mac_ring_t *ring;
3635 3670
3636 3671 for (ring = group->mrg_rings; ring != NULL; ring = ring->mr_next) {
3637 3672 if (ring->mr_state != MR_FREE) {
3638 3673 mac_stop_ring(ring);
3639 3674 ring->mr_flag = 0;
3640 3675 ring->mr_classify_type = MAC_NO_CLASSIFIER;
3641 3676 }
3642 3677 }
3643 3678 mac_stop_group(group);
3644 3679 }
3645 3680
3646 3681
3647 3682 static mac_ring_t *
3648 3683 mac_init_ring(mac_impl_t *mip, mac_group_t *group, int index,
3649 3684 mac_capab_rings_t *cap_rings)
3650 3685 {
3651 3686 mac_ring_t *ring, *rnext;
3652 3687 mac_ring_info_t ring_info;
3653 3688 ddi_intr_handle_t ddi_handle;
3654 3689
3655 3690 ring = mac_ring_alloc(mip);
3656 3691
3657 3692 /* Prepare basic information of ring */
3658 3693
3659 3694 /*
3660 3695 * Ring index is numbered to be unique across a particular device.
3661 3696  * Ring index computation makes the following assumptions:
3662 3697 * - For drivers with static grouping (e.g. ixgbe, bge),
3663 3698 * ring index exchanged with the driver (e.g. during mr_rget)
3664 3699 * is unique only across the group the ring belongs to.
3665 3700 * - Drivers with dynamic grouping (e.g. nxge), start
3666 3701 * with single group (mrg_index = 0).
3667 3702 */
3668 3703 ring->mr_index = group->mrg_index * group->mrg_info.mgi_count + index;
3669 3704 ring->mr_type = group->mrg_type;
3670 3705 ring->mr_gh = (mac_group_handle_t)group;
3671 3706
3672 3707 /* Insert the new ring to the list. */
3673 3708 ring->mr_next = group->mrg_rings;
3674 3709 group->mrg_rings = ring;
3675 3710
3676 3711 /* Zero to reuse the info data structure */
3677 3712 bzero(&ring_info, sizeof (ring_info));
3678 3713
3679 3714 /* Query ring information from driver */
3680 3715 cap_rings->mr_rget(mip->mi_driver, group->mrg_type, group->mrg_index,
3681 3716 index, &ring_info, (mac_ring_handle_t)ring);
3682 3717
3683 3718 ring->mr_info = ring_info;
3684 3719
3685 3720 /*
3686 3721 * The interrupt handle could be shared among multiple rings.
3687 3722 * Thus if there is a bunch of rings that are sharing an
3688 3723 * interrupt, then only one ring among the bunch will be made
3689 3724  * available for interrupt re-targeting; the rest will have the
3690 3725  * ddi_shared flag set to TRUE and will not be available for
3691 3726  * interrupt re-targeting.
3692 3727 */
3693 3728 if ((ddi_handle = ring_info.mri_intr.mi_ddi_handle) != NULL) {
3694 3729 rnext = ring->mr_next;
3695 3730 while (rnext != NULL) {
3696 3731 if (rnext->mr_info.mri_intr.mi_ddi_handle ==
3697 3732 ddi_handle) {
3698 3733 /*
3699 3734 * If default ring (mr_index == 0) is part
3700 3735 * of a group of rings sharing an
3701 3736 * interrupt, then set ddi_shared flag for
3702 3737 * the default ring and give another ring
3703 3738 * the chance to be re-targeted.
3704 3739 */
3705 3740 if (rnext->mr_index == 0 &&
3706 3741 !rnext->mr_info.mri_intr.mi_ddi_shared) {
3707 3742 rnext->mr_info.mri_intr.mi_ddi_shared =
3708 3743 B_TRUE;
3709 3744 } else {
3710 3745 ring->mr_info.mri_intr.mi_ddi_shared =
3711 3746 B_TRUE;
3712 3747 }
3713 3748 break;
3714 3749 }
3715 3750 rnext = rnext->mr_next;
3716 3751 }
3717 3752 /*
3718 3753 * If rnext is NULL, then no matching ddi_handle was found.
3719 3754 * Rx rings get registered first. So if this is a Tx ring,
3720 3755 * then go through all the Rx rings and see if there is a
3721 3756 * matching ddi handle.
3722 3757 */
3723 3758 if (rnext == NULL && ring->mr_type == MAC_RING_TYPE_TX) {
3724 3759 mac_compare_ddi_handle(mip->mi_rx_groups,
3725 3760 mip->mi_rx_group_count, ring);
3726 3761 }
3727 3762 }
3728 3763
3729 3764 /* Update ring's status */
3730 3765 ring->mr_state = MR_FREE;
3731 3766 ring->mr_flag = 0;
3732 3767
3733 3768 /* Update the ring count of the group */
3734 3769 group->mrg_cur_count++;
3735 3770
3736 3771 /* Create per ring kstats */
3737 3772 if (ring->mr_stat != NULL) {
3738 3773 ring->mr_mip = mip;
3739 3774 mac_ring_stat_create(ring);
3740 3775 }
3741 3776
3742 3777 return (ring);
3743 3778 }
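/*
 * Editor's note (illustrative, not part of the webrev): with static
 * grouping, the mr_index computation above spreads the driver-relative
 * index across the device; e.g. mrg_index == 2, mgi_count == 4 and
 * driver index 1 give mr_index = 2 * 4 + 1 = 9.
 */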
3744 3779
3745 3780 /*
3746 3781 * Rings are chained together for easy regrouping.
3747 3782 */
3748 3783 static void
3749 3784 mac_init_group(mac_impl_t *mip, mac_group_t *group, int size,
3750 3785 mac_capab_rings_t *cap_rings)
3751 3786 {
3752 3787 int index;
3753 3788
3754 3789 /*
3755 3790 * Initialize all ring members of this group. Size of zero will not
3756 3791 * enter the loop, so it's safe for initializing an empty group.
3757 3792 */
3758 3793 for (index = size - 1; index >= 0; index--)
3759 3794 (void) mac_init_ring(mip, group, index, cap_rings);
3760 3795 }
3761 3796
3762 3797 int
3763 3798 mac_init_rings(mac_impl_t *mip, mac_ring_type_t rtype)
3764 3799 {
3765 3800 mac_capab_rings_t *cap_rings;
3766 3801 mac_group_t *group;
3767 3802 mac_group_t *groups;
3768 3803 mac_group_info_t group_info;
3769 3804 uint_t group_free = 0;
3770 3805 uint_t ring_left;
3771 3806 mac_ring_t *ring;
3772 3807 int g;
3773 3808 int err = 0;
3774 3809 uint_t grpcnt;
3775 3810 boolean_t pseudo_txgrp = B_FALSE;
3776 3811
3777 3812 switch (rtype) {
3778 3813 case MAC_RING_TYPE_RX:
3779 3814 ASSERT(mip->mi_rx_groups == NULL);
3780 3815
3781 3816 cap_rings = &mip->mi_rx_rings_cap;
3782 3817 cap_rings->mr_type = MAC_RING_TYPE_RX;
3783 3818 break;
3784 3819 case MAC_RING_TYPE_TX:
3785 3820 ASSERT(mip->mi_tx_groups == NULL);
3786 3821
3787 3822 cap_rings = &mip->mi_tx_rings_cap;
3788 3823 cap_rings->mr_type = MAC_RING_TYPE_TX;
3789 3824 break;
3790 3825 default:
3791 3826 ASSERT(B_FALSE);
3792 3827 }
3793 3828
3794 3829 if (!i_mac_capab_get((mac_handle_t)mip, MAC_CAPAB_RINGS, cap_rings))
3795 3830 return (0);
3796 3831 grpcnt = cap_rings->mr_gnum;
3797 3832
3798 3833 /*
3799 3834 * If we have multiple TX rings, but only one TX group, we can
3800 3835 * create pseudo TX groups (one per TX ring) in the MAC layer,
3801 3836 * except for an aggr. For an aggr currently we maintain only
3802 3837  * one group with all the rings (for all its ports); going
3803 3838  * forward we might change this.
3804 3839 */
3805 3840 if (rtype == MAC_RING_TYPE_TX &&
3806 3841 cap_rings->mr_gnum == 0 && cap_rings->mr_rnum > 0 &&
3807 3842 (mip->mi_state_flags & MIS_IS_AGGR) == 0) {
3808 3843 /*
3809 3844 * The -1 here is because we create a default TX group
3810 3845 * with all the rings in it.
3811 3846 */
3812 3847 grpcnt = cap_rings->mr_rnum - 1;
3813 3848 pseudo_txgrp = B_TRUE;
3814 3849 }
3815 3850
3816 3851 /*
3817 3852 * Allocate a contiguous buffer for all groups.
3818 3853 */
3819 3854 groups = kmem_zalloc(sizeof (mac_group_t) * (grpcnt + 1), KM_SLEEP);
3820 3855
3821 3856 ring_left = cap_rings->mr_rnum;
3822 3857
3823 3858 /*
3824 3859 * Get all ring groups if any, and get their ring members
3825 3860 * if any.
3826 3861 */
3827 3862 for (g = 0; g < grpcnt; g++) {
3828 3863 group = groups + g;
3829 3864
3830 3865 /* Prepare basic information of the group */
3831 3866 group->mrg_index = g;
3832 3867 group->mrg_type = rtype;
3833 3868 group->mrg_state = MAC_GROUP_STATE_UNINIT;
3834 3869 group->mrg_mh = (mac_handle_t)mip;
3835 3870 group->mrg_next = group + 1;
3836 3871
3837 3872 /* Zero to reuse the info data structure */
3838 3873 bzero(&group_info, sizeof (group_info));
3839 3874
3840 3875 if (pseudo_txgrp) {
3841 3876 /*
3842 3877 * This is a pseudo group that we created, apart
3843 3878 * from setting the state there is nothing to be
3844 3879 * done.
3845 3880 */
3846 3881 group->mrg_state = MAC_GROUP_STATE_REGISTERED;
3847 3882 group_free++;
3848 3883 continue;
3849 3884 }
3850 3885 /* Query group information from driver */
3851 3886 cap_rings->mr_gget(mip->mi_driver, rtype, g, &group_info,
3852 3887 (mac_group_handle_t)group);
3853 3888
3854 3889 switch (cap_rings->mr_group_type) {
3855 3890 case MAC_GROUP_TYPE_DYNAMIC:
3856 3891 if (cap_rings->mr_gaddring == NULL ||
3857 3892 cap_rings->mr_gremring == NULL) {
3858 3893 DTRACE_PROBE3(
3859 3894 mac__init__rings_no_addremring,
3860 3895 char *, mip->mi_name,
3861 3896 mac_group_add_ring_t,
3862 3897 cap_rings->mr_gaddring,
3863 3898 mac_group_add_ring_t,
3864 3899 cap_rings->mr_gremring);
3865 3900 err = EINVAL;
3866 3901 goto bail;
3867 3902 }
3868 3903
3869 3904 switch (rtype) {
3870 3905 case MAC_RING_TYPE_RX:
3871 3906 /*
3872 3907 * The first RX group must have non-zero
3873 3908 * rings, and the following groups must
3874 3909 * have zero rings.
3875 3910 */
3876 3911 if (g == 0 && group_info.mgi_count == 0) {
3877 3912 DTRACE_PROBE1(
3878 3913 mac__init__rings__rx__def__zero,
3879 3914 char *, mip->mi_name);
3880 3915 err = EINVAL;
3881 3916 goto bail;
3882 3917 }
3883 3918 if (g > 0 && group_info.mgi_count != 0) {
3884 3919 DTRACE_PROBE3(
3885 3920 mac__init__rings__rx__nonzero,
3886 3921 char *, mip->mi_name,
3887 3922 int, g, int, group_info.mgi_count);
3888 3923 err = EINVAL;
3889 3924 goto bail;
3890 3925 }
3891 3926 break;
3892 3927 case MAC_RING_TYPE_TX:
3893 3928 /*
3894 3929 * All TX ring groups must have zero rings.
3895 3930 */
3896 3931 if (group_info.mgi_count != 0) {
3897 3932 DTRACE_PROBE3(
3898 3933 mac__init__rings__tx__nonzero,
3899 3934 char *, mip->mi_name,
3900 3935 int, g, int, group_info.mgi_count);
3901 3936 err = EINVAL;
3902 3937 goto bail;
3903 3938 }
3904 3939 break;
3905 3940 }
3906 3941 break;
3907 3942 case MAC_GROUP_TYPE_STATIC:
3908 3943 /*
3909 3944 * Note that an empty group is allowed, e.g., an aggr
3910 3945 * would start with an empty group.
3911 3946 */
3912 3947 break;
3913 3948 default:
3914 3949 /* unknown group type */
3915 3950 DTRACE_PROBE2(mac__init__rings__unknown__type,
3916 3951 char *, mip->mi_name,
3917 3952 int, cap_rings->mr_group_type);
3918 3953 err = EINVAL;
3919 3954 goto bail;
3920 3955 }
3921 3956
3922 3957
3923 3958 /*
3924 3959 * Driver must register group->mgi_addmac/remmac() for rx groups
3925 3960 * to support multiple MAC addresses.
3926 3961 */
3927 3962 if (rtype == MAC_RING_TYPE_RX) {
3928 3963 if ((group_info.mgi_addmac == NULL) ||
3929 3964 (group_info.mgi_remmac == NULL)) {
3930 3965 goto bail;
3931 3966 }
3932 3967 }
3933 3968
3934 3969 /* Cache driver-supplied information */
3935 3970 group->mrg_info = group_info;
3936 3971
3937 3972 /* Update the group's status and group count. */
3938 3973 mac_set_group_state(group, MAC_GROUP_STATE_REGISTERED);
3939 3974 group_free++;
3940 3975
3941 3976 group->mrg_rings = NULL;
3942 3977 group->mrg_cur_count = 0;
3943 3978 mac_init_group(mip, group, group_info.mgi_count, cap_rings);
3944 3979 ring_left -= group_info.mgi_count;
3945 3980
3946 3981 /* The current group size should be equal to default value */
3947 3982 ASSERT(group->mrg_cur_count == group_info.mgi_count);
3948 3983 }
3949 3984
3950 3985 /* Build up a dummy group for free resources as a pool */
3951 3986 group = groups + grpcnt;
3952 3987
3953 3988 /* Prepare basic information of the group */
3954 3989 group->mrg_index = -1;
3955 3990 group->mrg_type = rtype;
3956 3991 group->mrg_state = MAC_GROUP_STATE_UNINIT;
3957 3992 group->mrg_mh = (mac_handle_t)mip;
3958 3993 group->mrg_next = NULL;
3959 3994
3960 3995 /*
3961 3996  * If there are ungrouped rings, place the remaining rings in this
3962 3997  * dummy group as a free pool.
3963 3998 */
3964 3999 if (ring_left != 0) {
3965 4000 group->mrg_rings = NULL;
3966 4001 group->mrg_cur_count = 0;
3967 4002 mac_init_group(mip, group, ring_left, cap_rings);
3968 4003
3969 4004 /* The current group size should be equal to ring_left */
3970 4005 ASSERT(group->mrg_cur_count == ring_left);
3971 4006
3972 4007 ring_left = 0;
3973 4008
3974 4009 /* Update this group's status */
3975 4010 mac_set_group_state(group, MAC_GROUP_STATE_REGISTERED);
3976 4011 } else
3977 4012 group->mrg_rings = NULL;
3978 4013
3979 4014 ASSERT(ring_left == 0);
3980 4015
3981 4016 bail:
3982 4017
3983 4018 /* Cache other important information to finalize the initialization */
3984 4019 switch (rtype) {
3985 4020 case MAC_RING_TYPE_RX:
3986 4021 mip->mi_rx_group_type = cap_rings->mr_group_type;
3987 4022 mip->mi_rx_group_count = cap_rings->mr_gnum;
3988 4023 mip->mi_rx_groups = groups;
3989 4024 mip->mi_rx_donor_grp = groups;
3990 4025 if (mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) {
3991 4026 /*
3992 4027 * The default ring is reserved since it is
3993 4028  * used for handling broadcast etc. packets.
3994 4029 */
3995 4030 mip->mi_rxrings_avail =
3996 4031 mip->mi_rx_groups->mrg_cur_count - 1;
3997 4032 mip->mi_rxrings_rsvd = 1;
3998 4033 }
3999 4034 /*
4000 4035 * The default group cannot be reserved. It is used by
4001 4036 * all the clients that do not have an exclusive group.
4002 4037 */
4003 4038 mip->mi_rxhwclnt_avail = mip->mi_rx_group_count - 1;
4004 4039 mip->mi_rxhwclnt_used = 1;
4005 4040 break;
4006 4041 case MAC_RING_TYPE_TX:
4007 4042 mip->mi_tx_group_type = pseudo_txgrp ? MAC_GROUP_TYPE_DYNAMIC :
4008 4043 cap_rings->mr_group_type;
4009 4044 mip->mi_tx_group_count = grpcnt;
4010 4045 mip->mi_tx_group_free = group_free;
4011 4046 mip->mi_tx_groups = groups;
4012 4047
4013 4048 group = groups + grpcnt;
4014 4049 ring = group->mrg_rings;
4015 4050 /*
4016 4051 * The ring can be NULL in the case of aggr. Aggr will
4017 4052 * have an empty Tx group which will get populated
4018 4053 * later when pseudo Tx rings are added after
4019 4054 * mac_register() is done.
4020 4055 */
4021 4056 if (ring == NULL) {
4022 4057 ASSERT(mip->mi_state_flags & MIS_IS_AGGR);
4023 4058 /*
4024 4059 * pass the group to aggr so it can add Tx
4025 4060 * rings to the group later.
4026 4061 */
4027 4062 cap_rings->mr_gget(mip->mi_driver, rtype, 0, NULL,
4028 4063 (mac_group_handle_t)group);
4029 4064 /*
4030 4065 * Even though there are no rings at this time
4031 4066 * (rings will come later), set the group
4032 4067 * state to registered.
4033 4068 */
4034 4069 group->mrg_state = MAC_GROUP_STATE_REGISTERED;
4035 4070 } else {
4036 4071 /*
4037 4072 * Ring 0 is used as the default one and it could be
4038 4073 * assigned to a client as well.
4039 4074 */
4040 4075 while ((ring->mr_index != 0) && (ring->mr_next != NULL))
4041 4076 ring = ring->mr_next;
4042 4077 ASSERT(ring->mr_index == 0);
4043 4078 mip->mi_default_tx_ring = (mac_ring_handle_t)ring;
4044 4079 }
4045 4080 if (mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC)
4046 4081 mip->mi_txrings_avail = group->mrg_cur_count - 1;
4047 4082 /*
4048 4083 * The default ring cannot be reserved.
4049 4084 */
4050 4085 mip->mi_txrings_rsvd = 1;
4051 4086 /*
4052 4087 * The default group cannot be reserved. It will be shared
4053 4088 * by clients that do not have an exclusive group.
4054 4089 */
4055 4090 mip->mi_txhwclnt_avail = mip->mi_tx_group_count;
4056 4091 mip->mi_txhwclnt_used = 1;
4057 4092 break;
4058 4093 default:
4059 4094 ASSERT(B_FALSE);
4060 4095 }
4061 4096
4062 4097 if (err != 0)
4063 4098 mac_free_rings(mip, rtype);
4064 4099
4065 4100 return (err);
4066 4101 }
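/*
 * Editor's note (illustrative, not part of the webrev): as an example of
 * the pseudo TX group path above, a non-aggr driver advertising
 * mr_rnum == 8 TX rings and mr_gnum == 0 groups gets grpcnt = 7 pseudo
 * groups plus the trailing dummy group, which initially holds all eight
 * rings and supplies the default TX ring (mr_index == 0).
 */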
4067 4102
4068 4103 /*
4069 4104  * The ddi interrupt handle could be shared among rings. If so, compare
4070 4105  * the new ring's ddi handle with the existing ones and set the
4071 4106  * ddi_shared flag.
4072 4107 */
4073 4108 void
4074 4109 mac_compare_ddi_handle(mac_group_t *groups, uint_t grpcnt, mac_ring_t *cring)
4075 4110 {
4076 4111 mac_group_t *group;
4077 4112 mac_ring_t *ring;
4078 4113 ddi_intr_handle_t ddi_handle;
4079 4114 int g;
4080 4115
4081 4116 ddi_handle = cring->mr_info.mri_intr.mi_ddi_handle;
4082 4117 for (g = 0; g < grpcnt; g++) {
4083 4118 group = groups + g;
4084 4119 for (ring = group->mrg_rings; ring != NULL;
4085 4120 ring = ring->mr_next) {
4086 4121 if (ring == cring)
4087 4122 continue;
4088 4123 if (ring->mr_info.mri_intr.mi_ddi_handle ==
4089 4124 ddi_handle) {
4090 4125 if (cring->mr_type == MAC_RING_TYPE_RX &&
4091 4126 ring->mr_index == 0 &&
4092 4127 !ring->mr_info.mri_intr.mi_ddi_shared) {
4093 4128 ring->mr_info.mri_intr.mi_ddi_shared =
4094 4129 B_TRUE;
4095 4130 } else {
4096 4131 cring->mr_info.mri_intr.mi_ddi_shared =
4097 4132 B_TRUE;
4098 4133 }
4099 4134 return;
4100 4135 }
4101 4136 }
4102 4137 }
4103 4138 }
4104 4139
4105 4140 /*
4106 4141  * Called to free all groups of a particular type (RX or TX). It's assumed
4107 4142  * that no clients are using these groups.
4108 4143 */
4109 4144 void
4110 4145 mac_free_rings(mac_impl_t *mip, mac_ring_type_t rtype)
4111 4146 {
4112 4147 mac_group_t *group, *groups;
4113 4148 uint_t group_count;
4114 4149
4115 4150 switch (rtype) {
4116 4151 case MAC_RING_TYPE_RX:
4117 4152 if (mip->mi_rx_groups == NULL)
4118 4153 return;
4119 4154
4120 4155 groups = mip->mi_rx_groups;
4121 4156 group_count = mip->mi_rx_group_count;
4122 4157
4123 4158 mip->mi_rx_groups = NULL;
4124 4159 mip->mi_rx_donor_grp = NULL;
4125 4160 mip->mi_rx_group_count = 0;
4126 4161 break;
4127 4162 case MAC_RING_TYPE_TX:
4128 4163 ASSERT(mip->mi_tx_group_count == mip->mi_tx_group_free);
4129 4164
4130 4165 if (mip->mi_tx_groups == NULL)
4131 4166 return;
4132 4167
4133 4168 groups = mip->mi_tx_groups;
4134 4169 group_count = mip->mi_tx_group_count;
4135 4170
4136 4171 mip->mi_tx_groups = NULL;
4137 4172 mip->mi_tx_group_count = 0;
4138 4173 mip->mi_tx_group_free = 0;
4139 4174 mip->mi_default_tx_ring = NULL;
4140 4175 break;
4141 4176 default:
4142 4177 ASSERT(B_FALSE);
4143 4178 }
4144 4179
4145 4180 for (group = groups; group != NULL; group = group->mrg_next) {
4146 4181 mac_ring_t *ring;
4147 4182
4148 4183 if (group->mrg_cur_count == 0)
4149 4184 continue;
4150 4185
4151 4186 ASSERT(group->mrg_rings != NULL);
4152 4187
4153 4188 while ((ring = group->mrg_rings) != NULL) {
4154 4189 group->mrg_rings = ring->mr_next;
4155 4190 mac_ring_free(mip, ring);
4156 4191 }
4157 4192 }
4158 4193
4159 4194 /* Free all the cached rings */
4160 4195 mac_ring_freeall(mip);
4161 4196 /* Free the block of group data structures */
4162 4197 kmem_free(groups, sizeof (mac_group_t) * (group_count + 1));
4163 4198 }
4164 4199
4165 4200 /*
4166 4201 * Associate a MAC address with a receive group.
4167 4202 *
4168 4203 * The return value of this function should always be checked properly, because
4169 4204 * any type of failure could cause unexpected results. A group can be added
4170 4205 * or removed with a MAC address only after it has been reserved. Ideally,
4171 4206 * a successful reservation always leads to calling mac_group_addmac() to
4172 4207 * steer desired traffic. Failure of adding an unicast MAC address doesn't
4173 4208 * always imply that the group is functioning abnormally.
4174 4209 *
4175 4210 * Currently this function is called everywhere, and it reflects assumptions
4176 4211 * about MAC addresses in the implementation. CR 6735196.
4177 4212 */
4178 4213 int
4179 4214 mac_group_addmac(mac_group_t *group, const uint8_t *addr)
4180 4215 {
4181 4216 ASSERT(group->mrg_type == MAC_RING_TYPE_RX);
4182 4217 ASSERT(group->mrg_info.mgi_addmac != NULL);
4183 4218
4184 4219 return (group->mrg_info.mgi_addmac(group->mrg_info.mgi_driver, addr));
4185 4220 }
4186 4221
4187 4222 /*
4188 4223 * Remove the association between MAC address and receive group.
4189 4224 */
4190 4225 int
4191 4226 mac_group_remmac(mac_group_t *group, const uint8_t *addr)
4192 4227 {
4193 4228 ASSERT(group->mrg_type == MAC_RING_TYPE_RX);
4194 4229 ASSERT(group->mrg_info.mgi_remmac != NULL);
4195 4230
4196 4231 return (group->mrg_info.mgi_remmac(group->mrg_info.mgi_driver, addr));
4197 4232 }
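/*
 * Editor's illustrative sketch (not part of the webrev): the mgi_addmac/
 * mgi_remmac entry points invoked above are registered by the driver in
 * its mac_group_info_t from the mr_gget() callback. The function below
 * and its internals are hypothetical.
 */
static int
example_group_addmac(void *arg, const uint8_t *mac_addr)
{
	/*
	 * arg is the driver's group cookie (mgi_driver); a real driver
	 * would program mac_addr into this group's hardware unicast
	 * filter and return an errno value.
	 */
	return (0);
}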
4198 4233
4199 4234 /*
4200 4235 * This is the entry point for packets transmitted through the bridging code.
4201 4236 * If no bridge is in place, MAC_RING_TX transmits using tx ring. The 'rh'
4202 4237 * pointer may be NULL to select the default ring.
4203 4238 */
4204 4239 mblk_t *
4205 4240 mac_bridge_tx(mac_impl_t *mip, mac_ring_handle_t rh, mblk_t *mp)
4206 4241 {
4207 4242 mac_handle_t mh;
4208 4243
4209 4244 /*
4210 4245 * Once we take a reference on the bridge link, the bridge
4211 4246 * module itself can't unload, so the callback pointers are
4212 4247 * stable.
4213 4248 */
4214 4249 mutex_enter(&mip->mi_bridge_lock);
4215 4250 if ((mh = mip->mi_bridge_link) != NULL)
4216 4251 mac_bridge_ref_cb(mh, B_TRUE);
4217 4252 mutex_exit(&mip->mi_bridge_lock);
4218 4253 if (mh == NULL) {
4219 4254 MAC_RING_TX(mip, rh, mp, mp);
4220 4255 } else {
4221 4256 mp = mac_bridge_tx_cb(mh, rh, mp);
4222 4257 mac_bridge_ref_cb(mh, B_FALSE);
4223 4258 }
4224 4259
4225 4260 return (mp);
4226 4261 }
4227 4262
4228 4263 /*
4229 4264 * Find a ring from its index.
4230 4265 */
4231 4266 mac_ring_handle_t
4232 4267 mac_find_ring(mac_group_handle_t gh, int index)
4233 4268 {
4234 4269 mac_group_t *group = (mac_group_t *)gh;
4235 4270 mac_ring_t *ring = group->mrg_rings;
4236 4271
4237 4272 for (ring = group->mrg_rings; ring != NULL; ring = ring->mr_next)
4238 4273 if (ring->mr_index == index)
4239 4274 break;
4240 4275
4241 4276 return ((mac_ring_handle_t)ring);
4242 4277 }
4243 4278 /*
4244 4279 * Add a ring to an existing group.
4245 4280 *
4246 4281 * The ring must be either passed directly (for example if the ring
4247 4282 * movement is initiated by the framework), or specified through a driver
4248 4283 * index (for example when the ring is added by the driver.
4249 4284 *
4250 4285 * The caller needs to call mac_perim_enter() before calling this function.
4251 4286 */
4252 4287 int
4253 4288 i_mac_group_add_ring(mac_group_t *group, mac_ring_t *ring, int index)
4254 4289 {
4255 4290 mac_impl_t *mip = (mac_impl_t *)group->mrg_mh;
4256 4291 mac_capab_rings_t *cap_rings;
4257 4292 boolean_t driver_call = (ring == NULL);
4258 4293 mac_group_type_t group_type;
4259 4294 int ret = 0;
4260 4295 flow_entry_t *flent;
4261 4296
4262 4297 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
4263 4298
4264 4299 switch (group->mrg_type) {
4265 4300 case MAC_RING_TYPE_RX:
4266 4301 cap_rings = &mip->mi_rx_rings_cap;
4267 4302 group_type = mip->mi_rx_group_type;
4268 4303 break;
4269 4304 case MAC_RING_TYPE_TX:
4270 4305 cap_rings = &mip->mi_tx_rings_cap;
4271 4306 group_type = mip->mi_tx_group_type;
4272 4307 break;
4273 4308 default:
4274 4309 ASSERT(B_FALSE);
4275 4310 }
4276 4311
4277 4312 /*
4278 4313 * There should be no ring with the same ring index in the target
4279 4314 * group.
4280 4315 */
4281 4316 ASSERT(mac_find_ring((mac_group_handle_t)group,
4282 4317 driver_call ? index : ring->mr_index) == NULL);
4283 4318
4284 4319 if (driver_call) {
4285 4320 /*
4286 4321 * The function is called as a result of a request from
4287 4322 * a driver to add a ring to an existing group, for example
4288 4323 * from the aggregation driver. Allocate a new mac_ring_t
4289 4324 * for that ring.
4290 4325 */
4291 4326 ring = mac_init_ring(mip, group, index, cap_rings);
4292 4327 ASSERT(group->mrg_state > MAC_GROUP_STATE_UNINIT);
4293 4328 } else {
4294 4329 /*
4295 4330 * The function is called as a result of a MAC layer request
4296 4331 * to add a ring to an existing group. In this case the
4297 4332 * ring is being moved between groups, which requires
4298 4333 * the underlying driver to support dynamic grouping,
4299 4334 * and the mac_ring_t already exists.
4300 4335 */
4301 4336 ASSERT(group_type == MAC_GROUP_TYPE_DYNAMIC);
4302 4337 ASSERT(group->mrg_driver == NULL ||
4303 4338 cap_rings->mr_gaddring != NULL);
4304 4339 ASSERT(ring->mr_gh == NULL);
4305 4340 }
4306 4341
4307 4342 /*
4308 4343 * At this point the ring should not be in use, and it should be
4309 4344  * of the right type for the target group.
4310 4345 */
4311 4346 ASSERT(ring->mr_state < MR_INUSE);
4312 4347 ASSERT(ring->mr_srs == NULL);
4313 4348 ASSERT(ring->mr_type == group->mrg_type);
4314 4349
4315 4350 if (!driver_call) {
4316 4351 /*
4317 4352 * Add the driver level hardware ring if the process was not
4318 4353 * initiated by the driver, and the target group is not the
4319 4354  * dummy group.
4320 4355 */
4321 4356 if (group->mrg_driver != NULL) {
4322 4357 cap_rings->mr_gaddring(group->mrg_driver,
4323 4358 ring->mr_driver, ring->mr_type);
4324 4359 }
4325 4360
4326 4361 /*
4327 4362  * Insert the ring ahead of the existing rings.
4328 4363 */
4329 4364 ring->mr_next = group->mrg_rings;
4330 4365 group->mrg_rings = ring;
4331 4366 ring->mr_gh = (mac_group_handle_t)group;
4332 4367 group->mrg_cur_count++;
4333 4368 }
4334 4369
4335 4370 /*
4336 4371 * If the group has not been actively used, we're done.
4337 4372 */
4338 4373 if (group->mrg_index != -1 &&
4339 4374 group->mrg_state < MAC_GROUP_STATE_RESERVED)
4340 4375 return (0);
4341 4376
4342 4377 /*
4343 4378  * Start the ring if needed. On failure, undo the grouping action.
4344 4379 */
4345 4380 if (ring->mr_state != MR_INUSE) {
4346 4381 if ((ret = mac_start_ring(ring)) != 0) {
4347 4382 if (!driver_call) {
4348 4383 cap_rings->mr_gremring(group->mrg_driver,
4349 4384 ring->mr_driver, ring->mr_type);
4350 4385 }
4351 4386 group->mrg_cur_count--;
4352 4387 group->mrg_rings = ring->mr_next;
4353 4388
4354 4389 ring->mr_gh = NULL;
4355 4390
4356 4391 if (driver_call)
4357 4392 mac_ring_free(mip, ring);
4358 4393
4359 4394 return (ret);
4360 4395 }
4361 4396 }
4362 4397
4363 4398 /*
4364 4399 * Set up SRS/SR according to the ring type.
4365 4400 */
4366 4401 switch (ring->mr_type) {
4367 4402 case MAC_RING_TYPE_RX:
4368 4403 /*
4369 4404  * Set up an SRS on top of the new ring if the group is
4370 4405  * reserved for someone's exclusive use.
4371 4406 */
4372 4407 if (group->mrg_state == MAC_GROUP_STATE_RESERVED) {
4373 4408 mac_client_impl_t *mcip;
4374 4409
4375 4410 mcip = MAC_GROUP_ONLY_CLIENT(group);
4376 4411 /*
4377 4412  * Even though this group is reserved we might still
4378 4413  * have multiple clients, e.g. a VLAN shares the
4379 4414 * group with the primary mac client.
4380 4415 */
4381 4416 if (mcip != NULL) {
4382 4417 flent = mcip->mci_flent;
4383 4418 ASSERT(flent->fe_rx_srs_cnt > 0);
4384 4419 mac_rx_srs_group_setup(mcip, flent, SRST_LINK);
4385 4420 mac_fanout_setup(mcip, flent,
4386 4421 MCIP_RESOURCE_PROPS(mcip), mac_rx_deliver,
4387 4422 mcip, NULL, NULL);
4388 4423 } else {
4389 4424 ring->mr_classify_type = MAC_SW_CLASSIFIER;
4390 4425 }
4391 4426 }
4392 4427 break;
4393 4428 case MAC_RING_TYPE_TX:
4394 4429 {
4395 4430 mac_grp_client_t *mgcp = group->mrg_clients;
4396 4431 mac_client_impl_t *mcip;
4397 4432 mac_soft_ring_set_t *mac_srs;
4398 4433 mac_srs_tx_t *tx;
4399 4434
4400 4435 if (MAC_GROUP_NO_CLIENT(group)) {
4401 4436 if (ring->mr_state == MR_INUSE)
4402 4437 mac_stop_ring(ring);
4403 4438 ring->mr_flag = 0;
4404 4439 break;
4405 4440 }
4406 4441 /*
4407 4442 * If the rings are being moved to a group that has
4408 4443 * clients using it, then add the new rings to the
4409 4444 * clients SRS.
4410 4445 */
4411 4446 while (mgcp != NULL) {
4412 4447 boolean_t is_aggr;
4413 4448
4414 4449 mcip = mgcp->mgc_client;
4415 4450 flent = mcip->mci_flent;
4416 4451 is_aggr = (mcip->mci_state_flags & MCIS_IS_AGGR);
4417 4452 mac_srs = MCIP_TX_SRS(mcip);
4418 4453 tx = &mac_srs->srs_tx;
4419 4454 mac_tx_client_quiesce((mac_client_handle_t)mcip);
4420 4455 /*
4421 4456 * If we are growing from 1 to multiple rings.
4422 4457 */
4423 4458 if (tx->st_mode == SRS_TX_BW ||
4424 4459 tx->st_mode == SRS_TX_SERIALIZE ||
4425 4460 tx->st_mode == SRS_TX_DEFAULT) {
4426 4461 mac_ring_t *tx_ring = tx->st_arg2;
4427 4462
4428 4463 tx->st_arg2 = NULL;
4429 4464 mac_tx_srs_stat_recreate(mac_srs, B_TRUE);
4430 4465 mac_tx_srs_add_ring(mac_srs, tx_ring);
4431 4466 if (mac_srs->srs_type & SRST_BW_CONTROL) {
4432 4467 tx->st_mode = is_aggr ? SRS_TX_BW_AGGR :
4433 4468 SRS_TX_BW_FANOUT;
4434 4469 } else {
4435 4470 tx->st_mode = is_aggr ? SRS_TX_AGGR :
4436 4471 SRS_TX_FANOUT;
4437 4472 }
4438 4473 tx->st_func = mac_tx_get_func(tx->st_mode);
4439 4474 }
4440 4475 mac_tx_srs_add_ring(mac_srs, ring);
4441 4476 mac_fanout_setup(mcip, flent, MCIP_RESOURCE_PROPS(mcip),
4442 4477 mac_rx_deliver, mcip, NULL, NULL);
4443 4478 mac_tx_client_restart((mac_client_handle_t)mcip);
4444 4479 mgcp = mgcp->mgc_next;
4445 4480 }
4446 4481 break;
4447 4482 }
4448 4483 default:
4449 4484 ASSERT(B_FALSE);
4450 4485 }
4451 4486 /*
4452 4487 * For aggr, the default ring will be NULL to begin with. If it
4453 4488 * is NULL, then pick the first ring that gets added as the
4454 4489 * default ring. Any ring in an aggregation can be removed at
4455 4490 * any time (by the user action of removing a link) and if the
4456 4491 * current default ring gets removed, then a new one gets
4457 4492 * picked (see i_mac_group_rem_ring()).
4458 4493 */
4459 4494 if (mip->mi_state_flags & MIS_IS_AGGR &&
4460 4495 mip->mi_default_tx_ring == NULL &&
4461 4496 ring->mr_type == MAC_RING_TYPE_TX) {
4462 4497 mip->mi_default_tx_ring = (mac_ring_handle_t)ring;
4463 4498 }
4464 4499
4465 4500 MAC_RING_UNMARK(ring, MR_INCIPIENT);
4466 4501 return (0);
4467 4502 }
4468 4503
4469 4504 /*
4470 4505  * Remove a ring from its current group. MAC internal function for dynamic
4471 4506 * grouping.
4472 4507 *
4473 4508 * The caller needs to call mac_perim_enter() before calling this function.
4474 4509 */
4475 4510 void
4476 4511 i_mac_group_rem_ring(mac_group_t *group, mac_ring_t *ring,
4477 4512 boolean_t driver_call)
4478 4513 {
4479 4514 mac_impl_t *mip = (mac_impl_t *)group->mrg_mh;
4480 4515 mac_capab_rings_t *cap_rings = NULL;
4481 4516 mac_group_type_t group_type;
4482 4517
4483 4518 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
4484 4519
4485 4520 ASSERT(mac_find_ring((mac_group_handle_t)group,
4486 4521 ring->mr_index) == (mac_ring_handle_t)ring);
4487 4522 ASSERT((mac_group_t *)ring->mr_gh == group);
4488 4523 ASSERT(ring->mr_type == group->mrg_type);
4489 4524
4490 4525 if (ring->mr_state == MR_INUSE)
4491 4526 mac_stop_ring(ring);
4492 4527 switch (ring->mr_type) {
4493 4528 case MAC_RING_TYPE_RX:
4494 4529 group_type = mip->mi_rx_group_type;
4495 4530 cap_rings = &mip->mi_rx_rings_cap;
4496 4531
4497 4532 /*
4498 4533 * Only hardware classified packets hold a reference to the
4499 4534 * ring all the way up the Rx path. mac_rx_srs_remove()
4500 4535 * will take care of quiescing the Rx path and removing the
4501 4536 * SRS. The software classified path neither holds a reference
4502 4537 * nor any association with the ring in mac_rx.
4503 4538 */
4504 4539 if (ring->mr_srs != NULL) {
4505 4540 mac_rx_srs_remove(ring->mr_srs);
4506 4541 ring->mr_srs = NULL;
4507 4542 }
4508 4543
4509 4544 break;
4510 4545 case MAC_RING_TYPE_TX:
4511 4546 {
4512 4547 mac_grp_client_t *mgcp;
4513 4548 mac_client_impl_t *mcip;
4514 4549 mac_soft_ring_set_t *mac_srs;
4515 4550 mac_srs_tx_t *tx;
4516 4551 mac_ring_t *rem_ring;
4517 4552 mac_group_t *defgrp;
4518 4553 uint_t ring_info = 0;
4519 4554
4520 4555 /*
4521 4556 * For TX this function is invoked in three
4522 4557 * cases:
4523 4558 *
4524 4559 * 1) In the case of a failure during the
4525 4560 * initial creation of a group when a share is
4526 4561 * associated with a MAC client. So the SRS is not
4527 4562 * yet setup, and will be setup later after the
4528 4563 * group has been reserved and populated.
4529 4564 *
4530 4565 * 2) From mac_release_tx_group() when freeing
4531 4566 * a TX SRS.
4532 4567 *
4533 4568 * 3) In the case of aggr, when a port gets removed,
4534 4569 * the pseudo Tx rings that it exposed gets removed.
4535 4570 *
4536 4571 * In the first two cases the SRS and its soft
4537 4572 * rings are already quiesced.
4538 4573 */
4539 4574 if (driver_call) {
4540 4575 mac_client_impl_t *mcip;
4541 4576 mac_soft_ring_set_t *mac_srs;
4542 4577 mac_soft_ring_t *sringp;
4543 4578 mac_srs_tx_t *srs_tx;
4544 4579
4545 4580 if (mip->mi_state_flags & MIS_IS_AGGR &&
4546 4581 mip->mi_default_tx_ring ==
4547 4582 (mac_ring_handle_t)ring) {
4548 4583 /* pick a new default Tx ring */
4549 4584 mip->mi_default_tx_ring =
4550 4585 (group->mrg_rings != ring) ?
4551 4586 (mac_ring_handle_t)group->mrg_rings :
4552 4587 (mac_ring_handle_t)(ring->mr_next);
4553 4588 }
4554 4589 /* Presently only aggr case comes here */
4555 4590 if (group->mrg_state != MAC_GROUP_STATE_RESERVED)
4556 4591 break;
4557 4592
4558 4593 mcip = MAC_GROUP_ONLY_CLIENT(group);
4559 4594 ASSERT(mcip != NULL);
4560 4595 ASSERT(mcip->mci_state_flags & MCIS_IS_AGGR);
4561 4596 mac_srs = MCIP_TX_SRS(mcip);
4562 4597 ASSERT(mac_srs->srs_tx.st_mode == SRS_TX_AGGR ||
4563 4598 mac_srs->srs_tx.st_mode == SRS_TX_BW_AGGR);
4564 4599 srs_tx = &mac_srs->srs_tx;
4565 4600 /*
4566 4601 * Wakeup any callers blocked on this
4567 4602 * Tx ring due to flow control.
4568 4603 */
4569 4604 sringp = srs_tx->st_soft_rings[ring->mr_index];
4570 4605 ASSERT(sringp != NULL);
4571 4606 mac_tx_invoke_callbacks(mcip, (mac_tx_cookie_t)sringp);
4572 4607 mac_tx_client_quiesce((mac_client_handle_t)mcip);
4573 4608 mac_tx_srs_del_ring(mac_srs, ring);
4574 4609 mac_tx_client_restart((mac_client_handle_t)mcip);
4575 4610 break;
4576 4611 }
4577 4612 ASSERT(ring != (mac_ring_t *)mip->mi_default_tx_ring);
4578 4613 group_type = mip->mi_tx_group_type;
4579 4614 cap_rings = &mip->mi_tx_rings_cap;
4580 4615 /*
4581 4616 * See if we need to take it out of the MAC clients using
4582 4617 * this group
4583 4618 */
4584 4619 if (MAC_GROUP_NO_CLIENT(group))
4585 4620 break;
4586 4621 mgcp = group->mrg_clients;
4587 4622 defgrp = MAC_DEFAULT_TX_GROUP(mip);
4588 4623 while (mgcp != NULL) {
4589 4624 mcip = mgcp->mgc_client;
4590 4625 mac_srs = MCIP_TX_SRS(mcip);
4591 4626 tx = &mac_srs->srs_tx;
4592 4627 mac_tx_client_quiesce((mac_client_handle_t)mcip);
4593 4628 /*
4594 4629 * If we are here when removing rings from the
4595 4630 * defgroup, mac_reserve_tx_ring would have
4596 4631 * already deleted the ring from the MAC
4597 4632 * clients in the group.
4598 4633 */
4599 4634 if (group != defgrp) {
4600 4635 mac_tx_invoke_callbacks(mcip,
4601 4636 (mac_tx_cookie_t)
4602 4637 mac_tx_srs_get_soft_ring(mac_srs, ring));
4603 4638 mac_tx_srs_del_ring(mac_srs, ring);
4604 4639 }
4605 4640 /*
4606 4641 * Additionally, if we are left with only
4607 4642 * one ring in the group after this, we need
4608 4643  * to modify the mode, etc. accordingly. (We haven't
4609 4644  * yet taken the ring out, so we check with 2.)
4610 4645 */
4611 4646 if (group->mrg_cur_count == 2) {
4612 4647 if (ring->mr_next == NULL)
4613 4648 rem_ring = group->mrg_rings;
4614 4649 else
4615 4650 rem_ring = ring->mr_next;
4616 4651 mac_tx_invoke_callbacks(mcip,
4617 4652 (mac_tx_cookie_t)
4618 4653 mac_tx_srs_get_soft_ring(mac_srs,
4619 4654 rem_ring));
4620 4655 mac_tx_srs_del_ring(mac_srs, rem_ring);
4621 4656 if (rem_ring->mr_state != MR_INUSE) {
4622 4657 (void) mac_start_ring(rem_ring);
4623 4658 }
4624 4659 tx->st_arg2 = (void *)rem_ring;
4625 4660 mac_tx_srs_stat_recreate(mac_srs, B_FALSE);
4626 4661 ring_info = mac_hwring_getinfo(
4627 4662 (mac_ring_handle_t)rem_ring);
4628 4663 /*
4629 4664 * We are shrinking from multiple
4630 4665 * to 1 ring.
4631 4666 */
4632 4667 if (mac_srs->srs_type & SRST_BW_CONTROL) {
4633 4668 tx->st_mode = SRS_TX_BW;
4634 4669 } else if (mac_tx_serialize ||
4635 4670 (ring_info & MAC_RING_TX_SERIALIZE)) {
4636 4671 tx->st_mode = SRS_TX_SERIALIZE;
4637 4672 } else {
4638 4673 tx->st_mode = SRS_TX_DEFAULT;
4639 4674 }
4640 4675 tx->st_func = mac_tx_get_func(tx->st_mode);
4641 4676 }
4642 4677 mac_tx_client_restart((mac_client_handle_t)mcip);
4643 4678 mgcp = mgcp->mgc_next;
4644 4679 }
4645 4680 break;
4646 4681 }
4647 4682 default:
4648 4683 ASSERT(B_FALSE);
4649 4684 }
4650 4685
4651 4686 /*
4652 4687 * Remove the ring from the group.
4653 4688 */
4654 4689 if (ring == group->mrg_rings)
4655 4690 group->mrg_rings = ring->mr_next;
4656 4691 else {
4657 4692 mac_ring_t *pre;
4658 4693
4659 4694 pre = group->mrg_rings;
4660 4695 while (pre->mr_next != ring)
4661 4696 pre = pre->mr_next;
4662 4697 pre->mr_next = ring->mr_next;
4663 4698 }
4664 4699 group->mrg_cur_count--;
4665 4700
4666 4701 if (!driver_call) {
4667 4702 ASSERT(group_type == MAC_GROUP_TYPE_DYNAMIC);
4668 4703 ASSERT(group->mrg_driver == NULL ||
4669 4704 cap_rings->mr_gremring != NULL);
4670 4705
4671 4706 /*
4672 4707 * Remove the driver level hardware ring.
4673 4708 */
4674 4709 if (group->mrg_driver != NULL) {
4675 4710 cap_rings->mr_gremring(group->mrg_driver,
4676 4711 ring->mr_driver, ring->mr_type);
4677 4712 }
4678 4713 }
4679 4714
4680 4715 ring->mr_gh = NULL;
4681 4716 if (driver_call)
4682 4717 mac_ring_free(mip, ring);
4683 4718 else
4684 4719 ring->mr_flag = 0;
4685 4720 }
4686 4721
4687 4722 /*
4688 4723 * Move a ring to the target group. If needed, remove the ring from the group
4689 4724 * that it currently belongs to.
4690 4725 *
4691 4726  * The caller needs to enter MAC's perimeter by calling mac_perim_enter().
4692 4727 */
4693 4728 static int
4694 4729 mac_group_mov_ring(mac_impl_t *mip, mac_group_t *d_group, mac_ring_t *ring)
4695 4730 {
4696 4731 mac_group_t *s_group = (mac_group_t *)ring->mr_gh;
4697 4732 int rv;
4698 4733
4699 4734 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
4700 4735 ASSERT(d_group != NULL);
4701 4736 ASSERT(s_group->mrg_mh == d_group->mrg_mh);
4702 4737
4703 4738 if (s_group == d_group)
4704 4739 return (0);
4705 4740
4706 4741 /*
4707 4742 * Remove it from current group first.
4708 4743 */
4709 4744 if (s_group != NULL)
4710 4745 i_mac_group_rem_ring(s_group, ring, B_FALSE);
4711 4746
4712 4747 /*
4713 4748 * Add it to the new group.
4714 4749 */
4715 4750 rv = i_mac_group_add_ring(d_group, ring, 0);
4716 4751 if (rv != 0) {
4717 4752 /*
4718 4753  * Failed to add the ring to the destination group; try to put it
4719 4754  * back in the source group. If that also fails, log a message.
4720 4755 */
4721 4756 if (i_mac_group_add_ring(s_group, ring, 0)) {
4722 4757 cmn_err(CE_WARN, "%s: failed to move ring %p\n",
4723 4758 mip->mi_name, (void *)ring);
4724 4759 }
4725 4760 }
4726 4761
4727 4762 return (rv);
4728 4763 }
4729 4764
4730 4765 /*
4731 4766 * Find a MAC address according to its value.
4732 4767 */
4733 4768 mac_address_t *
4734 4769 mac_find_macaddr(mac_impl_t *mip, uint8_t *mac_addr)
4735 4770 {
4736 4771 mac_address_t *map;
4737 4772
4738 4773 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
4739 4774
4740 4775 for (map = mip->mi_addresses; map != NULL; map = map->ma_next) {
4741 4776 if (bcmp(mac_addr, map->ma_addr, map->ma_len) == 0)
4742 4777 break;
4743 4778 }
4744 4779
4745 4780 return (map);
4746 4781 }
4747 4782
4748 4783 /*
4749 4784 * Check whether the MAC address is shared by multiple clients.
4750 4785 */
4751 4786 boolean_t
4752 4787 mac_check_macaddr_shared(mac_address_t *map)
4753 4788 {
4754 4789 ASSERT(MAC_PERIM_HELD((mac_handle_t)map->ma_mip));
4755 4790
4756 4791 return (map->ma_nusers > 1);
4757 4792 }
4758 4793
4759 4794 /*
4760 4795 * Remove the specified MAC address from the MAC address list and free it.
4761 4796 */
4762 4797 static void
4763 4798 mac_free_macaddr(mac_address_t *map)
4764 4799 {
4765 4800 mac_impl_t *mip = map->ma_mip;
4766 4801
4767 4802 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
4768 4803 ASSERT(mip->mi_addresses != NULL);
4769 4804
4770 4805 map = mac_find_macaddr(mip, map->ma_addr);
4771 4806
4772 4807 ASSERT(map != NULL);
4773 4808 ASSERT(map->ma_nusers == 0);
4774 4809
4775 4810 if (map == mip->mi_addresses) {
4776 4811 mip->mi_addresses = map->ma_next;
4777 4812 } else {
4778 4813 mac_address_t *pre;
4779 4814
4780 4815 pre = mip->mi_addresses;
4781 4816 while (pre->ma_next != map)
4782 4817 pre = pre->ma_next;
4783 4818 pre->ma_next = map->ma_next;
4784 4819 }
4785 4820
4786 4821 kmem_free(map, sizeof (mac_address_t));
4787 4822 }
4788 4823
4789 4824 /*
4790 4825 * Add a MAC address reference for a client. If the desired MAC address
4791 4826 * exists, add a reference to it. Otherwise, add the new address by adding
4792 4827  * it to a reserved group or setting promiscuous mode. Won't try a different
4793 4828  * group if the group is non-NULL, so the caller must explicitly share the
4794 4829  * default group when needed.
4795 4830 *
4796 4831  * Note, the primary MAC address is initialized at registration time, so
4797 4832  * adding it to the default group only requires activating it if its
4798 4833  * reference count is still zero. Also, some drivers may not have
4799 4834  * advertised the RINGS capability.
4800 4835 */
4801 4836 int
4802 4837 mac_add_macaddr(mac_impl_t *mip, mac_group_t *group, uint8_t *mac_addr,
4803 4838 boolean_t use_hw)
4804 4839 {
4805 4840 mac_address_t *map;
4806 4841 int err = 0;
4807 4842 boolean_t allocated_map = B_FALSE;
4808 4843
4809 4844 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
4810 4845
4811 4846 map = mac_find_macaddr(mip, mac_addr);
4812 4847
4813 4848 /*
4814 4849 * If the new MAC address has not been added. Allocate a new one
4815 4850 * and set it up.
4816 4851 */
4817 4852 if (map == NULL) {
4818 4853 map = kmem_zalloc(sizeof (mac_address_t), KM_SLEEP);
4819 4854 map->ma_len = mip->mi_type->mt_addr_length;
4820 4855 bcopy(mac_addr, map->ma_addr, map->ma_len);
4821 4856 map->ma_nusers = 0;
4822 4857 map->ma_group = group;
4823 4858 map->ma_mip = mip;
4824 4859
4825 4860 /* add the new MAC address to the head of the address list */
4826 4861 map->ma_next = mip->mi_addresses;
4827 4862 mip->mi_addresses = map;
4828 4863
4829 4864 allocated_map = B_TRUE;
4830 4865 }
4831 4866
4832 4867 ASSERT(map->ma_group == NULL || map->ma_group == group);
4833 4868 if (map->ma_group == NULL)
4834 4869 map->ma_group = group;
4835 4870
4836 4871 /*
4837 4872 * If the MAC address is already in use, simply account for the
4838 4873 * new client.
4839 4874 */
4840 4875 if (map->ma_nusers++ > 0)
4841 4876 return (0);
4842 4877
4843 4878 /*
4844 4879 * Activate this MAC address by adding it to the reserved group.
4845 4880 */
4846 4881 if (group != NULL) {
4847 4882 err = mac_group_addmac(group, (const uint8_t *)mac_addr);
4848 4883 if (err == 0) {
4849 4884 map->ma_type = MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED;
4850 4885 return (0);
4851 4886 }
4852 4887 }
4853 4888
4854 4889 /*
4855 4890 * The MAC address addition failed. If the client requires a
4856 4891 * hardware classified MAC address, fail the operation.
4857 4892 */
4858 4893 if (use_hw) {
4859 4894 err = ENOSPC;
4860 4895 goto bail;
4861 4896 }
4862 4897
4863 4898 /*
4864 4899 * Try promiscuous mode.
4865 4900 *
4866 4901 * For drivers that don't advertise RINGS capability, do
4867 4902 * nothing for the primary address.
4868 4903 */
4869 4904 if ((group == NULL) &&
4870 4905 (bcmp(map->ma_addr, mip->mi_addr, map->ma_len) == 0)) {
4871 4906 map->ma_type = MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED;
4872 4907 return (0);
4873 4908 }
4874 4909
4875 4910 /*
4876 4911 * Enable promiscuous mode in order to receive traffic
4877 4912 * to the new MAC address.
4878 4913 */
4879 4914 if ((err = i_mac_promisc_set(mip, B_TRUE)) == 0) {
4880 4915 map->ma_type = MAC_ADDRESS_TYPE_UNICAST_PROMISC;
4881 4916 return (0);
4882 4917 }
4883 4918
4884 4919 /*
4885 4920 * Free the MAC address that could not be added. Don't free
4886 4921 	 * a pre-existing address; it could have been the entry
4887 4922 * for the primary MAC address which was pre-allocated by
4888 4923 * mac_init_macaddr(), and which must remain on the list.
4889 4924 */
4890 4925 bail:
4891 4926 map->ma_nusers--;
4892 4927 if (allocated_map)
4893 4928 mac_free_macaddr(map);
4894 4929 return (err);
4895 4930 }
4896 4931
4897 4932 /*
4898 4933  * Remove a reference to a MAC address. This may cause the MAC address
4899 4934  * to be removed from its associated group, or promiscuous mode to be
4900 4935  * turned off. The caller needs to handle any failure properly.
4901 4936 */
4902 4937 int
4903 4938 mac_remove_macaddr(mac_address_t *map)
4904 4939 {
4905 4940 mac_impl_t *mip = map->ma_mip;
4906 4941 int err = 0;
4907 4942
4908 4943 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
4909 4944
4910 4945 ASSERT(map == mac_find_macaddr(mip, map->ma_addr));
4911 4946
4912 4947 /*
4913 4948 * If it's not the last client using this MAC address, only update
4914 4949 * the MAC clients count.
4915 4950 */
4916 4951 if (--map->ma_nusers > 0)
4917 4952 return (0);
4918 4953
4919 4954 /*
4920 4955 * The MAC address is no longer used by any MAC client, so remove
4921 4956 * it from its associated group, or turn off promiscuous mode
4922 4957 * if it was enabled for the MAC address.
4923 4958 */
4924 4959 switch (map->ma_type) {
4925 4960 case MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED:
4926 4961 /*
4927 4962 * Don't free the preset primary address for drivers that
4928 4963 * don't advertise RINGS capability.
4929 4964 */
4930 4965 if (map->ma_group == NULL)
4931 4966 return (0);
4932 4967
4933 4968 err = mac_group_remmac(map->ma_group, map->ma_addr);
4934 4969 if (err == 0)
4935 4970 map->ma_group = NULL;
4936 4971 break;
4937 4972 case MAC_ADDRESS_TYPE_UNICAST_PROMISC:
4938 4973 err = i_mac_promisc_set(mip, B_FALSE);
4939 4974 break;
4940 4975 default:
4941 4976 ASSERT(B_FALSE);
4942 4977 }
4943 4978
4944 4979 if (err != 0)
4945 4980 return (err);
4946 4981
4947 4982 /*
4948 4983 	 * We created the MAC address entry for the primary at registration, so we
4949 4984 * won't free it here. mac_fini_macaddr() will take care of it.
4950 4985 */
4951 4986 if (bcmp(map->ma_addr, mip->mi_addr, map->ma_len) != 0)
4952 4987 mac_free_macaddr(map);
4953 4988
4954 4989 return (0);
4955 4990 }
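
Taken together, mac_add_macaddr() and mac_remove_macaddr() implement refcounted
activation: only the first reference programs the hardware (or falls back to
promiscuous mode), and only the last reference undoes it. A condensed
user-level sketch of the ladder, with hw_slot_add()/hw_slot_rem() and
promisc_set() as illustrative stubs for the real group and driver calls:

#include <errno.h>
#include <stdio.h>

typedef enum { AT_NONE, AT_CLASSIFIED, AT_PROMISC } addr_type_t;

typedef struct {
	int		ma_nusers;
	addr_type_t	ma_type;
} addr_t;

static int hw_slots = 0;	/* 0 forces the promiscuous fallback below */

static int
hw_slot_add(void)
{
	return (hw_slots > 0 ? (hw_slots--, 0) : ENOSPC);
}

static void
hw_slot_rem(void)
{
	hw_slots++;
}

static int
promisc_set(int on)
{
	(void) on;
	return (0);
}

/* First reference activates: hardware slot, else promiscuous, else fail. */
static int
add_ref(addr_t *map, int use_hw)
{
	if (map->ma_nusers++ > 0)
		return (0);
	if (hw_slot_add() == 0) {
		map->ma_type = AT_CLASSIFIED;
		return (0);
	}
	if (use_hw) {			/* hardware classification required */
		map->ma_nusers--;
		return (ENOSPC);
	}
	if (promisc_set(1) == 0) {
		map->ma_type = AT_PROMISC;
		return (0);
	}
	map->ma_nusers--;
	return (EIO);
}

/* Last reference deactivates whatever the first one set up. */
static void
remove_ref(addr_t *map)
{
	if (--map->ma_nusers > 0)
		return;
	if (map->ma_type == AT_CLASSIFIED)
		hw_slot_rem();
	else if (map->ma_type == AT_PROMISC)
		(void) promisc_set(0);
	map->ma_type = AT_NONE;
}

int
main(void)
{
	addr_t a = { 0, AT_NONE };

	printf("add: %d type %d\n", add_ref(&a, 0), a.ma_type);
	remove_ref(&a);
	return (0);
}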
4956 4991
4957 4992 /*
4958 4993  * Update an existing MAC address. The caller needs to make sure that
4959 4994  * the new value is not currently in use.
4960 4995 */
4961 4996 int
4962 4997 mac_update_macaddr(mac_address_t *map, uint8_t *mac_addr)
4963 4998 {
4964 4999 mac_impl_t *mip = map->ma_mip;
4965 5000 int err = 0;
4966 5001
4967 5002 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
4968 5003 ASSERT(mac_find_macaddr(mip, mac_addr) == NULL);
4969 5004
4970 5005 switch (map->ma_type) {
4971 5006 case MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED:
4972 5007 /*
4973 5008 * Update the primary address for drivers that are not
4974 5009 * RINGS capable.
4975 5010 */
4976 5011 if (mip->mi_rx_groups == NULL) {
4977 5012 err = mip->mi_unicst(mip->mi_driver, (const uint8_t *)
4978 5013 mac_addr);
4979 5014 if (err != 0)
4980 5015 return (err);
4981 5016 break;
4982 5017 }
4983 5018
4984 5019 /*
4985 5020 * If this MAC address is not currently in use,
4986 5021 * simply break out and update the value.
4987 5022 */
4988 5023 if (map->ma_nusers == 0)
4989 5024 break;
4990 5025
4991 5026 /*
4992 5027 * Need to replace the MAC address associated with a group.
4993 5028 */
4994 5029 err = mac_group_remmac(map->ma_group, map->ma_addr);
4995 5030 if (err != 0)
4996 5031 return (err);
4997 5032
4998 5033 err = mac_group_addmac(map->ma_group, mac_addr);
4999 5034
5000 5035 /*
5001 5036 		 * Failure hints at a hardware error. The MAC layer needs an
5002 5037 		 * error notification facility to handle this. For now, simply
5003 5038 		 * try to restore the old value.
5004 5039 */
5005 5040 if (err != 0)
5006 5041 (void) mac_group_addmac(map->ma_group, map->ma_addr);
5007 5042
5008 5043 break;
5009 5044 case MAC_ADDRESS_TYPE_UNICAST_PROMISC:
5010 5045 /*
5011 5046 		 * Nothing more needs to be done if in promiscuous mode.
5012 5047 */
5013 5048 break;
5014 5049 default:
5015 5050 ASSERT(B_FALSE);
5016 5051 }
5017 5052
5018 5053 /*
5019 5054 * Successfully replaced the MAC address.
5020 5055 */
5021 5056 if (err == 0)
5022 5057 bcopy(mac_addr, map->ma_addr, map->ma_len);
5023 5058
5024 5059 return (err);
5025 5060 }
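
The in-use path of mac_update_macaddr() is a remove-then-add on the group, with
a best-effort restore of the old address when the add fails. A small sketch of
that compensating pattern; slot_rem()/slot_add() are illustrative stand-ins for
mac_group_remmac()/mac_group_addmac():

#include <errno.h>
#include <stdio.h>
#include <string.h>

static char hw_slot[32] = "00:01";	/* the currently programmed value */

static int
slot_rem(const char *addr)
{
	return (strcmp(hw_slot, addr) == 0 ? (hw_slot[0] = '\0', 0) : ENOENT);
}

static int
slot_add(const char *addr)
{
	if (addr[0] == 'X')		/* designated failure, for the demo */
		return (EIO);
	(void) snprintf(hw_slot, sizeof (hw_slot), "%s", addr);
	return (0);
}

static int
update_addr(const char *old, const char *new)
{
	int err;

	if ((err = slot_rem(old)) != 0)
		return (err);
	if ((err = slot_add(new)) != 0) {
		/* Best effort: put the old value back before bailing. */
		(void) slot_add(old);
		return (err);
	}
	return (0);
}

int
main(void)
{
	(void) update_addr("00:01", "X0:02");	/* fails; old value restored */
	return (update_addr("00:01", "00:02"));	/* succeeds */
}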
5026 5061
5027 5062 /*
5028 5063  * Freshen the MAC address with a new value. The caller must have updated
5029 5064  * the hardware MAC address before calling this function.
5030 5065  * This function is intended to handle MAC address change
5031 5066  * notifications from underlying drivers.
5032 5067 */
5033 5068 void
5034 5069 mac_freshen_macaddr(mac_address_t *map, uint8_t *mac_addr)
5035 5070 {
5036 5071 mac_impl_t *mip = map->ma_mip;
5037 5072
5038 5073 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
5039 5074 ASSERT(mac_find_macaddr(mip, mac_addr) == NULL);
5040 5075
5041 5076 /*
5042 5077 * Freshen the MAC address with new value.
5043 5078 */
5044 5079 bcopy(mac_addr, map->ma_addr, map->ma_len);
5045 5080 bcopy(mac_addr, mip->mi_addr, map->ma_len);
5046 5081
5047 5082 /*
5048 5083 * Update all MAC clients that share this MAC address.
5049 5084 */
5050 5085 mac_unicast_update_clients(mip, map);
5051 5086 }
5052 5087
5053 5088 /*
5054 5089 * Set up the primary MAC address.
5055 5090 */
5056 5091 void
5057 5092 mac_init_macaddr(mac_impl_t *mip)
5058 5093 {
5059 5094 mac_address_t *map;
5060 5095
5061 5096 /*
5062 5097 	 * The reference count is initialized to zero; it stays zero until
5063 5098 	 * the address is actually activated.
5064 5099 */
5065 5100 map = kmem_zalloc(sizeof (mac_address_t), KM_SLEEP);
5066 5101 map->ma_len = mip->mi_type->mt_addr_length;
5067 5102 bcopy(mip->mi_addr, map->ma_addr, map->ma_len);
5068 5103
5069 5104 /*
5070 5105 	 * If the driver advertises the RINGS capability, it shouldn't have
5071 5106 	 * initialized its primary MAC address. For other drivers, including
5072 5107 	 * VNIC, the primary address must work right after registration.
5073 5108 */
5074 5109 if (mip->mi_rx_groups == NULL)
5075 5110 map->ma_type = MAC_ADDRESS_TYPE_UNICAST_CLASSIFIED;
5076 5111
5077 5112 map->ma_mip = mip;
5078 5113
5079 5114 mip->mi_addresses = map;
5080 5115 }
5081 5116
5082 5117 /*
5083 5118 * Clean up the primary MAC address. Note, only one primary MAC address
5084 5119 * is allowed. All other MAC addresses must have been freed appropriately.
5085 5120 */
5086 5121 void
5087 5122 mac_fini_macaddr(mac_impl_t *mip)
5088 5123 {
5089 5124 mac_address_t *map = mip->mi_addresses;
5090 5125
5091 5126 if (map == NULL)
5092 5127 return;
5093 5128
5094 5129 /*
5095 5130 * If mi_addresses is initialized, there should be exactly one
5096 5131 * entry left on the list with no users.
5097 5132 */
5098 5133 ASSERT(map->ma_nusers == 0);
5099 5134 ASSERT(map->ma_next == NULL);
5100 5135
5101 5136 kmem_free(map, sizeof (mac_address_t));
5102 5137 mip->mi_addresses = NULL;
5103 5138 }
5104 5139
5105 5140 /*
5106 5141 * Logging related functions.
5107 5142 *
5108 5143  * Note that kernel statistics have been extended to maintain fine-
5109 5144  * grained statistics, viz. hardware lane, software lane, and fanout
5110 5145  * stats. However, extended accounting continues to support only
5111 5146  * aggregate statistics, as before.
5112 5147 */
5113 5148
5114 5149 /* Write the flow description to a netinfo_t record */
5115 5150 static netinfo_t *
5116 5151 mac_write_flow_desc(flow_entry_t *flent, mac_client_impl_t *mcip)
5117 5152 {
5118 5153 netinfo_t *ninfo;
5119 5154 net_desc_t *ndesc;
5120 5155 flow_desc_t *fdesc;
5121 5156 mac_resource_props_t *mrp;
5122 5157
5123 5158 ninfo = kmem_zalloc(sizeof (netinfo_t), KM_NOSLEEP);
5124 5159 if (ninfo == NULL)
5125 5160 return (NULL);
5126 5161 ndesc = kmem_zalloc(sizeof (net_desc_t), KM_NOSLEEP);
5127 5162 if (ndesc == NULL) {
5128 5163 kmem_free(ninfo, sizeof (netinfo_t));
5129 5164 return (NULL);
5130 5165 }
5131 5166
5132 5167 /*
5133 5168 * Grab the fe_lock to see a self-consistent fe_flow_desc.
5134 5169 * Updates to the fe_flow_desc are done under the fe_lock
5135 5170 */
5136 5171 mutex_enter(&flent->fe_lock);
5137 5172 fdesc = &flent->fe_flow_desc;
5138 5173 mrp = &flent->fe_resource_props;
5139 5174
5140 5175 ndesc->nd_name = flent->fe_flow_name;
5141 5176 ndesc->nd_devname = mcip->mci_name;
5142 5177 bcopy(fdesc->fd_src_mac, ndesc->nd_ehost, ETHERADDRL);
5143 5178 bcopy(fdesc->fd_dst_mac, ndesc->nd_edest, ETHERADDRL);
5144 5179 ndesc->nd_sap = htonl(fdesc->fd_sap);
5145 5180 ndesc->nd_isv4 = (uint8_t)fdesc->fd_ipversion == IPV4_VERSION;
5146 5181 ndesc->nd_bw_limit = mrp->mrp_maxbw;
5147 5182 if (ndesc->nd_isv4) {
5148 5183 ndesc->nd_saddr[3] = htonl(fdesc->fd_local_addr.s6_addr32[3]);
5149 5184 ndesc->nd_daddr[3] = htonl(fdesc->fd_remote_addr.s6_addr32[3]);
5150 5185 } else {
5151 5186 bcopy(&fdesc->fd_local_addr, ndesc->nd_saddr, IPV6_ADDR_LEN);
5152 5187 bcopy(&fdesc->fd_remote_addr, ndesc->nd_daddr, IPV6_ADDR_LEN);
5153 5188 }
5154 5189 ndesc->nd_sport = htons(fdesc->fd_local_port);
5155 5190 ndesc->nd_dport = htons(fdesc->fd_remote_port);
5156 5191 ndesc->nd_protocol = (uint8_t)fdesc->fd_protocol;
5157 5192 mutex_exit(&flent->fe_lock);
5158 5193
5159 5194 ninfo->ni_record = ndesc;
5160 5195 ninfo->ni_size = sizeof (net_desc_t);
5161 5196 ninfo->ni_type = EX_NET_FLDESC_REC;
5162 5197
5163 5198 return (ninfo);
5164 5199 }
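
The record builders in this section pair two KM_NOSLEEP allocations and free
the first when the second fails, then snapshot the descriptor strictly inside
fe_lock. A userland analog of the allocate-or-unwind half; info_t and rec_t
are illustrative types:

#include <stdio.h>
#include <stdlib.h>

typedef struct rec { char r_name[32]; } rec_t;
typedef struct info { rec_t *i_rec; size_t i_size; } info_t;

/* Allocate an info_t and its record; on partial failure, free and bail. */
static info_t *
make_info(const char *name)
{
	info_t *ip;
	rec_t *rp;

	if ((ip = calloc(1, sizeof (*ip))) == NULL)
		return (NULL);
	if ((rp = calloc(1, sizeof (*rp))) == NULL) {
		free(ip);		/* don't leak the first allocation */
		return (NULL);
	}
	(void) snprintf(rp->r_name, sizeof (rp->r_name), "%s", name);
	ip->i_rec = rp;
	ip->i_size = sizeof (*rp);
	return (ip);
}

int
main(void)
{
	info_t *ip = make_info("flow0");

	if (ip != NULL) {
		free(ip->i_rec);
		free(ip);
	}
	return (0);
}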
5165 5200
5166 5201 /* Write the flow statistics to a netinfo_t record */
5167 5202 static netinfo_t *
5168 5203 mac_write_flow_stats(flow_entry_t *flent)
5169 5204 {
5170 5205 netinfo_t *ninfo;
5171 5206 net_stat_t *nstat;
5172 5207 mac_soft_ring_set_t *mac_srs;
5173 5208 mac_rx_stats_t *mac_rx_stat;
5174 5209 mac_tx_stats_t *mac_tx_stat;
5175 5210 int i;
5176 5211
5177 5212 ninfo = kmem_zalloc(sizeof (netinfo_t), KM_NOSLEEP);
5178 5213 if (ninfo == NULL)
5179 5214 return (NULL);
5180 5215 nstat = kmem_zalloc(sizeof (net_stat_t), KM_NOSLEEP);
5181 5216 if (nstat == NULL) {
5182 5217 kmem_free(ninfo, sizeof (netinfo_t));
5183 5218 return (NULL);
5184 5219 }
5185 5220
5186 5221 nstat->ns_name = flent->fe_flow_name;
5187 5222 for (i = 0; i < flent->fe_rx_srs_cnt; i++) {
5188 5223 mac_srs = (mac_soft_ring_set_t *)flent->fe_rx_srs[i];
5189 5224 mac_rx_stat = &mac_srs->srs_rx.sr_stat;
5190 5225
5191 5226 nstat->ns_ibytes += mac_rx_stat->mrs_intrbytes +
5192 5227 mac_rx_stat->mrs_pollbytes + mac_rx_stat->mrs_lclbytes;
5193 5228 nstat->ns_ipackets += mac_rx_stat->mrs_intrcnt +
5194 5229 mac_rx_stat->mrs_pollcnt + mac_rx_stat->mrs_lclcnt;
5195 5230 nstat->ns_oerrors += mac_rx_stat->mrs_ierrors;
5196 5231 }
5197 5232
5198 5233 mac_srs = (mac_soft_ring_set_t *)(flent->fe_tx_srs);
5199 5234 if (mac_srs != NULL) {
5200 5235 mac_tx_stat = &mac_srs->srs_tx.st_stat;
5201 5236
5202 5237 nstat->ns_obytes = mac_tx_stat->mts_obytes;
5203 5238 nstat->ns_opackets = mac_tx_stat->mts_opackets;
5204 5239 nstat->ns_oerrors = mac_tx_stat->mts_oerrors;
5205 5240 }
5206 5241
5207 5242 ninfo->ni_record = nstat;
5208 5243 ninfo->ni_size = sizeof (net_stat_t);
5209 5244 ninfo->ni_type = EX_NET_FLSTAT_REC;
5210 5245
5211 5246 return (ninfo);
5212 5247 }
5213 5248
5214 5249 /* Write the link description to a netinfo_t record */
5215 5250 static netinfo_t *
5216 5251 mac_write_link_desc(mac_client_impl_t *mcip)
5217 5252 {
5218 5253 netinfo_t *ninfo;
5219 5254 net_desc_t *ndesc;
5220 5255 flow_entry_t *flent = mcip->mci_flent;
5221 5256
5222 5257 ninfo = kmem_zalloc(sizeof (netinfo_t), KM_NOSLEEP);
5223 5258 if (ninfo == NULL)
5224 5259 return (NULL);
5225 5260 ndesc = kmem_zalloc(sizeof (net_desc_t), KM_NOSLEEP);
5226 5261 if (ndesc == NULL) {
5227 5262 kmem_free(ninfo, sizeof (netinfo_t));
5228 5263 return (NULL);
5229 5264 }
5230 5265
5231 5266 ndesc->nd_name = mcip->mci_name;
5232 5267 ndesc->nd_devname = mcip->mci_name;
5233 5268 ndesc->nd_isv4 = B_TRUE;
5234 5269 /*
5235 5270 * Grab the fe_lock to see a self-consistent fe_flow_desc.
5236 5271 * Updates to the fe_flow_desc are done under the fe_lock
5237 5272 * after removing the flent from the flow table.
5238 5273 */
5239 5274 mutex_enter(&flent->fe_lock);
5240 5275 bcopy(flent->fe_flow_desc.fd_src_mac, ndesc->nd_ehost, ETHERADDRL);
5241 5276 mutex_exit(&flent->fe_lock);
5242 5277
5243 5278 ninfo->ni_record = ndesc;
5244 5279 ninfo->ni_size = sizeof (net_desc_t);
5245 5280 ninfo->ni_type = EX_NET_LNDESC_REC;
5246 5281
5247 5282 return (ninfo);
5248 5283 }
5249 5284
5250 5285 /* Write the link statistics to a netinfo_t record */
5251 5286 static netinfo_t *
5252 5287 mac_write_link_stats(mac_client_impl_t *mcip)
5253 5288 {
5254 5289 netinfo_t *ninfo;
5255 5290 net_stat_t *nstat;
5256 5291 flow_entry_t *flent;
5257 5292 mac_soft_ring_set_t *mac_srs;
5258 5293 mac_rx_stats_t *mac_rx_stat;
5259 5294 mac_tx_stats_t *mac_tx_stat;
5260 5295 int i;
5261 5296
5262 5297 ninfo = kmem_zalloc(sizeof (netinfo_t), KM_NOSLEEP);
5263 5298 if (ninfo == NULL)
5264 5299 return (NULL);
5265 5300 nstat = kmem_zalloc(sizeof (net_stat_t), KM_NOSLEEP);
5266 5301 if (nstat == NULL) {
5267 5302 kmem_free(ninfo, sizeof (netinfo_t));
5268 5303 return (NULL);
5269 5304 }
5270 5305
5271 5306 nstat->ns_name = mcip->mci_name;
5272 5307 flent = mcip->mci_flent;
5273 5308 if (flent != NULL) {
5274 5309 for (i = 0; i < flent->fe_rx_srs_cnt; i++) {
5275 5310 mac_srs = (mac_soft_ring_set_t *)flent->fe_rx_srs[i];
5276 5311 mac_rx_stat = &mac_srs->srs_rx.sr_stat;
5277 5312
5278 5313 nstat->ns_ibytes += mac_rx_stat->mrs_intrbytes +
5279 5314 mac_rx_stat->mrs_pollbytes +
5280 5315 mac_rx_stat->mrs_lclbytes;
5281 5316 nstat->ns_ipackets += mac_rx_stat->mrs_intrcnt +
5282 5317 mac_rx_stat->mrs_pollcnt + mac_rx_stat->mrs_lclcnt;
5283 5318 nstat->ns_oerrors += mac_rx_stat->mrs_ierrors;
5284 5319 }
5285 5320 }
5286 5321
5287 5322 mac_srs = (mac_soft_ring_set_t *)(mcip->mci_flent->fe_tx_srs);
5288 5323 if (mac_srs != NULL) {
5289 5324 mac_tx_stat = &mac_srs->srs_tx.st_stat;
5290 5325
5291 5326 nstat->ns_obytes = mac_tx_stat->mts_obytes;
5292 5327 nstat->ns_opackets = mac_tx_stat->mts_opackets;
5293 5328 nstat->ns_oerrors = mac_tx_stat->mts_oerrors;
5294 5329 }
5295 5330
5296 5331 ninfo->ni_record = nstat;
5297 5332 ninfo->ni_size = sizeof (net_stat_t);
5298 5333 ninfo->ni_type = EX_NET_LNSTAT_REC;
5299 5334
5300 5335 return (ninfo);
5301 5336 }
5302 5337
5303 5338 typedef struct i_mac_log_state_s {
5304 5339 boolean_t mi_last;
5305 5340 int mi_fenable;
5306 5341 int mi_lenable;
5307 5342 list_t *mi_list;
5308 5343 } i_mac_log_state_t;
5309 5344
5310 5345 /*
5311 5346 * For a given flow, if the description has not been logged before, do it now.
5312 5347 * If it is a VNIC, then we have collected information about it from the MAC
5313 5348 * table, so skip it.
5314 5349 *
5315 5350 * Called through mac_flow_walk_nolock()
5316 5351 *
5317 5352 * Return 0 if successful.
5318 5353 */
5319 5354 static int
5320 5355 mac_log_flowinfo(flow_entry_t *flent, void *arg)
5321 5356 {
5322 5357 mac_client_impl_t *mcip = flent->fe_mcip;
5323 5358 i_mac_log_state_t *lstate = arg;
5324 5359 netinfo_t *ninfo;
5325 5360
5326 5361 if (mcip == NULL)
5327 5362 return (0);
5328 5363
5329 5364 /*
5330 5365 	 * If the name starts with "vnic", and fe_user_generated is true (to
5331 5366 	 * exclude the mcast and active flow entries created implicitly for
5332 5367 	 * a vnic), it is a VNIC flow; e.g. vnic1 is a VNIC flow, while
5333 5368 	 * vnic/bge1/mcast1 is not and neither is vnic/bge1/active.
5334 5369 */
5335 5370 if (strncasecmp(flent->fe_flow_name, "vnic", 4) == 0 &&
5336 5371 (flent->fe_type & FLOW_USER) != 0) {
5337 5372 return (0);
5338 5373 }
5339 5374
5340 5375 if (!flent->fe_desc_logged) {
5341 5376 /*
5342 5377 		 * We don't return an error because we want to continue
5343 5378 		 * the walk; if this is the last walk, we need to reset
5344 5379 		 * fe_desc_logged in all the flows.
5345 5380 */
5346 5381 if ((ninfo = mac_write_flow_desc(flent, mcip)) == NULL)
5347 5382 return (0);
5348 5383 list_insert_tail(lstate->mi_list, ninfo);
5349 5384 flent->fe_desc_logged = B_TRUE;
5350 5385 }
5351 5386
5352 5387 /*
5353 5388 * Regardless of the error, we want to proceed in case we have to
5354 5389 * reset fe_desc_logged.
5355 5390 */
5356 5391 ninfo = mac_write_flow_stats(flent);
5357 5392 if (ninfo == NULL)
5358 5393 return (-1);
5359 5394
5360 5395 list_insert_tail(lstate->mi_list, ninfo);
5361 5396
5362 5397 if (mcip != NULL && !(mcip->mci_state_flags & MCIS_DESC_LOGGED))
5363 5398 flent->fe_desc_logged = B_FALSE;
5364 5399
5365 5400 return (0);
5366 5401 }
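
mac_log_flowinfo() follows the walker-callback contract used by
mac_flow_walk_nolock(): per-walk state travels through the void * argument, a
zero return continues the walk and a nonzero return aborts it. A generic
sketch of that contract; walk() and count_cb() are illustrative:

#include <stdio.h>

typedef int (*walk_cb_t)(int item, void *arg);

/* Walk items, invoking cb on each; a nonzero return aborts the walk. */
static int
walk(const int *items, int n, walk_cb_t cb, void *arg)
{
	for (int i = 0; i < n; i++) {
		int rc = cb(items[i], arg);

		if (rc != 0)
			return (rc);
	}
	return (0);
}

/* Example callback: counts items, aborts on a negative one. */
static int
count_cb(int item, void *arg)
{
	if (item < 0)
		return (-1);
	(*(int *)arg)++;
	return (0);
}

int
main(void)
{
	int items[] = { 1, 2, 3 };
	int count = 0;

	(void) walk(items, 3, count_cb, &count);
	printf("visited %d\n", count);
	return (0);
}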
5367 5402
5368 5403 /*
5369 5404 * Log the description for each mac client of this mac_impl_t, if it
5370 5405 * hasn't already been done. Additionally, log statistics for the link as
5371 5406 * well. Walk the flow table and log information for each flow as well.
5372 5407  * If it is the last walk (mi_last), then we turn off MCIS_DESC_LOGGED (and
5373 5408 * also fe_desc_logged, if flow logging is on) since we want to log the
5374 5409 * description if and when logging is restarted.
5375 5410 *
5376 5411 * Return 0 upon success or -1 upon failure
5377 5412 */
5378 5413 static int
5379 5414 i_mac_impl_log(mac_impl_t *mip, i_mac_log_state_t *lstate)
5380 5415 {
5381 5416 mac_client_impl_t *mcip;
5382 5417 netinfo_t *ninfo;
5383 5418
5384 5419 i_mac_perim_enter(mip);
5385 5420 /*
5386 5421 * Only walk the client list for NIC and etherstub
5387 5422 */
5388 5423 if ((mip->mi_state_flags & MIS_DISABLED) ||
5389 5424 ((mip->mi_state_flags & MIS_IS_VNIC) &&
5390 5425 (mac_get_lower_mac_handle((mac_handle_t)mip) != NULL))) {
5391 5426 i_mac_perim_exit(mip);
5392 5427 return (0);
5393 5428 }
5394 5429
5395 5430 for (mcip = mip->mi_clients_list; mcip != NULL;
5396 5431 mcip = mcip->mci_client_next) {
5397 5432 if (!MCIP_DATAPATH_SETUP(mcip))
5398 5433 continue;
5399 5434 if (lstate->mi_lenable) {
5400 5435 if (!(mcip->mci_state_flags & MCIS_DESC_LOGGED)) {
5401 5436 ninfo = mac_write_link_desc(mcip);
5402 5437 if (ninfo == NULL) {
5403 5438 /*
5404 5439 				 * We can't terminate the walk if this is the last
5405 5440 				 * walk, else there might be some links with
5406 5441 				 * MCIS_DESC_LOGGED set, which means
5407 5442 * their description won't be logged the next
5408 5443 * time logging is started (similarly for the
5409 5444 * flows within such links). We can continue
5410 5445 * without walking the flow table (i.e. to
5411 5446 * set fe_desc_logged to false) because we
5412 5447 * won't have written any flow stuff for this
5413 5448 * link as we haven't logged the link itself.
5414 5449 */
5415 5450 i_mac_perim_exit(mip);
5416 5451 if (lstate->mi_last)
5417 5452 return (0);
5418 5453 else
5419 5454 return (-1);
5420 5455 }
5421 5456 mcip->mci_state_flags |= MCIS_DESC_LOGGED;
5422 5457 list_insert_tail(lstate->mi_list, ninfo);
5423 5458 }
5424 5459 }
5425 5460
5426 5461 ninfo = mac_write_link_stats(mcip);
5427 5462 if (ninfo == NULL && !lstate->mi_last) {
5428 5463 i_mac_perim_exit(mip);
5429 5464 return (-1);
5430 5465 }
5431 5466 list_insert_tail(lstate->mi_list, ninfo);
5432 5467
5433 5468 if (lstate->mi_last)
5434 5469 mcip->mci_state_flags &= ~MCIS_DESC_LOGGED;
5435 5470
5436 5471 if (lstate->mi_fenable) {
5437 5472 if (mcip->mci_subflow_tab != NULL) {
5438 5473 (void) mac_flow_walk_nolock(
5439 5474 mcip->mci_subflow_tab, mac_log_flowinfo,
5440 5475 lstate);
5441 5476 }
5442 5477 }
5443 5478 }
5444 5479 i_mac_perim_exit(mip);
5445 5480 return (0);
5446 5481 }
5447 5482
5448 5483 /*
5449 5484 * modhash walker function to add a mac_impl_t to a list
5450 5485 */
5451 5486 /*ARGSUSED*/
5452 5487 static uint_t
5453 5488 i_mac_impl_list_walker(mod_hash_key_t key, mod_hash_val_t *val, void *arg)
5454 5489 {
5455 5490 list_t *list = (list_t *)arg;
5456 5491 mac_impl_t *mip = (mac_impl_t *)val;
5457 5492
5458 5493 if ((mip->mi_state_flags & MIS_DISABLED) == 0) {
5459 5494 list_insert_tail(list, mip);
5460 5495 mip->mi_ref++;
5461 5496 }
5462 5497
5463 5498 return (MH_WALK_CONTINUE);
5464 5499 }
5465 5500
5466 5501 void
5467 5502 i_mac_log_info(list_t *net_log_list, i_mac_log_state_t *lstate)
5468 5503 {
5469 5504 list_t mac_impl_list;
5470 5505 mac_impl_t *mip;
5471 5506 netinfo_t *ninfo;
5472 5507
5473 5508 /* Create list of mac_impls */
5474 5509 ASSERT(RW_LOCK_HELD(&i_mac_impl_lock));
5475 5510 list_create(&mac_impl_list, sizeof (mac_impl_t), offsetof(mac_impl_t,
5476 5511 mi_node));
5477 5512 mod_hash_walk(i_mac_impl_hash, i_mac_impl_list_walker, &mac_impl_list);
5478 5513 rw_exit(&i_mac_impl_lock);
5479 5514
5480 5515 /* Create log entries for each mac_impl */
5481 5516 for (mip = list_head(&mac_impl_list); mip != NULL;
5482 5517 mip = list_next(&mac_impl_list, mip)) {
5483 5518 if (i_mac_impl_log(mip, lstate) != 0)
5484 5519 continue;
5485 5520 }
5486 5521
5487 5522 /* Remove elements and destroy list of mac_impls */
5488 5523 rw_enter(&i_mac_impl_lock, RW_WRITER);
5489 5524 while ((mip = list_remove_tail(&mac_impl_list)) != NULL) {
5490 5525 mip->mi_ref--;
5491 5526 }
5492 5527 rw_exit(&i_mac_impl_lock);
5493 5528 list_destroy(&mac_impl_list);
5494 5529
5495 5530 /*
5496 5531 * Write log entries to files outside of locks, free associated
5497 5532 * structures, and remove entries from the list.
5498 5533 */
5499 5534 while ((ninfo = list_head(net_log_list)) != NULL) {
5500 5535 (void) exacct_commit_netinfo(ninfo->ni_record, ninfo->ni_type);
5501 5536 list_remove(net_log_list, ninfo);
5502 5537 kmem_free(ninfo->ni_record, ninfo->ni_size);
5503 5538 kmem_free(ninfo, sizeof (*ninfo));
5504 5539 }
5505 5540 list_destroy(net_log_list);
5506 5541 }
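
i_mac_log_info() is a two-phase pattern: reference and collect the mac_impls
while i_mac_impl_lock is held, then drop the lock before the slow exacct
writes. A userland analog of snapshot-under-lock, commit-outside-lock, using a
pthread mutex as the table lock:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct ent {
	struct ent *e_next;
	int e_val;
} ent_t;

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static int table[4] = { 1, 2, 3, 4 };	/* stand-in for the mac_impl table */

int
main(void)
{
	ent_t *head = NULL, *e;

	/* Phase 1: snapshot the shared table under the lock. */
	(void) pthread_mutex_lock(&table_lock);
	for (int i = 0; i < 4; i++) {
		if ((e = malloc(sizeof (*e))) == NULL)
			break;
		e->e_val = table[i];
		e->e_next = head;
		head = e;
	}
	(void) pthread_mutex_unlock(&table_lock);

	/* Phase 2: do the slow I/O with no locks held. */
	while ((e = head) != NULL) {
		head = e->e_next;
		printf("commit %d\n", e->e_val);
		free(e);
	}
	return (0);
}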
5507 5542
5508 5543 /*
5509 5544  * The timeout handler that runs every mac_logging_interval seconds and logs
5510 5545 * link and/or flow information.
5511 5546 */
5512 5547 /* ARGSUSED */
5513 5548 void
5514 5549 mac_log_linkinfo(void *arg)
5515 5550 {
5516 5551 i_mac_log_state_t lstate;
5517 5552 list_t net_log_list;
5518 5553
5519 5554 list_create(&net_log_list, sizeof (netinfo_t),
5520 5555 offsetof(netinfo_t, ni_link));
5521 5556
5522 5557 rw_enter(&i_mac_impl_lock, RW_READER);
5523 5558 if (!mac_flow_log_enable && !mac_link_log_enable) {
5524 5559 rw_exit(&i_mac_impl_lock);
5525 5560 return;
5526 5561 }
5527 5562 lstate.mi_fenable = mac_flow_log_enable;
5528 5563 lstate.mi_lenable = mac_link_log_enable;
5529 5564 lstate.mi_last = B_FALSE;
5530 5565 lstate.mi_list = &net_log_list;
5531 5566
5532 5567 /* Write log entries for each mac_impl in the list */
5533 5568 i_mac_log_info(&net_log_list, &lstate);
5534 5569
5535 5570 if (mac_flow_log_enable || mac_link_log_enable) {
5536 5571 mac_logging_timer = timeout(mac_log_linkinfo, NULL,
5537 5572 SEC_TO_TICK(mac_logging_interval));
5538 5573 }
5539 5574 }
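
The handler re-arms its own timeout only while a log type is still enabled, so
clearing both enable flags lets the callout chain lapse on its own. A
loop-based userland analog of that self-limiting rearm; log_enabled and
interval are illustrative:

#include <stdio.h>
#include <unistd.h>

static volatile int log_enabled = 1;
static unsigned interval = 1;		/* seconds, like mac_logging_interval */

/* One logging pass; re-run only while logging is still enabled. */
static void
log_tick(void)
{
	puts("log pass");
}

int
main(void)
{
	int passes = 3;

	while (log_enabled) {
		log_tick();
		if (--passes == 0)
			log_enabled = 0;	/* simulate mac_stop_logusage() */
		else
			sleep(interval);
	}
	return (0);
}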
5540 5575
5541 5576 typedef struct i_mac_fastpath_state_s {
5542 5577 boolean_t mf_disable;
5543 5578 int mf_err;
5544 5579 } i_mac_fastpath_state_t;
5545 5580
5546 5581 /* modhash walker function to enable or disable fastpath */
5547 5582 /*ARGSUSED*/
5548 5583 static uint_t
5549 5584 i_mac_fastpath_walker(mod_hash_key_t key, mod_hash_val_t *val,
5550 5585 void *arg)
5551 5586 {
5552 5587 i_mac_fastpath_state_t *state = arg;
5553 5588 mac_handle_t mh = (mac_handle_t)val;
5554 5589
5555 5590 if (state->mf_disable)
5556 5591 state->mf_err = mac_fastpath_disable(mh);
5557 5592 else
5558 5593 mac_fastpath_enable(mh);
5559 5594
5560 5595 return (state->mf_err == 0 ? MH_WALK_CONTINUE : MH_WALK_TERMINATE);
5561 5596 }
5562 5597
5563 5598 /*
5564 5599 * Start the logging timer.
5565 5600 */
5566 5601 int
5567 5602 mac_start_logusage(mac_logtype_t type, uint_t interval)
5568 5603 {
5569 5604 i_mac_fastpath_state_t dstate = {B_TRUE, 0};
5570 5605 i_mac_fastpath_state_t estate = {B_FALSE, 0};
5571 5606 int err;
5572 5607
5573 5608 rw_enter(&i_mac_impl_lock, RW_WRITER);
5574 5609 switch (type) {
5575 5610 case MAC_LOGTYPE_FLOW:
5576 5611 if (mac_flow_log_enable) {
5577 5612 rw_exit(&i_mac_impl_lock);
5578 5613 return (0);
5579 5614 }
5580 5615 /* FALLTHRU */
5581 5616 case MAC_LOGTYPE_LINK:
5582 5617 if (mac_link_log_enable) {
5583 5618 rw_exit(&i_mac_impl_lock);
5584 5619 return (0);
5585 5620 }
5586 5621 break;
5587 5622 default:
5588 5623 ASSERT(0);
5589 5624 }
5590 5625
5591 5626 /* Disable fastpath */
5592 5627 mod_hash_walk(i_mac_impl_hash, i_mac_fastpath_walker, &dstate);
5593 5628 if ((err = dstate.mf_err) != 0) {
5594 5629 /* Reenable fastpath */
5595 5630 mod_hash_walk(i_mac_impl_hash, i_mac_fastpath_walker, &estate);
5596 5631 rw_exit(&i_mac_impl_lock);
5597 5632 return (err);
5598 5633 }
5599 5634
5600 5635 switch (type) {
5601 5636 case MAC_LOGTYPE_FLOW:
5602 5637 mac_flow_log_enable = B_TRUE;
5603 5638 /* FALLTHRU */
5604 5639 case MAC_LOGTYPE_LINK:
5605 5640 mac_link_log_enable = B_TRUE;
5606 5641 break;
5607 5642 }
5608 5643
5609 5644 mac_logging_interval = interval;
5610 5645 rw_exit(&i_mac_impl_lock);
5611 5646 mac_log_linkinfo(NULL);
5612 5647 return (0);
5613 5648 }
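
The fastpath walks make the setup all-or-nothing: the disable walk terminates
at the first failure, and a second walk re-enables whatever was disabled. A
compact sketch of the same idea; fastpath_disable() and the simulated EBUSY
are illustrative:

#include <errno.h>

#define	NLINKS	8

static int fastpath_on[NLINKS] = { 1, 1, 1, 1, 1, 1, 1, 1 };

static int
fastpath_disable(int i)
{
	if (i == 5)			/* simulate one link refusing */
		return (EBUSY);
	fastpath_on[i] = 0;
	return (0);
}

/* Disable fastpath everywhere; on any failure, undo what was done. */
static int
disable_all(void)
{
	int i, err;

	for (i = 0; i < NLINKS; i++) {
		if ((err = fastpath_disable(i)) != 0) {
			while (--i >= 0)
				fastpath_on[i] = 1;	/* re-enable */
			return (err);
		}
	}
	return (0);
}

int
main(void)
{
	return (disable_all() == 0 ? 0 : 1);
}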
5614 5649
5615 5650 /*
5616 5651 * Stop the logging timer if both link and flow logging are turned off.
5617 5652 */
5618 5653 void
5619 5654 mac_stop_logusage(mac_logtype_t type)
5620 5655 {
5621 5656 i_mac_log_state_t lstate;
5622 5657 i_mac_fastpath_state_t estate = {B_FALSE, 0};
5623 5658 list_t net_log_list;
5624 5659
5625 5660 list_create(&net_log_list, sizeof (netinfo_t),
5626 5661 offsetof(netinfo_t, ni_link));
5627 5662
5628 5663 rw_enter(&i_mac_impl_lock, RW_WRITER);
5629 5664
5630 5665 lstate.mi_fenable = mac_flow_log_enable;
5631 5666 lstate.mi_lenable = mac_link_log_enable;
5632 5667 lstate.mi_list = &net_log_list;
5633 5668
5634 5669 /* Last walk */
5635 5670 lstate.mi_last = B_TRUE;
5636 5671
5637 5672 switch (type) {
5638 5673 case MAC_LOGTYPE_FLOW:
5639 5674 if (lstate.mi_fenable) {
5640 5675 ASSERT(mac_link_log_enable);
5641 5676 mac_flow_log_enable = B_FALSE;
5642 5677 mac_link_log_enable = B_FALSE;
5643 5678 break;
5644 5679 }
5645 5680 /* FALLTHRU */
5646 5681 case MAC_LOGTYPE_LINK:
5647 5682 if (!lstate.mi_lenable || mac_flow_log_enable) {
5648 5683 rw_exit(&i_mac_impl_lock);
5649 5684 return;
5650 5685 }
5651 5686 mac_link_log_enable = B_FALSE;
5652 5687 break;
5653 5688 default:
5654 5689 ASSERT(0);
5655 5690 }
5656 5691
5657 5692 /* Reenable fastpath */
5658 5693 mod_hash_walk(i_mac_impl_hash, i_mac_fastpath_walker, &estate);
5659 5694
5660 5695 (void) untimeout(mac_logging_timer);
5661 5696 mac_logging_timer = 0;
5662 5697
5663 5698 /* Write log entries for each mac_impl in the list */
5664 5699 i_mac_log_info(&net_log_list, &lstate);
5665 5700 }
5666 5701
5667 5702 /*
5668 5703 * Walk the rx and tx SRS/SRs for a flow and update the priority value.
5669 5704 */
5670 5705 void
5671 5706 mac_flow_update_priority(mac_client_impl_t *mcip, flow_entry_t *flent)
5672 5707 {
5673 5708 pri_t pri;
5674 5709 int count;
5675 5710 mac_soft_ring_set_t *mac_srs;
5676 5711
5677 5712 if (flent->fe_rx_srs_cnt <= 0)
5678 5713 return;
5679 5714
5680 5715 if (((mac_soft_ring_set_t *)flent->fe_rx_srs[0])->srs_type ==
5681 5716 SRST_FLOW) {
5682 5717 pri = FLOW_PRIORITY(mcip->mci_min_pri,
5683 5718 mcip->mci_max_pri,
5684 5719 flent->fe_resource_props.mrp_priority);
5685 5720 } else {
5686 5721 pri = mcip->mci_max_pri;
5687 5722 }
5688 5723
5689 5724 for (count = 0; count < flent->fe_rx_srs_cnt; count++) {
5690 5725 mac_srs = flent->fe_rx_srs[count];
5691 5726 mac_update_srs_priority(mac_srs, pri);
5692 5727 }
5693 5728 /*
5694 5729 * If we have a Tx SRS, we need to modify all the threads associated
5695 5730 * with it.
5696 5731 */
5697 5732 if (flent->fe_tx_srs != NULL)
5698 5733 mac_update_srs_priority(flent->fe_tx_srs, pri);
5699 5734 }
5700 5735
5701 5736 /*
5702 5737 * RX and TX rings are reserved according to different semantics depending
5703 5738 * on the requests from the MAC clients and type of rings:
5704 5739 *
5705 5740 * On the Tx side, by default we reserve individual rings, independently from
5706 5741 * the groups.
5707 5742 *
5708 5743 * On the Rx side, the reservation is at the granularity of the group
5709 5744 * of rings, and used for v12n level 1 only. It has a special case for the
5710 5745 * primary client.
5711 5746 *
5712 5747 * If a share is allocated to a MAC client, we allocate a TX group and an
5713 5748 * RX group to the client, and assign TX rings and RX rings to these
5714 5749 * groups according to information gathered from the driver through
5715 5750 * the share capability.
5716 5751 *
5717 5752  * The foreseeable evolution of Rx rings will handle v12n level 2 and higher
5718 5753 * to allocate individual rings out of a group and program the hw classifier
5719 5754 * based on IP address or higher level criteria.
5720 5755 */
5721 5756
5722 5757 /*
5723 5758 * mac_reserve_tx_ring()
5724 5759  * Reserve an unused ring by marking it with the MR_INUSE state.
5725 5760 * As reserved, the ring is ready to function.
5726 5761 *
5727 5762 * Notes for Hybrid I/O:
5728 5763 *
5729 5764 * If a specific ring is needed, it is specified through the desired_ring
5730 5765 * argument. Otherwise that argument is set to NULL.
5731 5766  * If the desired ring was previously allocated to another client, this
5732 5767 * function swaps it with a new ring from the group of unassigned rings.
5733 5768 */
5734 5769 mac_ring_t *
5735 5770 mac_reserve_tx_ring(mac_impl_t *mip, mac_ring_t *desired_ring)
5736 5771 {
5737 5772 mac_group_t *group;
5738 5773 mac_grp_client_t *mgcp;
5739 5774 mac_client_impl_t *mcip;
5740 5775 mac_soft_ring_set_t *srs;
5741 5776
5742 5777 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
5743 5778
5744 5779 /*
5745 5780 * Find an available ring and start it before changing its status.
5746 5781 * The unassigned rings are at the end of the mi_tx_groups
5747 5782 * array.
5748 5783 */
5749 5784 group = MAC_DEFAULT_TX_GROUP(mip);
5750 5785
5751 5786 /* Can't take the default ring out of the default group */
5752 5787 ASSERT(desired_ring != (mac_ring_t *)mip->mi_default_tx_ring);
5753 5788
5754 5789 if (desired_ring->mr_state == MR_FREE) {
5755 5790 ASSERT(MAC_GROUP_NO_CLIENT(group));
5756 5791 if (mac_start_ring(desired_ring) != 0)
5757 5792 return (NULL);
5758 5793 return (desired_ring);
5759 5794 }
5760 5795 /*
5761 5796 	 * There are clients using this ring, so let's move the
5762 5797 	 * clients off of it.
5763 5798 */
5764 5799 for (mgcp = group->mrg_clients; mgcp != NULL; mgcp = mgcp->mgc_next) {
5765 5800 mcip = mgcp->mgc_client;
5766 5801 mac_tx_client_quiesce((mac_client_handle_t)mcip);
5767 5802 srs = MCIP_TX_SRS(mcip);
5768 5803 ASSERT(mac_tx_srs_ring_present(srs, desired_ring));
5769 5804 mac_tx_invoke_callbacks(mcip,
5770 5805 (mac_tx_cookie_t)mac_tx_srs_get_soft_ring(srs,
5771 5806 desired_ring));
5772 5807 mac_tx_srs_del_ring(srs, desired_ring);
5773 5808 mac_tx_client_restart((mac_client_handle_t)mcip);
5774 5809 }
5775 5810 return (desired_ring);
5776 5811 }
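
Note the bracketing in the loop above: each client is quiesced before its soft
ring is torn down, and restarted afterwards, so the datapath never observes a
half-modified SRS. A generic sketch of the quiesce/modify/restart bracket;
client_t and take_ring() are illustrative:

#include <stdio.h>

typedef struct client {
	const char *c_name;
	int c_quiesced;
} client_t;

static void
client_quiesce(client_t *c)
{
	c->c_quiesced = 1;	/* new traffic blocked, in-flight drained */
}

static void
client_restart(client_t *c)
{
	c->c_quiesced = 0;
}

/* Reconfigure a client's datapath only while it is quiesced. */
static void
take_ring(client_t *c)
{
	client_quiesce(c);
	printf("%s: ring removed while quiesced\n", c->c_name);
	client_restart(c);
}

int
main(void)
{
	client_t c = { "vnic1", 0 };

	take_ring(&c);
	return (0);
}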
5777 5812
5778 5813 /*
5779 5814 * For a reserved group with multiple clients, return the primary client.
5780 5815 */
5781 5816 static mac_client_impl_t *
5782 5817 mac_get_grp_primary(mac_group_t *grp)
5783 5818 {
5784 5819 mac_grp_client_t *mgcp = grp->mrg_clients;
5785 5820 mac_client_impl_t *mcip;
5786 5821
5787 5822 while (mgcp != NULL) {
5788 5823 mcip = mgcp->mgc_client;
5789 5824 if (mcip->mci_flent->fe_type & FLOW_PRIMARY_MAC)
5790 5825 return (mcip);
5791 5826 mgcp = mgcp->mgc_next;
5792 5827 }
5793 5828 return (NULL);
5794 5829 }
5795 5830
5796 5831 /*
5797 5832 * Hybrid I/O specifies the ring that should be given to a share.
5798 5833 * If the ring is already used by clients, then we need to release
5799 5834 * the ring back to the default group so that we can give it to
5800 5835 * the share. This means the clients using this ring now get a
5801 5836 * replacement ring. If there aren't any replacement rings, this
5802 5837 * function returns a failure.
5803 5838 */
5804 5839 static int
5805 5840 mac_reclaim_ring_from_grp(mac_impl_t *mip, mac_ring_type_t ring_type,
5806 5841 mac_ring_t *ring, mac_ring_t **rings, int nrings)
5807 5842 {
5808 5843 mac_group_t *group = (mac_group_t *)ring->mr_gh;
5809 5844 mac_resource_props_t *mrp;
5810 5845 mac_client_impl_t *mcip;
5811 5846 mac_group_t *defgrp;
5812 5847 mac_ring_t *tring;
5813 5848 mac_group_t *tgrp;
5814 5849 int i;
5815 5850 int j;
5816 5851
5817 5852 mcip = MAC_GROUP_ONLY_CLIENT(group);
5818 5853 if (mcip == NULL)
5819 5854 mcip = mac_get_grp_primary(group);
5820 5855 ASSERT(mcip != NULL);
5821 5856 ASSERT(mcip->mci_share == NULL);
5822 5857
5823 5858 mrp = MCIP_RESOURCE_PROPS(mcip);
5824 5859 if (ring_type == MAC_RING_TYPE_RX) {
5825 5860 defgrp = mip->mi_rx_donor_grp;
5826 5861 if ((mrp->mrp_mask & MRP_RX_RINGS) == 0) {
5827 5862 /* Need to put this mac client in the default group */
5828 5863 if (mac_rx_switch_group(mcip, group, defgrp) != 0)
5829 5864 return (ENOSPC);
5830 5865 } else {
5831 5866 /*
5832 5867 * Switch this ring with some other ring from
5833 5868 * the default group.
5834 5869 */
5835 5870 for (tring = defgrp->mrg_rings; tring != NULL;
5836 5871 tring = tring->mr_next) {
5837 5872 if (tring->mr_index == 0)
5838 5873 continue;
5839 5874 for (j = 0; j < nrings; j++) {
5840 5875 if (rings[j] == tring)
5841 5876 break;
5842 5877 }
5843 5878 if (j >= nrings)
5844 5879 break;
5845 5880 }
5846 5881 if (tring == NULL)
5847 5882 return (ENOSPC);
5848 5883 if (mac_group_mov_ring(mip, group, tring) != 0)
5849 5884 return (ENOSPC);
5850 5885 if (mac_group_mov_ring(mip, defgrp, ring) != 0) {
5851 5886 (void) mac_group_mov_ring(mip, defgrp, tring);
5852 5887 return (ENOSPC);
5853 5888 }
5854 5889 }
5855 5890 ASSERT(ring->mr_gh == (mac_group_handle_t)defgrp);
5856 5891 return (0);
5857 5892 }
5858 5893
5859 5894 defgrp = MAC_DEFAULT_TX_GROUP(mip);
5860 5895 if (ring == (mac_ring_t *)mip->mi_default_tx_ring) {
5861 5896 /*
5862 5897 * See if we can get a spare ring to replace the default
5863 5898 * ring.
5864 5899 */
5865 5900 if (defgrp->mrg_cur_count == 1) {
5866 5901 /*
5867 5902 * Need to get a ring from another client, see if
5868 5903 * there are any clients that can be moved to
5869 5904 * the default group, thereby freeing some rings.
5870 5905 */
5871 5906 for (i = 0; i < mip->mi_tx_group_count; i++) {
5872 5907 tgrp = &mip->mi_tx_groups[i];
5873 5908 if (tgrp->mrg_state ==
5874 5909 MAC_GROUP_STATE_REGISTERED) {
5875 5910 continue;
5876 5911 }
5877 5912 mcip = MAC_GROUP_ONLY_CLIENT(tgrp);
5878 5913 if (mcip == NULL)
5879 5914 mcip = mac_get_grp_primary(tgrp);
5880 5915 ASSERT(mcip != NULL);
5881 5916 mrp = MCIP_RESOURCE_PROPS(mcip);
5882 5917 if ((mrp->mrp_mask & MRP_TX_RINGS) == 0) {
5883 5918 ASSERT(tgrp->mrg_cur_count == 1);
5884 5919 /*
5885 5920 * If this ring is part of the
5886 5921 * rings asked by the share we cannot
5887 5922 * use it as the default ring.
5888 5923 */
5889 5924 for (j = 0; j < nrings; j++) {
5890 5925 if (rings[j] == tgrp->mrg_rings)
5891 5926 break;
5892 5927 }
5893 5928 if (j < nrings)
5894 5929 continue;
5895 5930 mac_tx_client_quiesce(
5896 5931 (mac_client_handle_t)mcip);
5897 5932 mac_tx_switch_group(mcip, tgrp,
5898 5933 defgrp);
5899 5934 mac_tx_client_restart(
5900 5935 (mac_client_handle_t)mcip);
5901 5936 break;
5902 5937 }
5903 5938 }
5904 5939 /*
5905 5940 * All the rings are reserved, can't give up the
5906 5941 * default ring.
5907 5942 */
5908 5943 if (defgrp->mrg_cur_count <= 1)
5909 5944 return (ENOSPC);
5910 5945 }
5911 5946 /*
5912 5947 * Swap the default ring with another.
5913 5948 */
5914 5949 for (tring = defgrp->mrg_rings; tring != NULL;
5915 5950 tring = tring->mr_next) {
5916 5951 /*
5917 5952 * If this ring is part of the rings asked by the
5918 5953 * share we cannot use it as the default ring.
5919 5954 */
5920 5955 for (j = 0; j < nrings; j++) {
5921 5956 if (rings[j] == tring)
5922 5957 break;
5923 5958 }
5924 5959 if (j >= nrings)
5925 5960 break;
5926 5961 }
5927 5962 ASSERT(tring != NULL);
5928 5963 mip->mi_default_tx_ring = (mac_ring_handle_t)tring;
5929 5964 return (0);
5930 5965 }
5931 5966 /*
5932 5967 * The Tx ring is with a group reserved by a MAC client. See if
5933 5968 * we can swap it.
5934 5969 */
5935 5970 ASSERT(group->mrg_state == MAC_GROUP_STATE_RESERVED);
5936 5971 mcip = MAC_GROUP_ONLY_CLIENT(group);
5937 5972 if (mcip == NULL)
5938 5973 mcip = mac_get_grp_primary(group);
5939 5974 ASSERT(mcip != NULL);
5940 5975 mrp = MCIP_RESOURCE_PROPS(mcip);
5941 5976 mac_tx_client_quiesce((mac_client_handle_t)mcip);
5942 5977 if ((mrp->mrp_mask & MRP_TX_RINGS) == 0) {
5943 5978 ASSERT(group->mrg_cur_count == 1);
5944 5979 /* Put this mac client in the default group */
5945 5980 mac_tx_switch_group(mcip, group, defgrp);
5946 5981 } else {
5947 5982 /*
5948 5983 * Switch this ring with some other ring from
5949 5984 * the default group.
5950 5985 */
5951 5986 for (tring = defgrp->mrg_rings; tring != NULL;
5952 5987 tring = tring->mr_next) {
5953 5988 if (tring == (mac_ring_t *)mip->mi_default_tx_ring)
5954 5989 continue;
5955 5990 /*
5956 5991 * If this ring is part of the rings asked by the
5957 5992 * share we cannot use it for swapping.
5958 5993 */
5959 5994 for (j = 0; j < nrings; j++) {
5960 5995 if (rings[j] == tring)
5961 5996 break;
5962 5997 }
5963 5998 if (j >= nrings)
5964 5999 break;
5965 6000 }
5966 6001 if (tring == NULL) {
5967 6002 mac_tx_client_restart((mac_client_handle_t)mcip);
5968 6003 return (ENOSPC);
5969 6004 }
5970 6005 if (mac_group_mov_ring(mip, group, tring) != 0) {
5971 6006 mac_tx_client_restart((mac_client_handle_t)mcip);
5972 6007 return (ENOSPC);
5973 6008 }
5974 6009 if (mac_group_mov_ring(mip, defgrp, ring) != 0) {
5975 6010 (void) mac_group_mov_ring(mip, defgrp, tring);
5976 6011 mac_tx_client_restart((mac_client_handle_t)mcip);
5977 6012 return (ENOSPC);
5978 6013 }
5979 6014 }
5980 6015 mac_tx_client_restart((mac_client_handle_t)mcip);
5981 6016 ASSERT(ring->mr_gh == (mac_group_handle_t)defgrp);
5982 6017 return (0);
5983 6018 }
5984 6019
5985 6020 /*
5986 6021 * Populate a zero-ring group with rings. If the share is non-NULL,
5987 6022 * the rings are chosen according to that share.
5988 6023 * Invoked after allocating a new RX or TX group through
5989 6024 * mac_reserve_rx_group() or mac_reserve_tx_group(), respectively.
5990 6025 * Returns zero on success, an errno otherwise.
5991 6026 */
5992 6027 int
5993 6028 i_mac_group_allocate_rings(mac_impl_t *mip, mac_ring_type_t ring_type,
5994 6029 mac_group_t *src_group, mac_group_t *new_group, mac_share_handle_t share,
5995 6030 uint32_t ringcnt)
5996 6031 {
5997 6032 mac_ring_t **rings, *ring;
5998 6033 uint_t nrings;
5999 6034 int rv = 0, i = 0, j;
6000 6035
6001 6036 ASSERT((ring_type == MAC_RING_TYPE_RX &&
6002 6037 mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) ||
6003 6038 (ring_type == MAC_RING_TYPE_TX &&
6004 6039 mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC));
6005 6040
6006 6041 /*
6007 6042 * First find the rings to allocate to the group.
6008 6043 */
6009 6044 if (share != NULL) {
6010 6045 /* get rings through ms_squery() */
6011 6046 mip->mi_share_capab.ms_squery(share, ring_type, NULL, &nrings);
6012 6047 ASSERT(nrings != 0);
6013 6048 rings = kmem_alloc(nrings * sizeof (mac_ring_handle_t),
6014 6049 KM_SLEEP);
6015 6050 mip->mi_share_capab.ms_squery(share, ring_type,
6016 6051 (mac_ring_handle_t *)rings, &nrings);
6017 6052 for (i = 0; i < nrings; i++) {
6018 6053 /*
6019 6054 * If we have given this ring to a non-default
6020 6055 * group, we need to check if we can get this
6021 6056 * ring.
6022 6057 */
6023 6058 ring = rings[i];
6024 6059 if (ring->mr_gh != (mac_group_handle_t)src_group ||
6025 6060 ring == (mac_ring_t *)mip->mi_default_tx_ring) {
6026 6061 if (mac_reclaim_ring_from_grp(mip, ring_type,
6027 6062 ring, rings, nrings) != 0) {
6028 6063 rv = ENOSPC;
6029 6064 goto bail;
6030 6065 }
6031 6066 }
6032 6067 }
6033 6068 } else {
6034 6069 /*
6035 6070 * Pick one ring from default group.
6036 6071 *
6037 6072 		 * For now, pick the second ring: the first ring (at index 0)
6038 6073 		 * must stay in the default group, since it is the ring which
6039 6074 		 * carries the multicast traffic.
6040 6075 * We need a better way for a driver to indicate this,
6041 6076 * for example a per-ring flag.
6042 6077 */
6043 6078 rings = kmem_alloc(ringcnt * sizeof (mac_ring_handle_t),
6044 6079 KM_SLEEP);
6045 6080 for (ring = src_group->mrg_rings; ring != NULL;
6046 6081 ring = ring->mr_next) {
6047 6082 if (ring_type == MAC_RING_TYPE_RX &&
6048 6083 ring->mr_index == 0) {
6049 6084 continue;
6050 6085 }
6051 6086 if (ring_type == MAC_RING_TYPE_TX &&
6052 6087 ring == (mac_ring_t *)mip->mi_default_tx_ring) {
6053 6088 continue;
6054 6089 }
6055 6090 rings[i++] = ring;
6056 6091 if (i == ringcnt)
6057 6092 break;
6058 6093 }
6059 6094 ASSERT(ring != NULL);
6060 6095 nrings = i;
6061 6096 /* Not enough rings as required */
6062 6097 if (nrings != ringcnt) {
6063 6098 rv = ENOSPC;
6064 6099 goto bail;
6065 6100 }
6066 6101 }
6067 6102
6068 6103 switch (ring_type) {
6069 6104 case MAC_RING_TYPE_RX:
6070 6105 if (src_group->mrg_cur_count - nrings < 1) {
6071 6106 /* we ran out of rings */
6072 6107 rv = ENOSPC;
6073 6108 goto bail;
6074 6109 }
6075 6110
6076 6111 /* move receive rings to new group */
6077 6112 for (i = 0; i < nrings; i++) {
6078 6113 rv = mac_group_mov_ring(mip, new_group, rings[i]);
6079 6114 if (rv != 0) {
6080 6115 /* move rings back on failure */
6081 6116 for (j = 0; j < i; j++) {
6082 6117 (void) mac_group_mov_ring(mip,
6083 6118 src_group, rings[j]);
6084 6119 }
6085 6120 goto bail;
6086 6121 }
6087 6122 }
6088 6123 break;
6089 6124
6090 6125 case MAC_RING_TYPE_TX: {
6091 6126 mac_ring_t *tmp_ring;
6092 6127
6093 6128 /* move the TX rings to the new group */
6094 6129 for (i = 0; i < nrings; i++) {
6095 6130 /* get the desired ring */
6096 6131 tmp_ring = mac_reserve_tx_ring(mip, rings[i]);
6097 6132 if (tmp_ring == NULL) {
6098 6133 rv = ENOSPC;
6099 6134 goto bail;
6100 6135 }
6101 6136 ASSERT(tmp_ring == rings[i]);
6102 6137 rv = mac_group_mov_ring(mip, new_group, rings[i]);
6103 6138 if (rv != 0) {
6104 6139 /* cleanup on failure */
6105 6140 for (j = 0; j < i; j++) {
6106 6141 (void) mac_group_mov_ring(mip,
6107 6142 MAC_DEFAULT_TX_GROUP(mip),
6108 6143 rings[j]);
6109 6144 }
6110 6145 goto bail;
6111 6146 }
6112 6147 }
6113 6148 break;
6114 6149 }
6115 6150 }
6116 6151
6117 6152 /* add group to share */
6118 6153 if (share != NULL)
6119 6154 mip->mi_share_capab.ms_sadd(share, new_group->mrg_driver);
6120 6155
6121 6156 bail:
6122 6157 /* free temporary array of rings */
6123 6158 kmem_free(rings, nrings * sizeof (mac_ring_handle_t));
6124 6159
6125 6160 return (rv);
6126 6161 }
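
Both switch arms above use the same partial-failure discipline: if moving ring
i fails, rings 0..i-1 are moved back so the group is never left
half-populated. A reduced sketch, with mov_ring() as an illustrative stand-in
for mac_group_mov_ring():

#include <errno.h>

#define	N	4

static int moved[N];

static int
mov_ring(int i, int to_new)
{
	if (to_new && i == 2)		/* simulate a failure mid-way */
		return (EIO);
	moved[i] = to_new;
	return (0);
}

/* Move all rings to the new group, or none: undo on partial failure. */
static int
move_all(void)
{
	int i, j, rv;

	for (i = 0; i < N; i++) {
		if ((rv = mov_ring(i, 1)) != 0) {
			for (j = 0; j < i; j++)
				(void) mov_ring(j, 0);	/* move back */
			return (rv);
		}
	}
	return (0);
}

int
main(void)
{
	return (move_all() == EIO ? 0 : 1);
}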
6127 6162
6128 6163 void
6129 6164 mac_group_add_client(mac_group_t *grp, mac_client_impl_t *mcip)
6130 6165 {
6131 6166 mac_grp_client_t *mgcp;
6132 6167
6133 6168 for (mgcp = grp->mrg_clients; mgcp != NULL; mgcp = mgcp->mgc_next) {
6134 6169 if (mgcp->mgc_client == mcip)
6135 6170 break;
6136 6171 }
6137 6172
6138 6173 VERIFY(mgcp == NULL);
6139 6174
6140 6175 mgcp = kmem_zalloc(sizeof (mac_grp_client_t), KM_SLEEP);
6141 6176 mgcp->mgc_client = mcip;
6142 6177 mgcp->mgc_next = grp->mrg_clients;
6143 6178 grp->mrg_clients = mgcp;
6144 6179
6145 6180 }
6146 6181
6147 6182 void
6148 6183 mac_group_remove_client(mac_group_t *grp, mac_client_impl_t *mcip)
6149 6184 {
6150 6185 mac_grp_client_t *mgcp, **pprev;
6151 6186
6152 6187 for (pprev = &grp->mrg_clients, mgcp = *pprev; mgcp != NULL;
6153 6188 pprev = &mgcp->mgc_next, mgcp = *pprev) {
6154 6189 if (mgcp->mgc_client == mcip)
6155 6190 break;
6156 6191 }
6157 6192
6158 6193 ASSERT(mgcp != NULL);
6159 6194
6160 6195 *pprev = mgcp->mgc_next;
6161 6196 kmem_free(mgcp, sizeof (mac_grp_client_t));
6162 6197 }
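
mac_group_remove_client() uses a pointer-to-pointer cursor: *pprev always
names the link to rewrite, so the head needs no special case (contrast the
explicit head check in mac_free_macaddr()). A standalone sketch of the idiom
with an illustrative node type:

#include <assert.h>
#include <stdlib.h>

typedef struct node {
	struct node *n_next;
} node_t;

/* Unlink 'target' via a pointer-to-pointer cursor and free it. */
static void
unlink_node(node_t **headp, node_t *target)
{
	node_t **pprev, *np;

	for (pprev = headp; (np = *pprev) != NULL; pprev = &np->n_next) {
		if (np == target)
			break;
	}
	assert(np != NULL);
	*pprev = np->n_next;
	free(np);
}

int
main(void)
{
	node_t *head = NULL;

	for (int i = 0; i < 2; i++) {
		node_t *np = calloc(1, sizeof (*np));

		np->n_next = head;
		head = np;
	}
	unlink_node(&head, head->n_next);	/* interior/tail node */
	unlink_node(&head, head);		/* head, no special case */
	assert(head == NULL);
	return (0);
}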
6163 6198
6164 6199 /*
6165 6200 * mac_reserve_rx_group()
6166 6201 *
6167 6202 * Finds an available group and exclusively reserves it for a client.
6168 6203 * The group is chosen to suit the flow's resource controls (bandwidth and
6169 6204 * fanout requirements) and the address type.
6170 6205  * If the requestor is the primary MAC then return the group with the
6171 6206 * largest number of rings, otherwise the default ring when available.
6172 6207 */
6173 6208 mac_group_t *
6174 6209 mac_reserve_rx_group(mac_client_impl_t *mcip, uint8_t *mac_addr, boolean_t move)
6175 6210 {
6176 6211 mac_share_handle_t share = mcip->mci_share;
6177 6212 mac_impl_t *mip = mcip->mci_mip;
6178 6213 mac_group_t *grp = NULL;
6179 6214 int i;
6180 6215 int err = 0;
6181 6216 mac_address_t *map;
6182 6217 mac_resource_props_t *mrp = MCIP_RESOURCE_PROPS(mcip);
6183 6218 int nrings;
6184 6219 int donor_grp_rcnt;
6185 6220 boolean_t need_exclgrp = B_FALSE;
6186 6221 int need_rings = 0;
6187 6222 mac_group_t *candidate_grp = NULL;
6188 6223 mac_client_impl_t *gclient;
6189 6224 mac_resource_props_t *gmrp;
6190 6225 mac_group_t *donorgrp = NULL;
6191 6226 boolean_t rxhw = mrp->mrp_mask & MRP_RX_RINGS;
6192 6227 boolean_t unspec = mrp->mrp_mask & MRP_RXRINGS_UNSPEC;
6193 6228 boolean_t isprimary;
6194 6229
6195 6230 ASSERT(MAC_PERIM_HELD((mac_handle_t)mip));
6196 6231
6197 6232 isprimary = mcip->mci_flent->fe_type & FLOW_PRIMARY_MAC;
6198 6233
6199 6234 /*
6200 6235 * Check if a group already has this mac address (case of VLANs)
6201 6236 * unless we are moving this MAC client from one group to another.
6202 6237 */
6203 6238 if (!move && (map = mac_find_macaddr(mip, mac_addr)) != NULL) {
6204 6239 if (map->ma_group != NULL)
6205 6240 return (map->ma_group);
6206 6241 }
6207 6242 if (mip->mi_rx_groups == NULL || mip->mi_rx_group_count == 0)
6208 6243 return (NULL);
6209 6244 /*
6210 6245 * If exclusive open, return NULL which will enable the
6211 6246 * caller to use the default group.
6212 6247 */
6213 6248 if (mcip->mci_state_flags & MCIS_EXCLUSIVE)
6214 6249 return (NULL);
6215 6250
6216 6251 /* For dynamic groups default unspecified to 1 */
6217 6252 if (rxhw && unspec &&
6218 6253 mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) {
6219 6254 mrp->mrp_nrxrings = 1;
6220 6255 }
6221 6256 /*
6222 6257 	 * For static grouping we only allow specifying rings=0 or
6223 6258 	 * leaving the ring count unspecified.
6224 6259 */
6225 6260 if (rxhw && mrp->mrp_nrxrings > 0 &&
6226 6261 mip->mi_rx_group_type == MAC_GROUP_TYPE_STATIC) {
6227 6262 return (NULL);
6228 6263 }
6229 6264 if (rxhw) {
6230 6265 /*
6231 6266 		 * We have explicitly asked for a group (with nrxrings
6232 6267 		 * rings, or an unspecified count).
6233 6268 */
6234 6269 if (unspec || mrp->mrp_nrxrings > 0) {
6235 6270 need_exclgrp = B_TRUE;
6236 6271 need_rings = mrp->mrp_nrxrings;
6237 6272 } else if (mrp->mrp_nrxrings == 0) {
6238 6273 /*
6239 6274 * We have asked for a software group.
6240 6275 */
6241 6276 return (NULL);
6242 6277 }
6243 6278 } else if (isprimary && mip->mi_nactiveclients == 1 &&
6244 6279 mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) {
6245 6280 /*
6246 6281 * If the primary is the only active client on this
6247 6282 * mip and we have not asked for any rings, we give
6248 6283 * it the default group so that the primary gets to
6249 6284 * use all the rings.
6250 6285 */
6251 6286 return (NULL);
6252 6287 }
6253 6288
6254 6289 /* The group that can donate rings */
6255 6290 donorgrp = mip->mi_rx_donor_grp;
6256 6291
6257 6292 /*
6258 6293 * The number of rings that the default group can donate.
6259 6294 * We need to leave at least one ring.
6260 6295 */
6261 6296 donor_grp_rcnt = donorgrp->mrg_cur_count - 1;
6262 6297
6263 6298 /*
6264 6299 * Try to exclusively reserve a RX group.
6265 6300 *
6266 6301 * For flows requiring HW_DEFAULT_RING (unicast flow of the primary
6267 6302  * client), try to reserve a non-default RX group and give
6268 6303 * it all the rings from the donor group, except the default ring
6269 6304 *
6270 6305 * For flows requiring HW_RING (unicast flow of other clients), try
6271 6306 * to reserve non-default RX group with the specified number of
6272 6307 * rings, if available.
6273 6308 *
6274 6309 * For flows that have not asked for software or hardware ring,
6275 6310 * try to reserve a non-default group with 1 ring, if available.
6276 6311 */
6277 6312 for (i = 1; i < mip->mi_rx_group_count; i++) {
6278 6313 grp = &mip->mi_rx_groups[i];
6279 6314
6280 6315 DTRACE_PROBE3(rx__group__trying, char *, mip->mi_name,
6281 6316 int, grp->mrg_index, mac_group_state_t, grp->mrg_state);
6282 6317
6283 6318 /*
6284 6319 * Check if this group could be a candidate group for
6285 6320 * eviction if we need a group for this MAC client,
6286 6321 * but there aren't any. A candidate group is one
6287 6322 * that didn't ask for an exclusive group, but got
6288 6323 * one and it has enough rings (combined with what
6289 6324 * the donor group can donate) for the new MAC
6290 6325 		 * client.
6291 6326 */
6292 6327 if (grp->mrg_state >= MAC_GROUP_STATE_RESERVED) {
6293 6328 /*
6294 6329 * If the primary/donor group is not the default
6295 6330 * group, don't bother looking for a candidate group.
6296 6331 * If we don't have enough rings we will check
6297 6332 * if the primary group can be vacated.
6298 6333 */
6299 6334 if (candidate_grp == NULL &&
6300 6335 donorgrp == MAC_DEFAULT_RX_GROUP(mip)) {
6301 6336 ASSERT(!MAC_GROUP_NO_CLIENT(grp));
6302 6337 gclient = MAC_GROUP_ONLY_CLIENT(grp);
6303 6338 if (gclient == NULL)
6304 6339 gclient = mac_get_grp_primary(grp);
6305 6340 ASSERT(gclient != NULL);
6306 6341 gmrp = MCIP_RESOURCE_PROPS(gclient);
6307 6342 if (gclient->mci_share == NULL &&
6308 6343 (gmrp->mrp_mask & MRP_RX_RINGS) == 0 &&
6309 6344 (unspec ||
6310 6345 (grp->mrg_cur_count + donor_grp_rcnt >=
6311 6346 need_rings))) {
6312 6347 candidate_grp = grp;
6313 6348 }
6314 6349 }
6315 6350 continue;
6316 6351 }
6317 6352 /*
6318 6353 * This group could already be SHARED by other multicast
6319 6354 		 * flows on this client. In that case, the group is
6320 6355 		 * shared and has already been started.
6321 6356 */
6322 6357 ASSERT(grp->mrg_state != MAC_GROUP_STATE_UNINIT);
6323 6358
6324 6359 if ((grp->mrg_state == MAC_GROUP_STATE_REGISTERED) &&
6325 6360 (mac_start_group(grp) != 0)) {
6326 6361 continue;
6327 6362 }
6328 6363
6329 6364 if (mip->mi_rx_group_type != MAC_GROUP_TYPE_DYNAMIC)
6330 6365 break;
6331 6366 ASSERT(grp->mrg_cur_count == 0);
6332 6367
6333 6368 /*
6334 6369 * Populate the group. Rings should be taken
6335 6370 * from the donor group.
6336 6371 */
6337 6372 nrings = rxhw ? need_rings : isprimary ? donor_grp_rcnt: 1;
6338 6373
6339 6374 /*
6340 6375 * If the donor group can't donate, let's just walk and
6341 6376 * see if someone can vacate a group, so that we have
6342 6377 * enough rings for this, unless we already have
6343 6378 		 * enough rings for this, unless we have already
6344 6379 		 * identified a candidate group.
6345 6380 if (nrings <= donor_grp_rcnt) {
6346 6381 err = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_RX,
6347 6382 donorgrp, grp, share, nrings);
6348 6383 if (err == 0) {
6349 6384 /*
6350 6385 * For a share i_mac_group_allocate_rings gets
6351 6386 * the rings from the driver, let's populate
6352 6387 * the property for the client now.
6353 6388 */
6354 6389 if (share != NULL) {
6355 6390 mac_client_set_rings(
6356 6391 (mac_client_handle_t)mcip,
6357 6392 grp->mrg_cur_count, -1);
6358 6393 }
6359 6394 if (mac_is_primary_client(mcip) && !rxhw)
6360 6395 mip->mi_rx_donor_grp = grp;
6361 6396 break;
6362 6397 }
6363 6398 }
6364 6399
6365 6400 DTRACE_PROBE3(rx__group__reserve__alloc__rings, char *,
6366 6401 mip->mi_name, int, grp->mrg_index, int, err);
6367 6402
6368 6403 /*
6369 6404 * It's a dynamic group but the grouping operation
6370 6405 * failed.
6371 6406 */
6372 6407 mac_stop_group(grp);
6373 6408 }
6374 6409 /* We didn't find an exclusive group for this MAC client */
6375 6410 if (i >= mip->mi_rx_group_count) {
6376 6411
6377 6412 if (!need_exclgrp)
6378 6413 return (NULL);
6379 6414
6380 6415 /*
6381 6416 * If we found a candidate group then we switch the
6382 6417 * MAC client from the candidate_group to the default
6383 6418 * group and give the group to this MAC client. If
6384 6419 * we didn't find a candidate_group, check if the
6385 6420 * primary is in its own group and if it can make way
6386 6421 * for this MAC client.
6387 6422 */
6388 6423 if (candidate_grp == NULL &&
6389 6424 donorgrp != MAC_DEFAULT_RX_GROUP(mip) &&
6390 6425 donorgrp->mrg_cur_count >= need_rings) {
6391 6426 candidate_grp = donorgrp;
6392 6427 }
6393 6428 if (candidate_grp != NULL) {
6394 6429 boolean_t prim_grp = B_FALSE;
6395 6430
6396 6431 /*
6397 6432 * Switch the MAC client from the candidate group
6398 6433 			 * to the default group. If this group was the
6399 6434 * donor group, then after the switch we need
6400 6435 * to update the donor group too.
6401 6436 */
6402 6437 grp = candidate_grp;
6403 6438 gclient = MAC_GROUP_ONLY_CLIENT(grp);
6404 6439 if (gclient == NULL)
6405 6440 gclient = mac_get_grp_primary(grp);
6406 6441 if (grp == mip->mi_rx_donor_grp)
6407 6442 prim_grp = B_TRUE;
6408 6443 if (mac_rx_switch_group(gclient, grp,
6409 6444 MAC_DEFAULT_RX_GROUP(mip)) != 0) {
6410 6445 return (NULL);
6411 6446 }
6412 6447 if (prim_grp) {
6413 6448 mip->mi_rx_donor_grp =
6414 6449 MAC_DEFAULT_RX_GROUP(mip);
6415 6450 donorgrp = MAC_DEFAULT_RX_GROUP(mip);
6416 6451 }
6417 6452
6418 6453
6419 6454 /*
6420 6455 			 * Now give this group, with the required rings,
6421 6456 * to this MAC client.
6422 6457 */
6423 6458 ASSERT(grp->mrg_state == MAC_GROUP_STATE_REGISTERED);
6424 6459 if (mac_start_group(grp) != 0)
6425 6460 return (NULL);
6426 6461
6427 6462 if (mip->mi_rx_group_type != MAC_GROUP_TYPE_DYNAMIC)
6428 6463 return (grp);
6429 6464
6430 6465 donor_grp_rcnt = donorgrp->mrg_cur_count - 1;
6431 6466 ASSERT(grp->mrg_cur_count == 0);
6432 6467 ASSERT(donor_grp_rcnt >= need_rings);
6433 6468 err = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_RX,
6434 6469 donorgrp, grp, share, need_rings);
6435 6470 if (err == 0) {
6436 6471 /*
6437 6472 * For a share i_mac_group_allocate_rings gets
6438 6473 * the rings from the driver, let's populate
6439 6474 * the property for the client now.
6440 6475 */
6441 6476 if (share != NULL) {
6442 6477 mac_client_set_rings(
6443 6478 (mac_client_handle_t)mcip,
6444 6479 grp->mrg_cur_count, -1);
6445 6480 }
6446 6481 DTRACE_PROBE2(rx__group__reserved,
6447 6482 char *, mip->mi_name, int, grp->mrg_index);
6448 6483 return (grp);
6449 6484 }
6450 6485 DTRACE_PROBE3(rx__group__reserve__alloc__rings, char *,
6451 6486 mip->mi_name, int, grp->mrg_index, int, err);
6452 6487 mac_stop_group(grp);
6453 6488 }
6454 6489 return (NULL);
6455 6490 }
6456 6491 ASSERT(grp != NULL);
6457 6492
6458 6493 DTRACE_PROBE2(rx__group__reserved,
6459 6494 char *, mip->mi_name, int, grp->mrg_index);
6460 6495 return (grp);
6461 6496 }
6462 6497
6463 6498 /*
6464 6499  * mac_release_rx_group()
6465 6500 *
6466 6501 * This is called when there are no clients left for the group.
6467 6502 * The group is stopped and marked MAC_GROUP_STATE_REGISTERED,
6468 6503 * and if it is a non default group, the shares are removed and
6469 6504 * all rings are assigned back to default group.
6470 6505 */
6471 6506 void
6472 6507 mac_release_rx_group(mac_client_impl_t *mcip, mac_group_t *group)
6473 6508 {
6474 6509 mac_impl_t *mip = mcip->mci_mip;
6475 6510 mac_ring_t *ring;
6476 6511
6477 6512 ASSERT(group != MAC_DEFAULT_RX_GROUP(mip));
6478 6513
6479 6514 if (mip->mi_rx_donor_grp == group)
6480 6515 mip->mi_rx_donor_grp = MAC_DEFAULT_RX_GROUP(mip);
6481 6516
6482 6517 /*
6483 6518 * This is the case where there are no clients left. Any
6484 6519 	 * SRS etc. on this group have also been quiesced.
6485 6520 */
6486 6521 for (ring = group->mrg_rings; ring != NULL; ring = ring->mr_next) {
6487 6522 if (ring->mr_classify_type == MAC_HW_CLASSIFIER) {
6488 6523 ASSERT(group->mrg_state == MAC_GROUP_STATE_RESERVED);
6489 6524 /*
6490 6525 * Remove the SRS associated with the HW ring.
6491 6526 * As a result, polling will be disabled.
6492 6527 */
6493 6528 ring->mr_srs = NULL;
6494 6529 }
6495 6530 ASSERT(group->mrg_state < MAC_GROUP_STATE_RESERVED ||
6496 6531 ring->mr_state == MR_INUSE);
6497 6532 if (ring->mr_state == MR_INUSE) {
6498 6533 mac_stop_ring(ring);
6499 6534 ring->mr_flag = 0;
6500 6535 }
6501 6536 }
6502 6537
6503 6538 /* remove group from share */
6504 6539 if (mcip->mci_share != NULL) {
6505 6540 mip->mi_share_capab.ms_sremove(mcip->mci_share,
6506 6541 group->mrg_driver);
6507 6542 }
6508 6543
6509 6544 if (mip->mi_rx_group_type == MAC_GROUP_TYPE_DYNAMIC) {
6510 6545 mac_ring_t *ring;
6511 6546
6512 6547 /*
6513 6548 * Rings were dynamically allocated to group.
6514 6549 * Move rings back to default group.
6515 6550 */
6516 6551 while ((ring = group->mrg_rings) != NULL) {
6517 6552 (void) mac_group_mov_ring(mip, mip->mi_rx_donor_grp,
6518 6553 ring);
6519 6554 }
6520 6555 }
6521 6556 mac_stop_group(group);
6522 6557 /*
6523 6558 * Possible improvement: See if we can assign the group just released
6524 6559 	 * to another client of the mip.
6525 6560 */
6526 6561 }
6527 6562
6528 6563 /*
6529 6564 * When we move the primary's mac address between groups, we need to also
6530 6565  * take all the clients sharing the same mac address along with it (VLANs).
6531 6566 * We remove the mac address for such clients from the group after quiescing
6532 6567 * them. When we add the mac address we restart the client. Note that
6533 6568 * the primary's mac address is removed from the group after all the
6534 6569 * other clients sharing the address are removed. Similarly, the primary's
6535 6570  * mac address is added before all the other clients' mac addresses are
6536 6571 * added. While grp is the group where the clients reside, tgrp is
6537 6572 * the group where the addresses have to be added.
6538 6573 */
6539 6574 static void
6540 6575 mac_rx_move_macaddr_prim(mac_client_impl_t *mcip, mac_group_t *grp,
6541 6576 mac_group_t *tgrp, uint8_t *maddr, boolean_t add)
6542 6577 {
6543 6578 mac_impl_t *mip = mcip->mci_mip;
6544 6579 mac_grp_client_t *mgcp = grp->mrg_clients;
6545 6580 mac_client_impl_t *gmcip;
6546 6581 boolean_t prim;
6547 6582
6548 6583 prim = (mcip->mci_state_flags & MCIS_UNICAST_HW) != 0;
6549 6584
6550 6585 /*
6551 6586 * If the clients are in a non-default group, we just have to
6552 6587 	 * walk the group's client list. If they are in the default group
6553 6588 	 * (which will be shared by other clients as well), we need to
6554 6589 * check if the unicast address matches mcip's unicast.
6555 6590 */
6556 6591 while (mgcp != NULL) {
6557 6592 gmcip = mgcp->mgc_client;
6558 6593 if (gmcip != mcip &&
6559 6594 (grp != MAC_DEFAULT_RX_GROUP(mip) ||
6560 6595 mcip->mci_unicast == gmcip->mci_unicast)) {
6561 6596 if (!add) {
6562 6597 mac_rx_client_quiesce(
6563 6598 (mac_client_handle_t)gmcip);
6564 6599 (void) mac_remove_macaddr(mcip->mci_unicast);
6565 6600 } else {
6566 6601 (void) mac_add_macaddr(mip, tgrp, maddr, prim);
6567 6602 mac_rx_client_restart(
6568 6603 (mac_client_handle_t)gmcip);
6569 6604 }
6570 6605 }
6571 6606 mgcp = mgcp->mgc_next;
6572 6607 }
6573 6608 }
6574 6609
6575 6610
6576 6611 /*
6577 6612 * Move the MAC address from fgrp to tgrp. If this is the primary client,
6578 6613 * we need to take any VLANs etc. together too.
6579 6614 */
6580 6615 static int
6581 6616 mac_rx_move_macaddr(mac_client_impl_t *mcip, mac_group_t *fgrp,
6582 6617 mac_group_t *tgrp)
6583 6618 {
6584 6619 mac_impl_t *mip = mcip->mci_mip;
6585 6620 uint8_t maddr[MAXMACADDRLEN];
6586 6621 int err = 0;
6587 6622 boolean_t prim;
6588 6623 boolean_t multiclnt = B_FALSE;
6589 6624
6590 6625 mac_rx_client_quiesce((mac_client_handle_t)mcip);
6591 6626 ASSERT(mcip->mci_unicast != NULL);
6592 6627 bcopy(mcip->mci_unicast->ma_addr, maddr, mcip->mci_unicast->ma_len);
6593 6628
6594 6629 prim = (mcip->mci_state_flags & MCIS_UNICAST_HW) != 0;
6595 6630 if (mcip->mci_unicast->ma_nusers > 1) {
6596 6631 mac_rx_move_macaddr_prim(mcip, fgrp, NULL, maddr, B_FALSE);
6597 6632 multiclnt = B_TRUE;
6598 6633 }
6599 6634 ASSERT(mcip->mci_unicast->ma_nusers == 1);
6600 6635 err = mac_remove_macaddr(mcip->mci_unicast);
6601 6636 if (err != 0) {
6602 6637 mac_rx_client_restart((mac_client_handle_t)mcip);
6603 6638 if (multiclnt) {
6604 6639 mac_rx_move_macaddr_prim(mcip, fgrp, fgrp, maddr,
6605 6640 B_TRUE);
6606 6641 }
6607 6642 return (err);
6608 6643 }
6609 6644 /*
6610 6645 	 * Program the H/W Classifier first; if this fails, we need
6611 6646 	 * not proceed with the rest.
6612 6647 */
6613 6648 if ((err = mac_add_macaddr(mip, tgrp, maddr, prim)) != 0) {
6614 6649 		/* Revert the H/W Classifier */
6615 6650 if ((err = mac_add_macaddr(mip, fgrp, maddr, prim)) != 0) {
6616 6651 /*
6617 6652 			 * This should not fail now since it worked earlier;
6618 6653 * should we panic?
6619 6654 */
6620 6655 cmn_err(CE_WARN,
6621 6656 "mac_rx_switch_group: switching %p back"
6622 6657 " to group %p failed!!", (void *)mcip,
6623 6658 (void *)fgrp);
6624 6659 }
6625 6660 mac_rx_client_restart((mac_client_handle_t)mcip);
6626 6661 if (multiclnt) {
6627 6662 mac_rx_move_macaddr_prim(mcip, fgrp, fgrp, maddr,
6628 6663 B_TRUE);
6629 6664 }
6630 6665 return (err);
6631 6666 }
6632 6667 mcip->mci_unicast = mac_find_macaddr(mip, maddr);
6633 6668 mac_rx_client_restart((mac_client_handle_t)mcip);
6634 6669 if (multiclnt)
6635 6670 mac_rx_move_macaddr_prim(mcip, fgrp, tgrp, maddr, B_TRUE);
6636 6671 return (err);
6637 6672 }
6638 6673
6639 6674 /*
6640 6675 * Switch the MAC client from one group to another. This means we need
6641 6676 * to remove the MAC address from the group, remove the MAC client,
6642 6677 * teardown the SRSs and revert the group state. Then, we add the client
6643 6678 * to the destination group, set the SRSs, and add the MAC address to the
6644 6679 * group.
6645 6680 */
6646 6681 int
6647 6682 mac_rx_switch_group(mac_client_impl_t *mcip, mac_group_t *fgrp,
6648 6683 mac_group_t *tgrp)
6649 6684 {
6650 6685 int err;
6651 6686 mac_group_state_t next_state;
6652 6687 mac_client_impl_t *group_only_mcip;
6653 6688 mac_client_impl_t *gmcip;
6654 6689 mac_impl_t *mip = mcip->mci_mip;
6655 6690 mac_grp_client_t *mgcp;
6656 6691
6657 6692 ASSERT(fgrp == mcip->mci_flent->fe_rx_ring_group);
6658 6693
6659 6694 if ((err = mac_rx_move_macaddr(mcip, fgrp, tgrp)) != 0)
6660 6695 return (err);
6661 6696
6662 6697 /*
6663 6698 * The group might be reserved, but SRSs may not be set up, e.g.
6664 6699 * primary and its vlans using a reserved group.
6665 6700 */
6666 6701 if (fgrp->mrg_state == MAC_GROUP_STATE_RESERVED &&
6667 6702 MAC_GROUP_ONLY_CLIENT(fgrp) != NULL) {
6668 6703 mac_rx_srs_group_teardown(mcip->mci_flent, B_TRUE);
6669 6704 }
6670 6705 if (fgrp != MAC_DEFAULT_RX_GROUP(mip)) {
6671 6706 mgcp = fgrp->mrg_clients;
6672 6707 while (mgcp != NULL) {
6673 6708 gmcip = mgcp->mgc_client;
6674 6709 mgcp = mgcp->mgc_next;
6675 6710 mac_group_remove_client(fgrp, gmcip);
6676 6711 mac_group_add_client(tgrp, gmcip);
6677 6712 gmcip->mci_flent->fe_rx_ring_group = tgrp;
6678 6713 }
6679 6714 mac_release_rx_group(mcip, fgrp);
6680 6715 ASSERT(MAC_GROUP_NO_CLIENT(fgrp));
6681 6716 mac_set_group_state(fgrp, MAC_GROUP_STATE_REGISTERED);
6682 6717 } else {
6683 6718 mac_group_remove_client(fgrp, mcip);
6684 6719 mac_group_add_client(tgrp, mcip);
6685 6720 mcip->mci_flent->fe_rx_ring_group = tgrp;
6686 6721 /*
6687 6722 * If there are other clients (VLANs) sharing this address
6688 6723 * we should be here only for the primary.
6689 6724 */
6690 6725 if (mcip->mci_unicast->ma_nusers > 1) {
6691 6726 /*
6692 6727 * We need to move all the clients that are using
6693 6728 * this h/w address.
6694 6729 */
6695 6730 mgcp = fgrp->mrg_clients;
6696 6731 while (mgcp != NULL) {
6697 6732 gmcip = mgcp->mgc_client;
6698 6733 mgcp = mgcp->mgc_next;
6699 6734 if (mcip->mci_unicast == gmcip->mci_unicast) {
6700 6735 mac_group_remove_client(fgrp, gmcip);
6701 6736 mac_group_add_client(tgrp, gmcip);
6702 6737 gmcip->mci_flent->fe_rx_ring_group =
6703 6738 tgrp;
6704 6739 }
6705 6740 }
6706 6741 }
6707 6742 /*
6708 6743 * The default group will still take the multicast,
6709 6744 * broadcast traffic etc., so it won't go to
6710 6745 * MAC_GROUP_STATE_REGISTERED.
6711 6746 */
6712 6747 if (fgrp->mrg_state == MAC_GROUP_STATE_RESERVED)
6713 6748 mac_rx_group_unmark(fgrp, MR_CONDEMNED);
6714 6749 mac_set_group_state(fgrp, MAC_GROUP_STATE_SHARED);
6715 6750 }
6716 6751 next_state = mac_group_next_state(tgrp, &group_only_mcip,
6717 6752 MAC_DEFAULT_RX_GROUP(mip), B_TRUE);
6718 6753 mac_set_group_state(tgrp, next_state);
6719 6754 /*
6720 6755 * If the destination group is reserved, setup the SRSs etc.
6721 6756 */
6722 6757 if (tgrp->mrg_state == MAC_GROUP_STATE_RESERVED) {
6723 6758 mac_rx_srs_group_setup(mcip, mcip->mci_flent, SRST_LINK);
6724 6759 mac_fanout_setup(mcip, mcip->mci_flent,
6725 6760 MCIP_RESOURCE_PROPS(mcip), mac_rx_deliver, mcip, NULL,
6726 6761 NULL);
6727 6762 mac_rx_group_unmark(tgrp, MR_INCIPIENT);
6728 6763 } else {
6729 6764 mac_rx_switch_grp_to_sw(tgrp);
6730 6765 }
6731 6766 return (0);
6732 6767 }
6733 6768
6734 6769 /*
6735 6770 * Reserves a TX group for the specified share. Invoked by mac_tx_srs_setup()
6736 6771 * when a share was allocated to the client.
6737 6772 */
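/*
 * Group selection, in outline: a VLAN of the primary reuses the
 * primary's existing TX group; otherwise we look for a free group,
 * and if none is free but an exclusive group is required, we may
 * move the client of a "candidate" group (one that never asked for
 * TX rings) to the default group and hand its group to this client.
 */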
6738 6773 mac_group_t *
6739 6774 mac_reserve_tx_group(mac_client_impl_t *mcip, boolean_t move)
6740 6775 {
6741 6776 mac_impl_t *mip = mcip->mci_mip;
6742 6777 mac_group_t *grp = NULL;
6743 6778 int rv;
6744 6779 int i;
6745 6780 int err;
6746 6781 mac_group_t *defgrp;
6747 6782 mac_share_handle_t share = mcip->mci_share;
6748 6783 mac_resource_props_t *mrp = MCIP_RESOURCE_PROPS(mcip);
6749 6784 int nrings;
6750 6785 int defnrings;
6751 6786 boolean_t need_exclgrp = B_FALSE;
6752 6787 int need_rings = 0;
6753 6788 mac_group_t *candidate_grp = NULL;
6754 6789 mac_client_impl_t *gclient;
6755 6790 mac_resource_props_t *gmrp;
6756 6791 boolean_t txhw = mrp->mrp_mask & MRP_TX_RINGS;
6757 6792 boolean_t unspec = mrp->mrp_mask & MRP_TXRINGS_UNSPEC;
6758 6793 boolean_t isprimary;
6759 6794
6760 6795 isprimary = mcip->mci_flent->fe_type & FLOW_PRIMARY_MAC;
6761 6796 /*
6762 6797 * When we come here for a VLAN on the primary (dladm create-vlan),
6763 6798 * we need to pair it along with the primary (to keep it consistent
6764 6799 * with the RX side). So, we check if the primary is already assigned
6765 6800 * to a group and return the group if so. The other way is also
6766 6801 * true, i.e. the VLAN is already created and now we are plumbing
6767 6802 * the primary.
6768 6803 */
6769 6804 if (!move && isprimary) {
6770 6805 for (gclient = mip->mi_clients_list; gclient != NULL;
6771 6806 gclient = gclient->mci_client_next) {
6772 6807 if (gclient->mci_flent->fe_type & FLOW_PRIMARY_MAC &&
6773 6808 gclient->mci_flent->fe_tx_ring_group != NULL) {
6774 6809 return (gclient->mci_flent->fe_tx_ring_group);
6775 6810 }
6776 6811 }
6777 6812 }
6778 6813
6779 6814 if (mip->mi_tx_groups == NULL || mip->mi_tx_group_count == 0)
6780 6815 return (NULL);
6781 6816
6782 6817 /* For dynamic groups, default unspec to 1 */
6783 6818 if (txhw && unspec &&
6784 6819 mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC) {
6785 6820 mrp->mrp_ntxrings = 1;
6786 6821 }
6787 6822 /*
6788 6823 	 * For static grouping, we only allow specifying rings=0 or
6789 6824 	 * leaving the count unspecified.
6790 6825 */
6791 6826 if (txhw && mrp->mrp_ntxrings > 0 &&
6792 6827 mip->mi_tx_group_type == MAC_GROUP_TYPE_STATIC) {
6793 6828 return (NULL);
6794 6829 }
6795 6830
6796 6831 if (txhw) {
6797 6832 /*
6798 6833 * We have explicitly asked for a group (with ntxrings,
6799 6834 * if unspec).
6800 6835 */
6801 6836 if (unspec || mrp->mrp_ntxrings > 0) {
6802 6837 need_exclgrp = B_TRUE;
6803 6838 need_rings = mrp->mrp_ntxrings;
6804 6839 } else if (mrp->mrp_ntxrings == 0) {
6805 6840 /*
6806 6841 * We have asked for a software group.
6807 6842 */
6808 6843 return (NULL);
6809 6844 }
6810 6845 }
6811 6846 defgrp = MAC_DEFAULT_TX_GROUP(mip);
6812 6847 /*
6813 6848 * The number of rings that the default group can donate.
6814 6849 * We need to leave at least one ring - the default ring - in
6815 6850 * this group.
6816 6851 */
6817 6852 defnrings = defgrp->mrg_cur_count - 1;
6818 6853
6819 6854 /*
6820 6855 * Primary gets default group unless explicitly told not
6821 6856 * to (i.e. rings > 0).
6822 6857 */
6823 6858 if (isprimary && !need_exclgrp)
6824 6859 return (NULL);
6825 6860
6826 6861 nrings = (mrp->mrp_mask & MRP_TX_RINGS) != 0 ? mrp->mrp_ntxrings : 1;
6827 6862 for (i = 0; i < mip->mi_tx_group_count; i++) {
6828 6863 grp = &mip->mi_tx_groups[i];
6829 6864 if ((grp->mrg_state == MAC_GROUP_STATE_RESERVED) ||
6830 6865 (grp->mrg_state == MAC_GROUP_STATE_UNINIT)) {
6831 6866 /*
6832 6867 * Select a candidate for replacement if we don't
6833 6868 * get an exclusive group. A candidate group is one
6834 6869 * that didn't ask for an exclusive group, but got
6835 6870 * one and it has enough rings (combined with what
6836 6871 * the default group can donate) for the new MAC
6837 6872 * client.
6838 6873 */
6839 6874 if (grp->mrg_state == MAC_GROUP_STATE_RESERVED &&
6840 6875 candidate_grp == NULL) {
6841 6876 gclient = MAC_GROUP_ONLY_CLIENT(grp);
6842 6877 if (gclient == NULL)
6843 6878 gclient = mac_get_grp_primary(grp);
6844 6879 gmrp = MCIP_RESOURCE_PROPS(gclient);
6845 6880 if (gclient->mci_share == NULL &&
6846 6881 (gmrp->mrp_mask & MRP_TX_RINGS) == 0 &&
6847 6882 (unspec ||
6848 6883 (grp->mrg_cur_count + defnrings) >=
6849 6884 need_rings)) {
6850 6885 candidate_grp = grp;
6851 6886 }
6852 6887 }
6853 6888 continue;
6854 6889 }
6855 6890 /*
6856 6891 		 * If the default can't donate, let's just walk and
6857 6892 * see if someone can vacate a group, so that we have
6858 6893 * enough rings for this.
6859 6894 */
6860 6895 if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC ||
6861 6896 nrings <= defnrings) {
6862 6897 if (grp->mrg_state == MAC_GROUP_STATE_REGISTERED) {
6863 6898 rv = mac_start_group(grp);
6864 6899 ASSERT(rv == 0);
6865 6900 }
6866 6901 break;
6867 6902 }
6868 6903 }
6869 6904
6870 6905 /* The default group */
6871 6906 if (i >= mip->mi_tx_group_count) {
6872 6907 /*
6873 6908 * If we need an exclusive group and have identified a
6874 6909 * candidate group we switch the MAC client from the
6875 6910 * candidate group to the default group and give the
6876 6911 * candidate group to this client.
6877 6912 */
6878 6913 if (need_exclgrp && candidate_grp != NULL) {
6879 6914 /*
6880 6915 * Switch the MAC client from the candidate group
6881 6916 * to the default group.
6882 6917 */
6883 6918 grp = candidate_grp;
6884 6919 gclient = MAC_GROUP_ONLY_CLIENT(grp);
6885 6920 if (gclient == NULL)
6886 6921 gclient = mac_get_grp_primary(grp);
6887 6922 mac_tx_client_quiesce((mac_client_handle_t)gclient);
6888 6923 mac_tx_switch_group(gclient, grp, defgrp);
6889 6924 mac_tx_client_restart((mac_client_handle_t)gclient);
6890 6925
6891 6926 /*
6892 6927 * Give the candidate group with the specified number
6893 6928 * of rings to this MAC client.
6894 6929 */
6895 6930 ASSERT(grp->mrg_state == MAC_GROUP_STATE_REGISTERED);
6896 6931 rv = mac_start_group(grp);
6897 6932 ASSERT(rv == 0);
6898 6933
6899 6934 if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC)
6900 6935 return (grp);
6901 6936
6902 6937 ASSERT(grp->mrg_cur_count == 0);
6903 6938 ASSERT(defgrp->mrg_cur_count > need_rings);
6904 6939
6905 6940 err = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_TX,
6906 6941 defgrp, grp, share, need_rings);
6907 6942 if (err == 0) {
6908 6943 /*
6909 6944 * For a share i_mac_group_allocate_rings gets
6910 6945 * the rings from the driver, let's populate
6911 6946 * the property for the client now.
6912 6947 */
6913 6948 if (share != NULL) {
6914 6949 mac_client_set_rings(
6915 6950 (mac_client_handle_t)mcip, -1,
6916 6951 grp->mrg_cur_count);
6917 6952 }
6918 6953 mip->mi_tx_group_free--;
6919 6954 return (grp);
6920 6955 }
6921 6956 DTRACE_PROBE3(tx__group__reserve__alloc__rings, char *,
6922 6957 mip->mi_name, int, grp->mrg_index, int, err);
6923 6958 mac_stop_group(grp);
6924 6959 }
6925 6960 return (NULL);
6926 6961 }
6927 6962 /*
6928 6963 * We got an exclusive group, but it is not dynamic.
6929 6964 */
6930 6965 if (mip->mi_tx_group_type != MAC_GROUP_TYPE_DYNAMIC) {
6931 6966 mip->mi_tx_group_free--;
6932 6967 return (grp);
6933 6968 }
6934 6969
6935 6970 rv = i_mac_group_allocate_rings(mip, MAC_RING_TYPE_TX, defgrp, grp,
6936 6971 share, nrings);
6937 6972 if (rv != 0) {
6938 6973 DTRACE_PROBE3(tx__group__reserve__alloc__rings,
6939 6974 char *, mip->mi_name, int, grp->mrg_index, int, rv);
6940 6975 mac_stop_group(grp);
6941 6976 return (NULL);
6942 6977 }
6943 6978 /*
6944 6979 * For a share i_mac_group_allocate_rings gets the rings from the
6945 6980 * driver, let's populate the property for the client now.
6946 6981 */
6947 6982 if (share != NULL) {
6948 6983 mac_client_set_rings((mac_client_handle_t)mcip, -1,
6949 6984 grp->mrg_cur_count);
6950 6985 }
6951 6986 mip->mi_tx_group_free--;
6952 6987 return (grp);
6953 6988 }
6954 6989
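/*
 * mac_release_tx_group()
 *
 * Release a TX group back to the pool when its client goes away:
 * delete the soft rings hanging off the client's TX SRS (or clear the
 * single TX ring in the non-fanout case), remove the group from the
 * client's share if there is one, and for dynamic grouping return all
 * rings to the default group before stopping the group.
 */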
6955 6990 void
6956 6991 mac_release_tx_group(mac_client_impl_t *mcip, mac_group_t *grp)
6957 6992 {
6958 6993 mac_impl_t *mip = mcip->mci_mip;
6959 6994 mac_share_handle_t share = mcip->mci_share;
6960 6995 mac_ring_t *ring;
6961 6996 mac_soft_ring_set_t *srs = MCIP_TX_SRS(mcip);
6962 6997 mac_group_t *defgrp;
6963 6998
6964 6999 defgrp = MAC_DEFAULT_TX_GROUP(mip);
6965 7000 if (srs != NULL) {
6966 7001 if (srs->srs_soft_ring_count > 0) {
6967 7002 for (ring = grp->mrg_rings; ring != NULL;
6968 7003 ring = ring->mr_next) {
6969 7004 ASSERT(mac_tx_srs_ring_present(srs, ring));
6970 7005 mac_tx_invoke_callbacks(mcip,
6971 7006 (mac_tx_cookie_t)
6972 7007 mac_tx_srs_get_soft_ring(srs, ring));
6973 7008 mac_tx_srs_del_ring(srs, ring);
6974 7009 }
6975 7010 } else {
6976 7011 ASSERT(srs->srs_tx.st_arg2 != NULL);
6977 7012 srs->srs_tx.st_arg2 = NULL;
6978 7013 mac_srs_stat_delete(srs);
6979 7014 }
6980 7015 }
6981 7016 if (share != NULL)
6982 7017 mip->mi_share_capab.ms_sremove(share, grp->mrg_driver);
6983 7018
6984 7019 /* move the ring back to the pool */
6985 7020 if (mip->mi_tx_group_type == MAC_GROUP_TYPE_DYNAMIC) {
6986 7021 while ((ring = grp->mrg_rings) != NULL)
6987 7022 (void) mac_group_mov_ring(mip, defgrp, ring);
6988 7023 }
6989 7024 mac_stop_group(grp);
6990 7025 mip->mi_tx_group_free++;
6991 7026 }
6992 7027
6993 7028 /*
6994 7029  * Disassociate a MAC client from a group, i.e., go through the rings in the
6995 7030 * group and delete all the soft rings tied to them.
6996 7031 */
6997 7032 static void
6998 7033 mac_tx_dismantle_soft_rings(mac_group_t *fgrp, flow_entry_t *flent)
6999 7034 {
7000 7035 mac_client_impl_t *mcip = flent->fe_mcip;
7001 7036 mac_soft_ring_set_t *tx_srs;
7002 7037 mac_srs_tx_t *tx;
7003 7038 mac_ring_t *ring;
7004 7039
7005 7040 tx_srs = flent->fe_tx_srs;
7006 7041 tx = &tx_srs->srs_tx;
7007 7042
7008 7043 	/* Single-ring case: we haven't created any soft rings */
7009 7044 if (tx->st_mode == SRS_TX_BW || tx->st_mode == SRS_TX_SERIALIZE ||
7010 7045 tx->st_mode == SRS_TX_DEFAULT) {
7011 7046 tx->st_arg2 = NULL;
7012 7047 mac_srs_stat_delete(tx_srs);
7013 7048 /* Fanout case, where we have to dismantle the soft rings */
7014 7049 } else {
7015 7050 for (ring = fgrp->mrg_rings; ring != NULL;
7016 7051 ring = ring->mr_next) {
7017 7052 ASSERT(mac_tx_srs_ring_present(tx_srs, ring));
7018 7053 mac_tx_invoke_callbacks(mcip,
7019 7054 (mac_tx_cookie_t)mac_tx_srs_get_soft_ring(tx_srs,
7020 7055 ring));
7021 7056 mac_tx_srs_del_ring(tx_srs, ring);
7022 7057 }
7023 7058 ASSERT(tx->st_arg2 == NULL);
7024 7059 }
7025 7060 }
7026 7061
7027 7062 /*
7028 7063 * Switch the MAC client from one group to another. This means we need
7029 7064  * to remove the MAC client, tear down the SRSs and revert the group state.
7030 7065  * Then, we add the client to the destination group, set the SRSs, etc.
7031 7066 */
7032 7067 void
7033 7068 mac_tx_switch_group(mac_client_impl_t *mcip, mac_group_t *fgrp,
7034 7069 mac_group_t *tgrp)
7035 7070 {
7036 7071 mac_client_impl_t *group_only_mcip;
7037 7072 mac_impl_t *mip = mcip->mci_mip;
7038 7073 flow_entry_t *flent = mcip->mci_flent;
7039 7074 mac_group_t *defgrp;
7040 7075 mac_grp_client_t *mgcp;
7041 7076 mac_client_impl_t *gmcip;
7042 7077 flow_entry_t *gflent;
7043 7078
7044 7079 defgrp = MAC_DEFAULT_TX_GROUP(mip);
7045 7080 ASSERT(fgrp == flent->fe_tx_ring_group);
7046 7081
7047 7082 if (fgrp == defgrp) {
7048 7083 /*
7049 7084 * If this is the primary we need to find any VLANs on
7050 7085 * the primary and move them too.
7051 7086 */
7052 7087 mac_group_remove_client(fgrp, mcip);
7053 7088 mac_tx_dismantle_soft_rings(fgrp, flent);
7054 7089 if (mcip->mci_unicast->ma_nusers > 1) {
7055 7090 mgcp = fgrp->mrg_clients;
7056 7091 while (mgcp != NULL) {
7057 7092 gmcip = mgcp->mgc_client;
7058 7093 mgcp = mgcp->mgc_next;
7059 7094 if (mcip->mci_unicast != gmcip->mci_unicast)
7060 7095 continue;
7061 7096 mac_tx_client_quiesce(
7062 7097 (mac_client_handle_t)gmcip);
7063 7098
7064 7099 gflent = gmcip->mci_flent;
7065 7100 mac_group_remove_client(fgrp, gmcip);
7066 7101 mac_tx_dismantle_soft_rings(fgrp, gflent);
7067 7102
7068 7103 mac_group_add_client(tgrp, gmcip);
7069 7104 gflent->fe_tx_ring_group = tgrp;
7070 7105 /* We could directly set this to SHARED */
7071 7106 tgrp->mrg_state = mac_group_next_state(tgrp,
7072 7107 &group_only_mcip, defgrp, B_FALSE);
7073 7108
7074 7109 mac_tx_srs_group_setup(gmcip, gflent,
7075 7110 SRST_LINK);
7076 7111 mac_fanout_setup(gmcip, gflent,
7077 7112 MCIP_RESOURCE_PROPS(gmcip), mac_rx_deliver,
7078 7113 gmcip, NULL, NULL);
7079 7114
7080 7115 mac_tx_client_restart(
7081 7116 (mac_client_handle_t)gmcip);
7082 7117 }
7083 7118 }
7084 7119 if (MAC_GROUP_NO_CLIENT(fgrp)) {
7085 7120 mac_ring_t *ring;
7086 7121 int cnt;
7087 7122 int ringcnt;
7088 7123
7089 7124 fgrp->mrg_state = MAC_GROUP_STATE_REGISTERED;
7090 7125 /*
7091 7126 * Additionally, we also need to stop all
7092 7127 * the rings in the default group, except
7093 7128 			 * the default ring. This is because the
7094 7129 			 * group won't be released, since it is
7095 7130 			 * the default group, so its rings won't
7096 7131 			 * be stopped otherwise.
7097 7132 */
7098 7133 ringcnt = fgrp->mrg_cur_count;
7099 7134 ring = fgrp->mrg_rings;
7100 7135 for (cnt = 0; cnt < ringcnt; cnt++) {
7101 7136 if (ring->mr_state == MR_INUSE &&
7102 7137 ring !=
7103 7138 (mac_ring_t *)mip->mi_default_tx_ring) {
7104 7139 mac_stop_ring(ring);
7105 7140 ring->mr_flag = 0;
7106 7141 }
7107 7142 ring = ring->mr_next;
7108 7143 }
7109 7144 } else if (MAC_GROUP_ONLY_CLIENT(fgrp) != NULL) {
7110 7145 fgrp->mrg_state = MAC_GROUP_STATE_RESERVED;
7111 7146 } else {
7112 7147 ASSERT(fgrp->mrg_state == MAC_GROUP_STATE_SHARED);
7113 7148 }
7114 7149 } else {
7115 7150 /*
7116 7151 * We could have VLANs sharing the non-default group with
7117 7152 * the primary.
7118 7153 */
7119 7154 mgcp = fgrp->mrg_clients;
7120 7155 while (mgcp != NULL) {
7121 7156 gmcip = mgcp->mgc_client;
7122 7157 mgcp = mgcp->mgc_next;
7123 7158 if (gmcip == mcip)
7124 7159 continue;
7125 7160 mac_tx_client_quiesce((mac_client_handle_t)gmcip);
7126 7161 gflent = gmcip->mci_flent;
7127 7162
7128 7163 mac_group_remove_client(fgrp, gmcip);
7129 7164 mac_tx_dismantle_soft_rings(fgrp, gflent);
7130 7165
7131 7166 mac_group_add_client(tgrp, gmcip);
7132 7167 gflent->fe_tx_ring_group = tgrp;
7133 7168 /* We could directly set this to SHARED */
7134 7169 tgrp->mrg_state = mac_group_next_state(tgrp,
7135 7170 &group_only_mcip, defgrp, B_FALSE);
7136 7171 mac_tx_srs_group_setup(gmcip, gflent, SRST_LINK);
7137 7172 mac_fanout_setup(gmcip, gflent,
7138 7173 MCIP_RESOURCE_PROPS(gmcip), mac_rx_deliver,
7139 7174 gmcip, NULL, NULL);
7140 7175
7141 7176 mac_tx_client_restart((mac_client_handle_t)gmcip);
7142 7177 }
7143 7178 mac_group_remove_client(fgrp, mcip);
7144 7179 mac_release_tx_group(mcip, fgrp);
7145 7180 fgrp->mrg_state = MAC_GROUP_STATE_REGISTERED;
7146 7181 }
7147 7182
7148 7183 /* Add it to the tgroup */
7149 7184 mac_group_add_client(tgrp, mcip);
7150 7185 flent->fe_tx_ring_group = tgrp;
7151 7186 tgrp->mrg_state = mac_group_next_state(tgrp, &group_only_mcip,
7152 7187 defgrp, B_FALSE);
7153 7188
7154 7189 mac_tx_srs_group_setup(mcip, flent, SRST_LINK);
7155 7190 mac_fanout_setup(mcip, flent, MCIP_RESOURCE_PROPS(mcip),
7156 7191 mac_rx_deliver, mcip, NULL, NULL);
7157 7192 }
7158 7193
7159 7194 /*
7160 7195 * This is a 1-time control path activity initiated by the client (IP).
7161 7196 * The mac perimeter protects against other simultaneous control activities,
7162 7197 * for example an ioctl that attempts to change the degree of fanout and
7163 7198 * increase or decrease the number of softrings associated with this Tx SRS.
7164 7199 */
7165 7200 static mac_tx_notify_cb_t *
7166 7201 mac_client_tx_notify_add(mac_client_impl_t *mcip,
7167 7202 mac_tx_notify_t notify, void *arg)
7168 7203 {
7169 7204 mac_cb_info_t *mcbi;
7170 7205 mac_tx_notify_cb_t *mtnfp;
7171 7206
7172 7207 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));
7173 7208
7174 7209 mtnfp = kmem_zalloc(sizeof (mac_tx_notify_cb_t), KM_SLEEP);
7175 7210 mtnfp->mtnf_fn = notify;
7176 7211 mtnfp->mtnf_arg = arg;
7177 7212 mtnfp->mtnf_link.mcb_objp = mtnfp;
7178 7213 mtnfp->mtnf_link.mcb_objsize = sizeof (mac_tx_notify_cb_t);
7179 7214 mtnfp->mtnf_link.mcb_flags = MCB_TX_NOTIFY_CB_T;
7180 7215
7181 7216 mcbi = &mcip->mci_tx_notify_cb_info;
7182 7217 mutex_enter(mcbi->mcbi_lockp);
7183 7218 mac_callback_add(mcbi, &mcip->mci_tx_notify_cb_list, &mtnfp->mtnf_link);
7184 7219 mutex_exit(mcbi->mcbi_lockp);
7185 7220 return (mtnfp);
7186 7221 }
7187 7222
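/*
 * Remove a previously added TX notify callback. If the callback entry
 * is still in use by a walker, mac_callback_remove() fails and we wait
 * for the last reference to be dropped before it is freed.
 */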
7188 7223 static void
7189 7224 mac_client_tx_notify_remove(mac_client_impl_t *mcip, mac_tx_notify_cb_t *mtnfp)
7190 7225 {
7191 7226 mac_cb_info_t *mcbi;
7192 7227 mac_cb_t **cblist;
7193 7228
7194 7229 ASSERT(MAC_PERIM_HELD((mac_handle_t)mcip->mci_mip));
7195 7230
7196 7231 if (!mac_callback_find(&mcip->mci_tx_notify_cb_info,
7197 7232 &mcip->mci_tx_notify_cb_list, &mtnfp->mtnf_link)) {
7198 7233 cmn_err(CE_WARN,
7199 7234 "mac_client_tx_notify_remove: callback not "
7200 7235 "found, mcip 0x%p mtnfp 0x%p", (void *)mcip, (void *)mtnfp);
7201 7236 return;
7202 7237 }
7203 7238
7204 7239 mcbi = &mcip->mci_tx_notify_cb_info;
7205 7240 cblist = &mcip->mci_tx_notify_cb_list;
7206 7241 mutex_enter(mcbi->mcbi_lockp);
7207 7242 if (mac_callback_remove(mcbi, cblist, &mtnfp->mtnf_link))
7208 7243 kmem_free(mtnfp, sizeof (mac_tx_notify_cb_t));
7209 7244 else
7210 7245 mac_callback_remove_wait(&mcip->mci_tx_notify_cb_info);
7211 7246 mutex_exit(mcbi->mcbi_lockp);
7212 7247 }
7213 7248
7214 7249 /*
7215 7250 * mac_client_tx_notify():
7216 7251  * Call to add or remove a flow-control callback routine.
7217 7252 */
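/*
 * A NULL callb_func means removal; 'ptr' is then the handle returned
 * by the earlier add call. Illustrative usage (names hypothetical):
 *
 *	mac_tx_notify_handle_t h;
 *
 *	h = mac_client_tx_notify(mch, my_notify_fn, my_arg);
 *	...
 *	(void) mac_client_tx_notify(mch, NULL, h);
 */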
7218 7253 mac_tx_notify_handle_t
7219 7254 mac_client_tx_notify(mac_client_handle_t mch, mac_tx_notify_t callb_func,
7220 7255 void *ptr)
7221 7256 {
7222 7257 mac_client_impl_t *mcip = (mac_client_impl_t *)mch;
7223 7258 mac_tx_notify_cb_t *mtnfp = NULL;
7224 7259
7225 7260 i_mac_perim_enter(mcip->mci_mip);
7226 7261
7227 7262 if (callb_func != NULL) {
7228 7263 /* Add a notify callback */
7229 7264 mtnfp = mac_client_tx_notify_add(mcip, callb_func, ptr);
7230 7265 } else {
7231 7266 mac_client_tx_notify_remove(mcip, (mac_tx_notify_cb_t *)ptr);
7232 7267 }
7233 7268 i_mac_perim_exit(mcip->mci_mip);
7234 7269
7235 7270 return ((mac_tx_notify_handle_t)mtnfp);
7236 7271 }
7237 7272
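/*
 * Install the bridging callback vectors. These globals hold the TX,
 * RX, reference, and link-state entry points supplied by the bridge
 * module; they may be cleared again by passing NULL vectors.
 */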
7238 7273 void
7239 7274 mac_bridge_vectors(mac_bridge_tx_t txf, mac_bridge_rx_t rxf,
7240 7275 mac_bridge_ref_t reff, mac_bridge_ls_t lsf)
7241 7276 {
7242 7277 mac_bridge_tx_cb = txf;
7243 7278 mac_bridge_rx_cb = rxf;
7244 7279 mac_bridge_ref_cb = reff;
7245 7280 mac_bridge_ls_cb = lsf;
7246 7281 }
7247 7282
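/*
 * Attach a bridge to this link; fails with EBUSY if a bridge link is
 * already attached. On success, polling is turned off (mirroring the
 * B_TRUE re-enable in mac_bridge_clear()) and the capabilities are
 * re-evaluated.
 */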
7248 7283 int
7249 7284 mac_bridge_set(mac_handle_t mh, mac_handle_t link)
7250 7285 {
7251 7286 mac_impl_t *mip = (mac_impl_t *)mh;
7252 7287 int retv;
7253 7288
7254 7289 mutex_enter(&mip->mi_bridge_lock);
7255 7290 if (mip->mi_bridge_link == NULL) {
7256 7291 mip->mi_bridge_link = link;
7257 7292 retv = 0;
7258 7293 } else {
7259 7294 retv = EBUSY;
7260 7295 }
7261 7296 mutex_exit(&mip->mi_bridge_lock);
7262 7297 if (retv == 0) {
7263 7298 mac_poll_state_change(mh, B_FALSE);
7264 7299 mac_capab_update(mh);
7265 7300 }
7266 7301 return (retv);
7267 7302 }
7268 7303
7269 7304 /*
7270 7305 * Disable bridging on the indicated link.
7271 7306 */
7272 7307 void
7273 7308 mac_bridge_clear(mac_handle_t mh, mac_handle_t link)
7274 7309 {
7275 7310 mac_impl_t *mip = (mac_impl_t *)mh;
7276 7311
7277 7312 mutex_enter(&mip->mi_bridge_lock);
7278 7313 ASSERT(mip->mi_bridge_link == link);
7279 7314 mip->mi_bridge_link = NULL;
7280 7315 mutex_exit(&mip->mi_bridge_lock);
7281 7316 mac_poll_state_change(mh, B_TRUE);
7282 7317 mac_capab_update(mh);
7283 7318 }
7284 7319
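/*
 * Flag this MAC with MIS_NO_ACTIVE so that subsequent attempts to use
 * it in active mode can be refused.
 */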
7285 7320 void
7286 7321 mac_no_active(mac_handle_t mh)
7287 7322 {
7288 7323 mac_impl_t *mip = (mac_impl_t *)mh;
7289 7324
7290 7325 i_mac_perim_enter(mip);
7291 7326 mip->mi_state_flags |= MIS_NO_ACTIVE;
7292 7327 i_mac_perim_exit(mip);
7293 7328 }
7294 7329
7295 7330 /*
7296 7331 * Walk the primary VLAN clients whenever the primary's rings property
7297 7332  * changes, and update the mac_resource_props_t for each VLAN client.
7298 7333  * We need to do this since we don't support setting these properties
7299 7334  * on the primary's VLAN clients, but the VLAN clients have to
7300 7335  * follow the primary w.r.t. the rings property.
7301 7336 */
7302 7337 void
7303 7338 mac_set_prim_vlan_rings(mac_impl_t *mip, mac_resource_props_t *mrp)
7304 7339 {
7305 7340 mac_client_impl_t *vmcip;
7306 7341 mac_resource_props_t *vmrp;
7307 7342
7308 7343 for (vmcip = mip->mi_clients_list; vmcip != NULL;
7309 7344 vmcip = vmcip->mci_client_next) {
7310 7345 if (!(vmcip->mci_flent->fe_type & FLOW_PRIMARY_MAC) ||
7311 7346 mac_client_vid((mac_client_handle_t)vmcip) ==
7312 7347 VLAN_ID_NONE) {
7313 7348 continue;
7314 7349 }
7315 7350 vmrp = MCIP_RESOURCE_PROPS(vmcip);
7316 7351
7317 7352 vmrp->mrp_nrxrings = mrp->mrp_nrxrings;
7318 7353 if (mrp->mrp_mask & MRP_RX_RINGS)
7319 7354 vmrp->mrp_mask |= MRP_RX_RINGS;
7320 7355 else if (vmrp->mrp_mask & MRP_RX_RINGS)
7321 7356 vmrp->mrp_mask &= ~MRP_RX_RINGS;
7322 7357
7323 7358 vmrp->mrp_ntxrings = mrp->mrp_ntxrings;
7324 7359 if (mrp->mrp_mask & MRP_TX_RINGS)
7325 7360 vmrp->mrp_mask |= MRP_TX_RINGS;
7326 7361 else if (vmrp->mrp_mask & MRP_TX_RINGS)
7327 7362 vmrp->mrp_mask &= ~MRP_TX_RINGS;
7328 7363
7329 7364 if (mrp->mrp_mask & MRP_RXRINGS_UNSPEC)
7330 7365 vmrp->mrp_mask |= MRP_RXRINGS_UNSPEC;
7331 7366 else
7332 7367 vmrp->mrp_mask &= ~MRP_RXRINGS_UNSPEC;
7333 7368
7334 7369 if (mrp->mrp_mask & MRP_TXRINGS_UNSPEC)
7335 7370 vmrp->mrp_mask |= MRP_TXRINGS_UNSPEC;
7336 7371 else
7337 7372 vmrp->mrp_mask &= ~MRP_TXRINGS_UNSPEC;
7338 7373 }
7339 7374 }
7340 7375
7341 7376 /*
7342 7377 * We are adding or removing ring(s) from a group. The source for taking
7343 7378 * rings is the default group. The destination for giving rings back is
7344 7379 * the default group.
7345 7380 */
7346 7381 int
7347 7382 mac_group_ring_modify(mac_client_impl_t *mcip, mac_group_t *group,
7348 7383 mac_group_t *defgrp)
7349 7384 {
7350 7385 mac_resource_props_t *mrp = MCIP_RESOURCE_PROPS(mcip);
7351 7386 uint_t modify;
7352 7387 int count;
7353 7388 mac_ring_t *ring;
7354 7389 mac_ring_t *next;
7355 7390 mac_impl_t *mip = mcip->mci_mip;
7356 7391 mac_ring_t **rings;
7357 7392 uint_t ringcnt;
7358 7393 int i = 0;
7359 7394 boolean_t rx_group = group->mrg_type == MAC_RING_TYPE_RX;
7360 7395 int start;
7361 7396 int end;
7362 7397 mac_group_t *tgrp;
7363 7398 int j;
7364 7399 int rv = 0;
7365 7400
7366 7401 /*
7367 7402 * If we are asked for just a group, we give 1 ring, else
7368 7403 * the specified number of rings.
7369 7404 */
7370 7405 if (rx_group) {
7371 7406 ringcnt = (mrp->mrp_mask & MRP_RXRINGS_UNSPEC) ? 1:
7372 7407 mrp->mrp_nrxrings;
7373 7408 } else {
7374 7409 ringcnt = (mrp->mrp_mask & MRP_TXRINGS_UNSPEC) ? 1:
7375 7410 mrp->mrp_ntxrings;
7376 7411 }
7377 7412
7378 7413 /* don't allow modifying rings for a share for now. */
7379 7414 ASSERT(mcip->mci_share == NULL);
7380 7415
7381 7416 if (ringcnt == group->mrg_cur_count)
7382 7417 return (0);
7383 7418
7384 7419 if (group->mrg_cur_count > ringcnt) {
7385 7420 modify = group->mrg_cur_count - ringcnt;
7386 7421 if (rx_group) {
7387 7422 if (mip->mi_rx_donor_grp == group) {
7388 7423 ASSERT(mac_is_primary_client(mcip));
7389 7424 mip->mi_rx_donor_grp = defgrp;
7390 7425 } else {
7391 7426 defgrp = mip->mi_rx_donor_grp;
7392 7427 }
7393 7428 }
7394 7429 ring = group->mrg_rings;
7395 7430 rings = kmem_alloc(modify * sizeof (mac_ring_handle_t),
7396 7431 KM_SLEEP);
7397 7432 j = 0;
7398 7433 for (count = 0; count < modify; count++) {
7399 7434 next = ring->mr_next;
7400 7435 rv = mac_group_mov_ring(mip, defgrp, ring);
7401 7436 if (rv != 0) {
7402 7437 /* cleanup on failure */
7403 7438 for (j = 0; j < count; j++) {
7404 7439 (void) mac_group_mov_ring(mip, group,
7405 7440 rings[j]);
7406 7441 }
7407 7442 break;
7408 7443 }
7409 7444 rings[j++] = ring;
7410 7445 ring = next;
7411 7446 }
7412 7447 kmem_free(rings, modify * sizeof (mac_ring_handle_t));
7413 7448 return (rv);
7414 7449 }
7415 7450 if (ringcnt >= MAX_RINGS_PER_GROUP)
7416 7451 return (EINVAL);
7417 7452
7418 7453 modify = ringcnt - group->mrg_cur_count;
7419 7454
7420 7455 if (rx_group) {
7421 7456 if (group != mip->mi_rx_donor_grp)
7422 7457 defgrp = mip->mi_rx_donor_grp;
7423 7458 else
7424 7459 /*
7425 7460 * This is the donor group with all the remaining
7426 7461 * rings. Default group now gets to be the donor
7427 7462 */
7428 7463 mip->mi_rx_donor_grp = defgrp;
7429 7464 start = 1;
7430 7465 end = mip->mi_rx_group_count;
7431 7466 } else {
7432 7467 start = 0;
7433 7468 end = mip->mi_tx_group_count - 1;
7434 7469 }
7435 7470 /*
7436 7471 	 * If the default doesn't have any rings, let's see if we can
7437 7472 	 * take rings given to an h/w client that doesn't need them.
7438 7473 * For now, we just see if there is any one client that can donate
7439 7474 * all the required rings.
7440 7475 */
7441 7476 if (defgrp->mrg_cur_count < (modify + 1)) {
7442 7477 for (i = start; i < end; i++) {
7443 7478 if (rx_group) {
7444 7479 tgrp = &mip->mi_rx_groups[i];
7445 7480 if (tgrp == group || tgrp->mrg_state <
7446 7481 MAC_GROUP_STATE_RESERVED) {
7447 7482 continue;
7448 7483 }
7449 7484 mcip = MAC_GROUP_ONLY_CLIENT(tgrp);
7450 7485 if (mcip == NULL)
7451 7486 mcip = mac_get_grp_primary(tgrp);
7452 7487 ASSERT(mcip != NULL);
7453 7488 mrp = MCIP_RESOURCE_PROPS(mcip);
7454 7489 if ((mrp->mrp_mask & MRP_RX_RINGS) != 0)
7455 7490 continue;
7456 7491 if ((tgrp->mrg_cur_count +
7457 7492 defgrp->mrg_cur_count) < (modify + 1)) {
7458 7493 continue;
7459 7494 }
7460 7495 if (mac_rx_switch_group(mcip, tgrp,
7461 7496 defgrp) != 0) {
7462 7497 return (ENOSPC);
7463 7498 }
7464 7499 } else {
7465 7500 tgrp = &mip->mi_tx_groups[i];
7466 7501 if (tgrp == group || tgrp->mrg_state <
7467 7502 MAC_GROUP_STATE_RESERVED) {
7468 7503 continue;
7469 7504 }
7470 7505 mcip = MAC_GROUP_ONLY_CLIENT(tgrp);
7471 7506 if (mcip == NULL)
7472 7507 mcip = mac_get_grp_primary(tgrp);
7473 7508 mrp = MCIP_RESOURCE_PROPS(mcip);
7474 7509 if ((mrp->mrp_mask & MRP_TX_RINGS) != 0)
7475 7510 continue;
7476 7511 if ((tgrp->mrg_cur_count +
7477 7512 defgrp->mrg_cur_count) < (modify + 1)) {
7478 7513 continue;
7479 7514 }
7480 7515 /* OK, we can switch this to s/w */
7481 7516 mac_tx_client_quiesce(
7482 7517 (mac_client_handle_t)mcip);
7483 7518 mac_tx_switch_group(mcip, tgrp, defgrp);
7484 7519 mac_tx_client_restart(
7485 7520 (mac_client_handle_t)mcip);
7486 7521 }
7487 7522 }
7488 7523 if (defgrp->mrg_cur_count < (modify + 1))
7489 7524 return (ENOSPC);
7490 7525 }
7491 7526 if ((rv = i_mac_group_allocate_rings(mip, group->mrg_type, defgrp,
7492 7527 group, mcip->mci_share, modify)) != 0) {
7493 7528 return (rv);
7494 7529 }
7495 7530 return (0);
7496 7531 }
7497 7532
7498 7533 /*
7499 7534 * Given the poolname in mac_resource_props, find the cpupart
7500 7535 * that is associated with this pool. The cpupart will be used
7501 7536 * later for finding the cpus to be bound to the networking threads.
7502 7537 *
7503 7538  * use_default is set to B_TRUE if pools are enabled and pool_default
7504 7539 * is returned. This avoids a 2nd lookup to set the poolname
7505 7540 * for pool-effective.
7506 7541 *
7507 7542 * returns:
7508 7543 *
7509 7544 * NULL - pools are disabled or if the 'cpus' property is set.
7510 7545 * cpupart of pool_default - pools are enabled and the pool
7511 7546  * is not available or poolname is blank.
7512 7547 * cpupart of named pool - pools are enabled and the pool
7513 7548 * is available.
7514 7549 */
7515 7550 cpupart_t *
7516 7551 mac_pset_find(mac_resource_props_t *mrp, boolean_t *use_default)
7517 7552 {
7518 7553 pool_t *pool;
7519 7554 cpupart_t *cpupart;
7520 7555
7521 7556 *use_default = B_FALSE;
7522 7557
7523 7558 /* CPUs property is set */
7524 7559 if (mrp->mrp_mask & MRP_CPUS)
7525 7560 return (NULL);
7526 7561
7527 7562 ASSERT(pool_lock_held());
7528 7563
7529 7564 /* Pools are disabled, no pset */
7530 7565 if (pool_state == POOL_DISABLED)
7531 7566 return (NULL);
7532 7567
7533 7568 /* Pools property is set */
7534 7569 if (mrp->mrp_mask & MRP_POOL) {
7535 7570 if ((pool = pool_lookup_pool_by_name(mrp->mrp_pool)) == NULL) {
7536 7571 /* Pool not found */
7537 7572 DTRACE_PROBE1(mac_pset_find_no_pool, char *,
7538 7573 mrp->mrp_pool);
7539 7574 *use_default = B_TRUE;
7540 7575 pool = pool_default;
7541 7576 }
7542 7577 /* Pools property is not set */
7543 7578 } else {
7544 7579 *use_default = B_TRUE;
7545 7580 pool = pool_default;
7546 7581 }
7547 7582
7548 7583 /* Find the CPU pset that corresponds to the pool */
7549 7584 mutex_enter(&cpu_lock);
7550 7585 if ((cpupart = cpupart_find(pool->pool_pset->pset_id)) == NULL) {
7551 7586 DTRACE_PROBE1(mac_find_pset_no_pset, psetid_t,
7552 7587 pool->pool_pset->pset_id);
7553 7588 }
7554 7589 mutex_exit(&cpu_lock);
7555 7590
7556 7591 return (cpupart);
7557 7592 }
7558 7593
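/*
 * Record the effective pool binding for the client: the named pool
 * from mrp, "pool_default" when we fell back to the default, or no
 * pool at all when no cpupart applies.
 */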
7559 7594 void
7560 7595 mac_set_pool_effective(boolean_t use_default, cpupart_t *cpupart,
7561 7596 mac_resource_props_t *mrp, mac_resource_props_t *emrp)
7562 7597 {
7563 7598 ASSERT(pool_lock_held());
7564 7599
7565 7600 if (cpupart != NULL) {
7566 7601 emrp->mrp_mask |= MRP_POOL;
7567 7602 if (use_default) {
7568 7603 (void) strcpy(emrp->mrp_pool,
7569 7604 "pool_default");
7570 7605 } else {
7571 7606 ASSERT(strlen(mrp->mrp_pool) != 0);
7572 7607 (void) strcpy(emrp->mrp_pool,
7573 7608 mrp->mrp_pool);
7574 7609 }
7575 7610 } else {
7576 7611 emrp->mrp_mask &= ~MRP_POOL;
7577 7612 bzero(emrp->mrp_pool, MAXPATHLEN);
7578 7613 }
7579 7614 }
7580 7615
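/*
 * Argument passed from the pool event callback to the mod_hash
 * walker: the affected pool's name and the kind of event.
 */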
7581 7616 struct mac_pool_arg {
7582 7617 char mpa_poolname[MAXPATHLEN];
7583 7618 pool_event_t mpa_what;
7584 7619 };
7585 7620
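/*
 * Per-mac walker invoked on pool events: for each client of this MAC,
 * decide whether its pool binding must be recomputed (pool enabled or
 * changed) or cleared (pools disabled), then rebind the network
 * threads and update the effective resource properties accordingly.
 */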
7586 7621 /*ARGSUSED*/
7587 7622 static uint_t
7588 7623 mac_pool_link_update(mod_hash_key_t key, mod_hash_val_t *val, void *arg)
7589 7624 {
7590 7625 struct mac_pool_arg *mpa = arg;
7591 7626 mac_impl_t *mip = (mac_impl_t *)val;
7592 7627 mac_client_impl_t *mcip;
7593 7628 mac_resource_props_t *mrp, *emrp;
7594 7629 boolean_t pool_update = B_FALSE;
7595 7630 boolean_t pool_clear = B_FALSE;
7596 7631 boolean_t use_default = B_FALSE;
7597 7632 cpupart_t *cpupart = NULL;
7598 7633
7599 7634 mrp = kmem_zalloc(sizeof (*mrp), KM_SLEEP);
7600 7635 i_mac_perim_enter(mip);
7601 7636 for (mcip = mip->mi_clients_list; mcip != NULL;
7602 7637 mcip = mcip->mci_client_next) {
7603 7638 pool_update = B_FALSE;
7604 7639 pool_clear = B_FALSE;
7605 7640 use_default = B_FALSE;
7606 7641 mac_client_get_resources((mac_client_handle_t)mcip, mrp);
7607 7642 emrp = MCIP_EFFECTIVE_PROPS(mcip);
7608 7643
7609 7644 /*
7610 7645 * When pools are enabled
7611 7646 */
7612 7647 if ((mpa->mpa_what == POOL_E_ENABLE) &&
7613 7648 ((mrp->mrp_mask & MRP_CPUS) == 0)) {
7614 7649 mrp->mrp_mask |= MRP_POOL;
7615 7650 pool_update = B_TRUE;
7616 7651 }
7617 7652
7618 7653 /*
7619 7654 * When pools are disabled
7620 7655 */
7621 7656 if ((mpa->mpa_what == POOL_E_DISABLE) &&
7622 7657 ((mrp->mrp_mask & MRP_CPUS) == 0)) {
7623 7658 mrp->mrp_mask |= MRP_POOL;
7624 7659 pool_clear = B_TRUE;
7625 7660 }
7626 7661
7627 7662 /*
7628 7663 * Look for links with the pool property set and the poolname
7629 7664 * matching the one which is changing.
7630 7665 */
7631 7666 if (strcmp(mrp->mrp_pool, mpa->mpa_poolname) == 0) {
7632 7667 /*
7633 7668 * The pool associated with the link has changed.
7634 7669 */
7635 7670 if (mpa->mpa_what == POOL_E_CHANGE) {
7636 7671 mrp->mrp_mask |= MRP_POOL;
7637 7672 pool_update = B_TRUE;
7638 7673 }
7639 7674 }
7640 7675
7641 7676 /*
7642 7677 * This link is associated with pool_default and
7643 7678 * pool_default has changed.
7644 7679 */
7645 7680 if ((mpa->mpa_what == POOL_E_CHANGE) &&
7646 7681 (strcmp(emrp->mrp_pool, "pool_default") == 0) &&
7647 7682 (strcmp(mpa->mpa_poolname, "pool_default") == 0)) {
7648 7683 mrp->mrp_mask |= MRP_POOL;
7649 7684 pool_update = B_TRUE;
7650 7685 }
7651 7686
7652 7687 /*
7653 7688 * Get new list of cpus for the pool, bind network
7654 7689 * threads to new list of cpus and update resources.
7655 7690 */
7656 7691 if (pool_update) {
7657 7692 if (MCIP_DATAPATH_SETUP(mcip)) {
7658 7693 pool_lock();
7659 7694 cpupart = mac_pset_find(mrp, &use_default);
7660 7695 mac_fanout_setup(mcip, mcip->mci_flent, mrp,
7661 7696 mac_rx_deliver, mcip, NULL, cpupart);
7662 7697 mac_set_pool_effective(use_default, cpupart,
7663 7698 mrp, emrp);
7664 7699 pool_unlock();
7665 7700 }
7666 7701 mac_update_resources(mrp, MCIP_RESOURCE_PROPS(mcip),
7667 7702 B_FALSE);
7668 7703 }
7669 7704
7670 7705 /*
7671 7706 * Clear the effective pool and bind network threads
7672 7707 * to any available CPU.
7673 7708 */
7674 7709 if (pool_clear) {
7675 7710 if (MCIP_DATAPATH_SETUP(mcip)) {
7676 7711 emrp->mrp_mask &= ~MRP_POOL;
7677 7712 bzero(emrp->mrp_pool, MAXPATHLEN);
7678 7713 mac_fanout_setup(mcip, mcip->mci_flent, mrp,
7679 7714 mac_rx_deliver, mcip, NULL, NULL);
7680 7715 }
7681 7716 mac_update_resources(mrp, MCIP_RESOURCE_PROPS(mcip),
7682 7717 B_FALSE);
7683 7718 }
7684 7719 }
7685 7720 i_mac_perim_exit(mip);
7686 7721 kmem_free(mrp, sizeof (*mrp));
7687 7722 return (MH_WALK_CONTINUE);
7688 7723 }
7689 7724
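/*
 * Walk all registered MACs and apply the pool change described by
 * 'arg' (a struct mac_pool_arg, freed here).
 */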
7690 7725 static void
7691 7726 mac_pool_update(void *arg)
7692 7727 {
7693 7728 mod_hash_walk(i_mac_impl_hash, mac_pool_link_update, arg);
7694 7729 kmem_free(arg, sizeof (struct mac_pool_arg));
7695 7730 }
7696 7731
7697 7732 /*
7698 7733 * Callback function to be executed when a noteworthy pool event
7699 7734 * takes place.
7700 7735 */
7701 7736 /* ARGSUSED */
7702 7737 static void
7703 7738 mac_pool_event_cb(pool_event_t what, poolid_t id, void *arg)
7704 7739 {
7705 7740 pool_t *pool;
7706 7741 char *poolname = NULL;
7707 7742 struct mac_pool_arg *mpa;
7708 7743
7709 7744 pool_lock();
7710 7745 mpa = kmem_zalloc(sizeof (struct mac_pool_arg), KM_SLEEP);
7711 7746
7712 7747 switch (what) {
7713 7748 case POOL_E_ENABLE:
7714 7749 case POOL_E_DISABLE:
7715 7750 break;
7716 7751
7717 7752 case POOL_E_CHANGE:
7718 7753 pool = pool_lookup_pool_by_id(id);
7719 7754 if (pool == NULL) {
7720 7755 kmem_free(mpa, sizeof (struct mac_pool_arg));
7721 7756 pool_unlock();
7722 7757 return;
7723 7758 }
7724 7759 pool_get_name(pool, &poolname);
7725 7760 (void) strlcpy(mpa->mpa_poolname, poolname,
7726 7761 sizeof (mpa->mpa_poolname));
7727 7762 break;
7728 7763
7729 7764 default:
7730 7765 kmem_free(mpa, sizeof (struct mac_pool_arg));
7731 7766 pool_unlock();
7732 7767 return;
7733 7768 }
7734 7769 pool_unlock();
7735 7770
7736 7771 mpa->mpa_what = what;
7737 7772
7738 7773 mac_pool_update(mpa);
7739 7774 }
7740 7775
7741 7776 /*
7742 7777 * Set effective rings property. This could be called from datapath_setup/
7743 7778 * datapath_teardown or set-linkprop.
7744 7779 * If the group is reserved we just go ahead and set the effective rings.
7745 7780 * Additionally, for TX this could mean the default group has lost/gained
7746 7781 * some rings, so if the default group is reserved, we need to adjust the
7747 7782 * effective rings for the default group clients. For RX, if we are working
7748 7783  * with the non-default group, we just need to reset the effective props
7749 7784 * for the default group clients.
7750 7785 */
7751 7786 void
7752 7787 mac_set_rings_effective(mac_client_impl_t *mcip)
7753 7788 {
7754 7789 mac_impl_t *mip = mcip->mci_mip;
7755 7790 mac_group_t *grp;
7756 7791 mac_group_t *defgrp;
7757 7792 flow_entry_t *flent = mcip->mci_flent;
7758 7793 mac_resource_props_t *emrp = MCIP_EFFECTIVE_PROPS(mcip);
7759 7794 mac_grp_client_t *mgcp;
7760 7795 mac_client_impl_t *gmcip;
7761 7796
7762 7797 grp = flent->fe_rx_ring_group;
7763 7798 if (grp != NULL) {
7764 7799 defgrp = MAC_DEFAULT_RX_GROUP(mip);
7765 7800 /*
7766 7801 * If we have reserved a group, set the effective rings
7767 7802 * to the ring count in the group.
7768 7803 */
7769 7804 if (grp->mrg_state == MAC_GROUP_STATE_RESERVED) {
7770 7805 emrp->mrp_mask |= MRP_RX_RINGS;
7771 7806 emrp->mrp_nrxrings = grp->mrg_cur_count;
7772 7807 }
7773 7808
7774 7809 /*
7775 7810 * We go through the clients in the shared group and
7776 7811 * reset the effective properties. It is possible this
7777 7812 * might have already been done for some client (i.e.
7778 7813 * if some client is being moved to a group that is
7779 7814 * already shared). The case where the default group is
7780 7815 * RESERVED is taken care of above (note in the RX side if
7781 7816 * there is a non-default group, the default group is always
7782 7817 * SHARED).
7783 7818 */
7784 7819 if (grp != defgrp || grp->mrg_state == MAC_GROUP_STATE_SHARED) {
7785 7820 if (grp->mrg_state == MAC_GROUP_STATE_SHARED)
7786 7821 mgcp = grp->mrg_clients;
7787 7822 else
7788 7823 mgcp = defgrp->mrg_clients;
7789 7824 while (mgcp != NULL) {
7790 7825 gmcip = mgcp->mgc_client;
7791 7826 emrp = MCIP_EFFECTIVE_PROPS(gmcip);
7792 7827 if (emrp->mrp_mask & MRP_RX_RINGS) {
7793 7828 emrp->mrp_mask &= ~MRP_RX_RINGS;
7794 7829 emrp->mrp_nrxrings = 0;
7795 7830 }
7796 7831 mgcp = mgcp->mgc_next;
7797 7832 }
7798 7833 }
7799 7834 }
7800 7835
7801 7836 /* Now the TX side */
7802 7837 grp = flent->fe_tx_ring_group;
7803 7838 if (grp != NULL) {
7804 7839 defgrp = MAC_DEFAULT_TX_GROUP(mip);
7805 7840
7806 7841 if (grp->mrg_state == MAC_GROUP_STATE_RESERVED) {
7807 7842 emrp->mrp_mask |= MRP_TX_RINGS;
7808 7843 emrp->mrp_ntxrings = grp->mrg_cur_count;
7809 7844 } else if (grp->mrg_state == MAC_GROUP_STATE_SHARED) {
7810 7845 mgcp = grp->mrg_clients;
7811 7846 while (mgcp != NULL) {
7812 7847 gmcip = mgcp->mgc_client;
7813 7848 emrp = MCIP_EFFECTIVE_PROPS(gmcip);
7814 7849 if (emrp->mrp_mask & MRP_TX_RINGS) {
7815 7850 emrp->mrp_mask &= ~MRP_TX_RINGS;
7816 7851 emrp->mrp_ntxrings = 0;
7817 7852 }
7818 7853 mgcp = mgcp->mgc_next;
7819 7854 }
7820 7855 }
7821 7856
7822 7857 /*
7823 7858 * If the group is not the default group and the default
7824 7859 * group is reserved, the ring count in the default group
7825 7860 * might have changed, update it.
7826 7861 */
7827 7862 if (grp != defgrp &&
7828 7863 defgrp->mrg_state == MAC_GROUP_STATE_RESERVED) {
7829 7864 gmcip = MAC_GROUP_ONLY_CLIENT(defgrp);
7830 7865 emrp = MCIP_EFFECTIVE_PROPS(gmcip);
7831 7866 emrp->mrp_ntxrings = defgrp->mrg_cur_count;
7832 7867 }
7833 7868 }
7834 7869 emrp = MCIP_EFFECTIVE_PROPS(mcip);
7835 7870 }
7836 7871
7837 7872 /*
7838 7873 * Check if the primary is in the default group. If so, see if we
7839 7874  * can give it an exclusive group now that another client is
7840 7875  * being configured. We take the primary out of the default group
7841 7876  * because the multicast/broadcast packets for all the clients
7842 7877  * will land in the default ring in the default group, which means
7843 7878  * any client in the default group, even if it is the only one in
7844 7879  * the group, will lose exclusive access to the rings, and hence
7845 7880 * polling.
7846 7881 */
7847 7882 mac_client_impl_t *
7848 7883 mac_check_primary_relocation(mac_client_impl_t *mcip, boolean_t rxhw)
7849 7884 {
7850 7885 mac_impl_t *mip = mcip->mci_mip;
7851 7886 mac_group_t *defgrp = MAC_DEFAULT_RX_GROUP(mip);
7852 7887 flow_entry_t *flent = mcip->mci_flent;
7853 7888 mac_resource_props_t *mrp = MCIP_RESOURCE_PROPS(mcip);
7854 7889 uint8_t *mac_addr;
7855 7890 mac_group_t *ngrp;
7856 7891
7857 7892 /*
7858 7893 	 * If the primary is not in the default group, or if it is
7859 7894 	 * explicitly configured to be in the default group, or if it
7860 7895 	 * has set the RX rings property, return.
7861 7896 */
7862 7897 if (flent->fe_rx_ring_group != defgrp || mrp->mrp_mask & MRP_RX_RINGS)
7863 7898 return (NULL);
7864 7899
7865 7900 /*
7866 7901 * If the new client needs an exclusive group and we
7867 7902 * don't have another for the primary, return.
7868 7903 */
7869 7904 if (rxhw && mip->mi_rxhwclnt_avail < 2)
7870 7905 return (NULL);
7871 7906
7872 7907 mac_addr = flent->fe_flow_desc.fd_dst_mac;
7873 7908 /*
7874 7909 * We call this when we are setting up the datapath for
7875 7910 * the first non-primary.
7876 7911 */
7877 7912 ASSERT(mip->mi_nactiveclients == 2);
7878 7913 /*
7879 7914 * OK, now we have the primary that needs to be relocated.
7880 7915 */
7881 7916 ngrp = mac_reserve_rx_group(mcip, mac_addr, B_TRUE);
7882 7917 if (ngrp == NULL)
7883 7918 return (NULL);
7884 7919 if (mac_rx_switch_group(mcip, defgrp, ngrp) != 0) {
7885 7920 mac_stop_group(ngrp);
7886 7921 return (NULL);
7887 7922 }
7888 7923 return (mcip);
7889 7924 }
4544 lines elided