I think I've made some interesting progress and a few discoveries: the Sonos system is now correctly and stably identified by controllers in distinct VLANs.
I'd like to report my experience, and I apologize for using an online translator.
To give a little context: my network is built around a MikroTik CCR2004, a UniFi Aggregation Pro, and several other UniFi switches and access points. Currently the only link between the CCR and the aggregation switch is a 2x SFP+ bond; all VLANs travel tagged over this bond, and the CCR's Ethernet ports are unused. The other switches connect to the aggregation switch, and the access points to those switches.
Basically, a single bridge is configured on the CCR with the aforementioned bond as a port, vlan-filtering enabled, and various VLAN sub-interfaces. A few months ago a WAN failover with 2 ISPs was configured (based on viewtopic.php?t=157048), and more recently WAN load balancing with 3 ISPs. The firewall follows the official MikroTik documentation (https://help.mikrotik.com/docs/display/ ... d+Firewall), with various modifications for specific needs and a few queues; it currently imposes no traffic restrictions between "trusted" VLANs.
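For reference, the layout described above would look roughly like this on the CCR. All names and VLAN IDs here (bridge1, bond1, 10/20) are placeholders, not my actual config:

```
# Single bridge with VLAN filtering; the LACP bond is its only port
/interface bridge
add name=bridge1 vlan-filtering=yes
/interface bridge port
add bridge=bridge1 interface=bond1
# All VLANs travel tagged on the bond; the bridge itself is tagged
# so the router can have L3 sub-interfaces on each VLAN
/interface bridge vlan
add bridge=bridge1 tagged=bridge1,bond1 vlan-ids=10,20
/interface vlan
add interface=bridge1 name=vlan10-sonos vlan-id=10
add interface=bridge1 name=vlan20-clients vlan-id=20
```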
There are 10 Sonos S2 speakers in total. They connect to a common Wi-Fi network, but a dedicated VLAN is assigned to them via RADIUS. The Sonos controllers are both wired and wireless and reside in two other VLANs. Some controllers connect through client-to-site and site-to-site WireGuard tunnels terminated on the CCR.
The VLAN segregation was done around last January, and thanks to PIM-SM the Sonos system initially worked quite stably, requiring only that multicast traffic be allowed between the trusted VLANs. Recently, however, more or less all local controllers began having difficulty identifying the Sonos system. Detection seemed to fail when two particular speakers had an unstable Wi-Fi connection. In recent days, the system had become practically unreachable except from a smartphone placed in the same VLAN as the speakers. Curiously, though, the controllers connected via WireGuard never had any difficulty.
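As a sketch of the multicast routing and authorization described above (assuming RouterOS v7; instance and interface names are placeholders):

```
# PIM-SM between the Sonos VLAN and the controller VLANs
/routing pimsm instance
add name=pimsm1
/routing pimsm interface-template
add instance=pimsm1 interfaces=vlan10-sonos,vlan20-clients
# Allow routed multicast between trusted VLANs
# (must sit above any drop rules in the forward chain)
/ip firewall filter
add chain=forward dst-address=224.0.0.0/4 action=accept \
    comment="allow multicast between trusted VLANs"
```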
Here is what I think I have figured out.
1. After initially discovering the system via multicast/SSDP, the controller "associates" with a particular speaker (visible in the app settings) and can then keep communicating with the system via unicast, controlling it even without multicast routing, at least for a certain period and as long as the associated speaker remains reachable on the network. This would explain why the remote WireGuard clients, initially associated with speakers that had a stable connection, always worked, while the local clients, associated with speakers that had an unstable connection, had problems.
2. The UniFi system's multicast behavior changed some time ago. The description in the UniFi console suggests that IGMP snooping is merely an optimization for multicast; in reality it now seems to be a hard requirement for multicast forwarding. Even with IGMP snooping disabled on all UniFi networks, and disabled on the CCR bridge as well, SSDP multicast traffic is not flooded to the speaker network, at least not the M-SEARCH packets from the controller network. If IGMP snooping is enabled on a UniFi network, the Aggregation Pro takes on the IGMP querier role (this is what the CCR reports). On the CCR, IGMP snooping really is just an optimization and can be enabled or disabled; however, if it detects an external querier, the CCR will not send queries regardless of its settings.
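To illustrate point 1: the initial discovery is a standard SSDP search sent over UDP to 239.255.255.250:1900. The ST value below is the ZonePlayer device type that, as far as I know, Sonos devices answer to (I have not re-verified it against a packet capture):

```
M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 1
ST: urn:schemas-upnp-org:device:ZonePlayer:1
```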
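And on point 2, these are the relevant knobs on the CCR bridge (placeholder bridge name, assuming RouterOS v7; `multicast-querier` only takes effect with snooping enabled):

```
# Bridge-level multicast settings on the CCR
/interface bridge set bridge1 igmp-snooping=yes multicast-querier=yes
# Inspect the multicast group memberships the bridge has learned
/interface bridge mdb print
```

Even with `multicast-querier=yes`, the CCR stays silent if it sees an external querier, which matches the behavior I observed once the UniFi aggregation started sending queries.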
In conclusion, I believe the problems I had were not so much in the MikroTik CCR configuration as in the UniFi system's, probably altered by an update that initially went unnoticed for the reasons described in point 1 above.