Hi all, first time posting.
After updating to 24H2, my VMs lost access to the host's network shares.
Please note: this is different from other issues I've seen discussed elsewhere. It is not the authentication problem or the general networking problems (169.x.x.x addresses and such).
Hours of painful troubleshooting and monitoring revealed that the host doesn't even seem to listen on the SMB ports on the vEthernet (Default Switch) adapter; these ports look closed from the VMs. Either the SMB server doesn't bind to these virtual adapters at all, or (doubtful) some firewall other than Windows Firewall is blocking access.
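To illustrate what I mean by "closed", here is roughly the kind of probe you can run from a guest on the Default Switch (a minimal sketch; the host IP below is just a placeholder, since the Default Switch address changes between reboots — check ipconfig on the host for the real one):

```python
import socket

# Placeholder: the host's vEthernet (Default Switch) address, i.e. the VM's default gateway.
HOST_IP = "172.28.112.1"

# Probe the SMB ports (445 for SMB over TCP, 139 for NetBIOS session service).
for port in (445, 139):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        result = s.connect_ex((HOST_IP, port))  # 0 means the port accepted the connection
    print(f"{HOST_IP}:{port} ->", "open" if result == 0 else f"closed/filtered (errno {result})")
```

From a VM on the Default Switch both ports come back closed/filtered, while the same probe against the host's LAN IP from another physical machine succeeds.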
To be clear: (1) the VMs can reach the Internet, (2) my shares on the host are still available to other hosts on the local network, and (3) shares on other hosts on the local network are still available to my VMs. The problem is specifically that VMs connected to the Default Switch vEthernet cannot reach the host's shares; everything else works as before. Windows Firewall has nothing to do with it either (disabling it on both the host and the guest doesn't help).
For now, my workaround is connecting all my VMs to a custom External virtual switch (bound to one of my host PC's NICs). This is cumbersome: I have to switch the VMs' network every time I move between Ethernet and Wi-Fi, and I'm not comfortable exposing my whole LAN to the VMs, or my VMs to the LAN, like that. I'd much rather NAT everything through the Default Switch as before.
WSL and Docker VMs, which connect through their own internal network vSwitch, can't access the host's SMB shares either.
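For WSL, the equivalent check would probe the default gateway, which is the host side of the WSL vSwitch under the default NAT networking mode (mirrored networking behaves differently); again just a sketch under that assumption:

```python
import socket
import subprocess

# Inside WSL2 (default NAT mode), the default gateway is the host end of the WSL vSwitch.
route = subprocess.run(["ip", "route", "show", "default"],
                       capture_output=True, text=True).stdout
gateway = route.split()[2]  # output looks like: "default via <gateway> dev eth0 ..."

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(3)
    result = s.connect_ex((gateway, 445))
print(f"{gateway}:445 ->", "open" if result == 0 else f"closed/filtered (errno {result})")
```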
Has anyone else encountered this? Is there a solution to make the host expose SMB through the default vEthernet again?
- Windows Build/Version
- Windows 11 Pro 24H2 / 26100.2314 / Windows Feature Experience Pack 1000.26100.32.0
My Computer
System One
- OS
- Windows 11
- Computer type
- Laptop
- Manufacturer/Model
- Lenovo X1 Carbon 9th Gen
- CPU
- i7-1165G7
- Memory
- 32GB
- Graphics Card(s)
- Iris Xe + RTX 4090 eGPU