Wednesday, 19 May 2010

Types of virtual network adapters

vNIC Types on ESX

Four basic vNICs:

Two emulated types:
vlance – emulation of a very old physical AMD network device (AMD PCnet, handled by the pcnet32 Linux driver); an emulation of a real physical device

e1000 – emulation of a real physical Intel card

The reason for emulating real hardware: the guest OS ships drivers for these devices on its install CD.


The other two are VMware's own paravirtualized devices, designed with virtualization in mind:
vmxnet2 / enhanced vmxnet2 (ESX 3.5)
vmxnet3 (vSphere)

“Flexible” vNIC: morphable in Windows and Linux VMs
A combination of two devices in one:
virtual HW version 4 (ESX 3.x): vlance + vmxnet2
virtual HW version 7 (ESX 4.0): vlance + enhanced vmxnet2
It operates as vlance initially, but “morphs” into vmxnet2/enhanced vmxnet2 once VMware Tools is installed.
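To check which adapter class each vNIC of a VM actually uses, here is a minimal sketch with the pyVmomi vSphere API bindings (my own illustration, not from the presentation; the host name, credentials and the VM name "web01" are all placeholders):

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only, skips certificate checks
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")   # placeholder VM name
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualEthernetCard):
            # the class name reveals the vNIC type, e.g. VirtualPCNet32 (vlance),
            # VirtualE1000, VirtualVmxnet2 or VirtualVmxnet3
            print(dev.deviceInfo.label, type(dev).__name__)
finally:
    Disconnect(si)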

vNIC Features



vlance – no advanced features; very old.
vmxnet3 – all features, highest performance and flexibility; RSS can spread receive traffic across multiple vCPUs in Windows Server 2008.
In the middle sit vmxnet2 and enhanced vmxnet2; TSO support and jumbo frames are the differences between them (the sketch after the notes below shows how to check which offloads the guest driver exposes).

Notes:
vHW version 4: ESX 3.x, ESX 4.0
vHW version 7: ESX 4.0
* ESX 3.5 and later only
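As a quick way to see which of these offloads the guest driver actually exposes, a small sketch for a Linux guest (ethtool must be installed; "eth0" is an assumed interface name):

import subprocess

iface = "eth0"                                   # assumed interface name
out = subprocess.run(["ethtool", "-k", iface],   # -k lists offload settings
                     capture_output=True, text=True, check=True).stdout
for line in out.splitlines():
    if "segmentation-offload" in line or "scatter-gather" in line:
        print(line.strip())                      # e.g. "tcp-segmentation-offload: on"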


vNIC Selection on ESX

vmxnet3 gives the best overall performance today; guest drivers are available for Linux, Windows and Solaris.

Avoid vlance if possible.
Install VMware Tools so it morphs into vmxnet2/enhanced vmxnet2.

e1000 vNIC
A good compromise between performance and driver support: its driver is on the guest OS installation CD and it is available across guest OS types, so it is easier to use during installation than vmxnet2/vmxnet3. Performance is reasonable, but not as high as vmxnet3 or enhanced vmxnet2.

Why is vmxnet3 not the default? e1000 is the default for most guest OSes because the vmxnet driver is not on their install CDs; that is why VMware recommends e1000 as the default.

For non-TCP traffic, or whenever a larger Rx ring is needed:
use the e1000 or vmxnet3 vNIC. Both have larger default Rx ring sizes, and the size is adjustable from within the guest OS in most cases (see the sketch below).
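For example, on a Linux guest the current and maximum Rx ring sizes can be read (and often raised) with ethtool; the sketch below assumes ethtool is installed and the interface is called "eth0":

import subprocess

iface = "eth0"                                   # assumed interface name
print(subprocess.run(["ethtool", "-g", iface],   # -g shows ring parameters
                     capture_output=True, text=True).stdout)
# To request a larger Rx ring (must stay within the "Pre-set maximums"
# printed above; not every vNIC/driver accepts a change):
# subprocess.run(["ethtool", "-G", iface, "rx", "4096"], check=True)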


Conclusion

TCP vNIC traffic does very well.
Very high aggregate throughput and packet rate achievable.
If your application predominantly uses TCP, you should not worry about the impact of vNIC networking unless you need many Gbps of throughput per vNIC.
Few workloads even come close to needing > 2 Gbps or > 200k packets/s!

At higher data rates, UDP traffic may need a larger vNIC Rx ring.
A larger receive socket buffer may also be needed (see the sketch below).
Depending on the packet rate, the burst rate and the tolerable loss rate, you may also need to watch CPU and memory over-commitment levels.
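To illustrate the receive socket buffer point, a plain-Python sketch; the 4 MB value and port 5001 are arbitrary, and on Linux the kernel caps the effective size at net.core.rmem_max:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)   # request 4 MB
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)        # kernel may cap or adjust it
print("effective UDP receive buffer: %d bytes" % effective)
sock.bind(("", 5001))                                                   # arbitrary port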

Low jitter and very low latency requirements
This is an area where work is ongoing.
Early recommendation: use processors with EPT (Intel) or NPT support (AMD calls it RVI).

A lot of the time people don't have a real application with load demands anywhere near these levels, so there is nothing to worry about. :)

MTU size – when enabling jumbo frames, make sure they are also configured on the physical switches; a mismatch causes odd problems, for example ping works but larger transfers do not (see the verification sketch below).
Jumbo frames are supported only by vmxnet3 and enhanced vmxnet2, not by e1000: the physical E1000 hardware supports jumbo frames, but they are not implemented in the virtual e1000 yet.
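One simple way to verify the jumbo path end to end from a Linux guest is a non-fragmentable ping sized for a 9000-byte MTU; in the sketch below the target address 10.0.0.2 is a placeholder:

import subprocess

target = "10.0.0.2"                      # placeholder address on the jumbo-frame network
payload = 9000 - 28                      # 9000-byte MTU minus 20 B IP + 8 B ICMP headers
result = subprocess.run(["ping", "-c", "3", "-M", "do", "-s", str(payload), target],
                        capture_output=True, text=True)
print(result.stdout or result.stderr)
print("jumbo frames pass end to end" if result.returncode == 0
      else "jumbo path broken (a normal ping may still work)")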

Source: Virtual Network Performance
http://www.vmworld.com/docs/DOC-3875
