Significantly decreased the overall migration time (time will vary depending on workload)
Increased number of concurrent vMotions:
- ESX host: 4 on a 1 Gbps network, 8 on a 10 Gbps network
- Datastore: 128 (both VMFS and NFS)
Maintenance mode evacuation time is greatly decreased due to the above improvements
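The concurrency limits above can be sketched as a simple admission check. This is a minimal illustration only; `admissible`, `max_host_vmotions`, and their parameters are hypothetical names, not a VMware API:

```python
DATASTORE_LIMIT = 128  # per-datastore limit, VMFS and NFS alike

def max_host_vmotions(link_gbps: float) -> int:
    """Per-host concurrent vMotion limit from the notes above:
    4 on a 1 Gbps network, 8 on a 10 Gbps network."""
    return 8 if link_gbps >= 10 else 4

def admissible(requested: int, link_gbps: float) -> int:
    # A migration is admitted only while both the host and
    # datastore limits hold, whichever is tighter.
    return min(requested, max_host_vmotions(link_gbps), DATASTORE_LIMIT)

print(admissible(10, 10))  # -> 8 on a 10 GbE host
print(admissible(10, 1))   # -> 4 on a 1 GbE host
```

In practice the tighter of the two limits wins: even with 10 GbE, no single host runs more than 8 concurrent vMotions against one datastore.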
Rewrite of the previous vMotion code:
- Sends memory pages bundled together instead of one at a time
- Less network/TCP/IP overhead
- Destination pre-allocates memory pages
- Multiple senders/receivers
No longer only a single world responsible for each vMotion, so throughput is no longer limited by a single host CPU
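The bundling point above can be illustrated with rough arithmetic: coalescing many pages per send amortizes the fixed per-send framing cost. This is a minimal sketch; the page size, header size, and helper name are illustrative assumptions, not VMware internals:

```python
PAGE_SIZE = 4096  # bytes per guest memory page (typical x86 page)
HEADER = 66       # assumed fixed per-send framing overhead, illustrative only

def bytes_on_wire(pages: int, batch: int) -> int:
    """Total wire bytes when `batch` pages are coalesced per send."""
    sends = -(-pages // batch)  # ceiling division: number of send operations
    return pages * PAGE_SIZE + sends * HEADER

# 1 GiB of guest memory = 262144 pages
one_by_one = bytes_on_wire(262144, 1)   # every page is its own send
bundled    = bytes_on_wire(262144, 32)  # 32 pages coalesced per send
print(one_by_one - bundled)             # framing bytes saved by bundling
```

The payload bytes are identical either way; only the per-send overhead shrinks, which is why bundling shows up as lower network/TCP/IP overhead rather than higher raw payload rates.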
• Sends list of changed pages instead of bitmaps
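The bitmap-versus-list tradeoff can also be shown with rough arithmetic: a bitmap costs one bit per page of the whole VM, while a list of changed page numbers scales with the churn. A minimal sketch, with page size, index width, and helper names as assumptions rather than VMware internals:

```python
def bitmap_bytes(total_pages: int) -> int:
    """A dirty-page bitmap costs one bit per page regardless of churn."""
    return (total_pages + 7) // 8

def page_list_bytes(changed_pages: int, index_bytes: int = 8) -> int:
    """A list of changed page numbers scales with churn, not VM size."""
    return changed_pages * index_bytes

total = 4 * 1024**3 // 4096           # pages in a 4 GiB VM
print(bitmap_bytes(total))            # -> 131072 bytes, every iteration
print(page_list_bytes(1000))          # -> 8000 bytes when 1000 pages changed
```

During the later pre-copy iterations only a small fraction of pages is still dirty, which is exactly when the list representation is far cheaper than re-sending a full bitmap.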
Performance improvement
• Throughput improved significantly for a single vMotion:
ESX 3.5 – ~1.0 Gbps
ESX 4.0 – ~2.6 Gbps
ESX 4.1 – max 8 Gbps
• Elapsed time reduced by 50%+ in 10GigE tests.
Mixing pNICs of different bandwidths is not supported