Abstract:
This document briefly describes the differences between the two technologies enabling multipath I/O over iSCSI, shows the appropriate configuration locations in the Windows GUI, and closes with a typical misunderstanding about aggregation prerequisites. The information in this article is based on the author’s experience and on information found in the documents "Microsoft iSCSI Initiator Version 2.x Users Guide" (abbreviated here as MS iSCSI UG) and "A ‘Multivendor Post’ to help our mutual iSCSI customers using VMWare" (abbreviated here as Multivendor Post) – see online availability in the section "Links".
The quiz:
Instead of starting with the theory, we take a different approach and show two screenshots illustrating load balance policy settings in two different configuration locations:
Figure one shows the page reached by clicking Targets / Details / Sessions / Connections
Figure two shows the page reached by clicking Targets / Details / Devices / Advanced / MPIO
If you already know that the policies shown can differ from each other and what each one is for, this article will not teach you anything new. If you are not sure, feel free to follow the text and answer the two-figure quiz at the end.
The difference between MCS and MPIO (in a nutshell):
First, let us agree on what both technologies have in common: both provide multipathing for (iSCSI) I/O operations using multiple hardware (or OSI Layer 1) components, such as Ethernet NICs or iSCSI HBAs. The purpose of multipathing is redundancy and aggregation – how this is achieved is controlled by the load balance policies shown in the figures above, i.e. by the decision which paths are active and which are passive (or standby, in Microsoft parlance). For the exact definitions of the policies, such as round robin, weighted paths, fail over only, etc., please refer to "MS iSCSI UG", p. 41.
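As a rough illustration only – the path names are invented and the semantics are simplified, the authoritative definitions being those in "MS iSCSI UG" – here is a minimal Python sketch of how a "fail over only" and a "round robin" policy would distribute I/O requests over two paths:

```python
# Minimal sketch (not actual initiator code) of two common load balance policies.
# Path names are made up; exact policy semantics are in MS iSCSI UG, p. 41.

paths = ["NIC1", "NIC2"]   # two hypothetical active paths to the same target

def fail_over_only(io_number):
    # All I/O goes over the first (active) path; the second one stays
    # passive/standby and is used only if the first path fails.
    return paths[0]

def round_robin(io_number):
    # I/O requests are spread evenly over all active paths in turn.
    return paths[io_number % len(paths)]

for policy in (fail_over_only, round_robin):
    print(policy.__name__, [policy(i) for i in range(6)])
```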
Finally, here are the condensed definitions of both technologies:
MCS allows the initiator to establish multiple TCP/IP connections to the same target within the same iSCSI session.
MPIO, in contrast, allows the initiator to establish multiple iSCSI sessions (each having a single TCP/IP connection) to the same target, effectively aggregating the duplicate devices into a single device.
If you are not familiar with the terminology (initiator, target, session, connection, initiator port and network portal), please refer to "Multivendor Post", which provides very informative sketches of the iSCSI network architecture.
Now that we know that MCS effectively means several connections within one session and MPIO means multiple sessions, the question is when to use which. Mainly you will have to concentrate on two perspectives – vendor support and load balance policy inheritance. The question – or rather the schools of thought – about speed and performance differences is left aside here, because in the author’s opinion the two are almost equal and you will probably never get to the point of fully utilizing either. With this said, consider the following simple rule of thumb: you can use MCS only when it is supported by the vendor’s SAN and you are not using hardware iSCSI HBAs. In any other case use MPIO.
The second consideration is this: if, given the above conditions, you are able to use MCS but want to apply different load balancing policies to different targets (and effectively to LUNs or groups of LUNs), you will still be better off using MPIO. This is because load balancing policies adhere to the session. In other words, when you apply a policy with MCS it applies to the whole session, no matter how many connections are aggregated "beneath" it. With MPIO, on the other hand, you can set different policies for different LUNs, because the multipathing uses different iSCSI sessions. The sketch below illustrates this difference in policy scope.
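The following minimal Python sketch is a data-structure view only – not any real API; all names, portals and IQNs are invented – of why an MCS policy covers every connection in the session, while MPIO lets each device (LUN), i.e. each group of sessions, carry its own policy:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class McsSession:
    target: str
    load_balance_policy: str                   # one policy for the whole session...
    connections: List[str] = field(default_factory=list)  # ...no matter how many connections it bundles

@dataclass
class MpioSession:
    target: str
    connection: str                            # exactly one TCP/IP connection per session

@dataclass
class MpioDevice:
    lun: str
    load_balance_policy: str                   # set per device under Devices / Advanced / MPIO
    sessions: List[MpioSession] = field(default_factory=list)

# MCS: one session, two connections, a single shared policy
mcs = McsSession(target="iqn.2000-01.com.example:target1",
                 load_balance_policy="Round Robin",
                 connections=["NIC1 -> portal1", "NIC2 -> portal2"])

# MPIO: two sessions to the same target (one connection each)...
sess_a = MpioSession("iqn.2000-01.com.example:target1", "NIC1 -> portal1")
sess_b = MpioSession("iqn.2000-01.com.example:target1", "NIC2 -> portal2")

# ...and every LUN behind that target becomes one MPIO device with its own policy
lun0 = MpioDevice("LUN0", "Round Robin",    [sess_a, sess_b])
lun1 = MpioDevice("LUN1", "Fail Over Only", [sess_a, sess_b])

print(mcs.load_balance_policy)                               # applies to both connections at once
print(lun0.load_balance_policy, lun1.load_balance_policy)    # can differ per LUN
```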
Now you can answer the quiz:
The first figure shows the MCS configuration, and its load balance policy is "session-global", no matter how many paths are listed under "The session has the following connections". The second figure shows the load balance properties set for each individual device under Device Properties / MPIO.
A last word on aggregation:
There are many articles on the web claiming that you cannot achieve aggregated bandwidth (or data transfer speed, if you like) when using multipathing to a single target or a single LUN. Generally speaking, this is not correct, although it can be true in some situations. Perhaps one of the main sources of such rumors was the implementation of the iSCSI software initiator under ESX(i) 3.x, which "only supports a single iSCSI session with a single TCP connection for each iSCSI target" (Multivendor Post). This effectively limited the speed per target, i.e. per single LUN, to that of a single Ethernet link. The second source of misunderstandings was the SAN vendors themselves, who usually configure multiple links in an active-passive manner and thus provide only redundancy. Such is the case, for example, with IBM: "IBM System Storage N series MPIO Support – Frequently Asked Questions", p. 10. Note that Microsoft provides active-active sessions with a round-robin policy, and there is no per-target limitation such as in ESX(i)’s case.
The last obstacle in the way of a possible performance improvement through multipathing could be the physical architecture behind the LUN and the actual I/O request. For example, if you are using a single spindle in a SAN, exposed as a pass-through LUN, you can hardly expect good results. Further, consider a read of a large file physically located on a single spindle. Many more examples can be given – the idea behind them all is that if you are using multipath aggregation to a single target and a single LUN, the only way to push the bandwidth above the limits of a single Ethernet link is if the application using this LUN employs multithreaded data access and the data is located either on multiple RAID spindles or, even better, already in the SAN’s RAM cache. The simple model below illustrates this limit.
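Here is a back-of-the-envelope Python model of that argument; the numbers are purely illustrative and the formula is the author’s simplification, not a benchmark:

```python
# Rough model of the aggregation limit described above. All numbers are invented.
def effective_throughput(path_mbps, n_paths, concurrent_streams, backend_mbps):
    # Following the argument above, a single outstanding I/O stream keeps at
    # most one path busy at a time, so the usable network capacity grows only
    # with the concurrency the application offers (up to the number of paths).
    usable_network_mbps = path_mbps * min(n_paths, concurrent_streams)
    # ...and the backend (spindles or SAN cache) caps the result.
    return min(usable_network_mbps, backend_mbps)

# Single-threaded read of a large file on one spindle: one link, one disk
print(effective_throughput(path_mbps=1000, n_paths=2, concurrent_streams=1, backend_mbps=900))    # 900
# Multithreaded access to data spread over several spindles (or served from cache): both links help
print(effective_throughput(path_mbps=1000, n_paths=2, concurrent_streams=4, backend_mbps=4000))   # 2000
```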
With all this said, enjoy your iSCSI implementation and feel free to experiment further with it. Just think of the different NICs (with or without TOE and iSCSI offloading), switches, cables, jumbo frames and flow control – all of these require careful configuration and can contribute more to performance than simply adding a second path.
Links:
Microsoft iSCSI Initiator Version 2.x Users Guide: http://download.microsoft.com/download/a/e/9/ae91dea1-66d9-417c-ade4-92d824b871af/uguide.doc
A ‘Multivendor Post’ to help our mutual iSCSI customers using VMWare: http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html
IBM System Storage N series MPIO Support – Frequently Asked Questions: http://www.redbooks.ibm.com/redpapers/pdfs/redp4213.pdf