drbd-user May 2010 archive

Re: [DRBD-user] bond for drbd identical performance with one link down

From: Lee Riemer <lriemer_at_nospam>
Date: Thu May 20 2010 - 18:15:36 GMT
To: drbd-user@lists.linbit.com

Do you need multiple destination IPs to properly balance? It is my
understanding that a single stream will only traverse a single link.
Hence the MPIO requirement.
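One way to check whether the bond is actually aggregating is to run two netperf streams in parallel (to two destination addresses, or two ports) and add up the throughput column. A minimal sketch of the summing step — the two result lines below are canned examples standing in for the output of two parallel `netperf -p 2222 -H <addr>` runs, not real measurements:

```shell
# Canned stand-ins for the final result line of two parallel netperf runs;
# field 5 is Throughput in 10^6 bits/sec, as in the output quoted below.
results="87380 16384 16384 10.00 977.83
87380 16384 16384 10.00 975.10"

# Sum the Throughput column across the runs.
total=$(echo "$results" | awk '{sum += $5} END {printf "%.2f", sum}')
echo "aggregate Mbit/s: $total"
```

If the aggregate stays near a single gigabit link's ~940 Mbit/s even with multiple streams, the traffic is not being striped across both links.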

On 5/20/2010 1:07 PM, Bart Coninckx wrote:
> Hi,
>
> Admittedly not a DRBD issue per se, but I guess this list represents quite
> some experience in the area: I have two gigabit NICs bonded in balance-rr mode
> for DRBD sync. They are directly linked (no switch) to the other pair in
> the other DRBD node.
>
> Before syncing things I was testing the performance and failover. Netperf
> shows for instance this:
>
>
> iscsi2:/etc/sysconfig/network # netperf -p 2222 -H 10.0.2.3
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to 10.0.2.3 (10.0.2.3)
> port 0 AF_INET
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    10.00       977.83
>
>
> Pulling one cable gives me about the same speed. I would expect it to be at
> least 20% slower. It seems the round robin does not speed things up.
>
> The bonds on both sides show up fine in /proc/net/bonding/bond0.
>
> Anyone have any idea what I'm doing wrong?
>
> Cheers,
>
>
> Bart
> _______________________________________________
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>
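For reference, a minimal balance-rr bond definition in the SUSE-style sysconfig layout that the iscsi2:/etc/sysconfig/network prompt above suggests. This is a sketch only: the file name, slave interface names, and address are illustrative assumptions, not the poster's actual configuration.

```
# /etc/sysconfig/network/ifcfg-bond0  (illustrative; names/addresses assumed)
BOOTPROTO='static'
IPADDR='10.0.2.2/24'
STARTMODE='auto'
BONDING_MASTER='yes'
# balance-rr stripes packets across slaves; miimon enables link monitoring
BONDING_MODULE_OPTS='mode=balance-rr miimon=100'
BONDING_SLAVE0='eth1'
BONDING_SLAVE1='eth2'
```

With miimon set, a pulled cable should also show up as "MII Status: down" for that slave in /proc/net/bonding/bond0.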