Curing the long distance blues

23 05 2007

Now is the time to rejoice! I enabled ‘ISL R_RDY’ mode on the four ISLs between sites, enabled long distance mode with sufficient buffer credits for 30km at full frame size (my links are either 16km or 18km, depending on who I’m talking to) and sat back to watch portperfshow. I thought I was going to wet my pants when I saw the speed on each link leap from 60MB/s to 150! I reset the port stats and noticed I was still getting a number of ‘out of buffer credits’ errors. I’m not sure whether I can get stats on the average frame size going through (can anyone advise?), so I took a stab, increased the distance setting to 40km and reset the stats again. My self-pleasure went into overload as the speed topped 170MB/s!
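For anyone wanting to try the same thing, the gist from the switch CLI is something like the sketch below. The port number is just an example, and the exact portcfglongdistance arguments vary between Fabric OS releases, so check your own documentation rather than taking this as gospel:

portcfgislmode 2/15, 1              (put the ISL port into R_RDY flow control mode)
portcfglongdistance 2/15 LD 1 40    (LD distance level, long-distance link init, 40km desired distance)
portstatsclear 2/15                 (zero the counters before measuring)
portperfshow                        (watch the throughput tick over)
portstatsshow 2/15                  (check the ‘out of buffer credits’ style counters)

On the frame size question: if I’m reading the portstatsshow counters right, stat_wtx (4-byte words transmitted) divided by stat_ftx (frames transmitted), times four, should give a rough average frame size, but corrections are welcome.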

To get a tape copy running from the remote site back to production, we deleted a couple of copy pool volumes in TSM and re-ran the copy storage pool backup with multiple drives, so I had tape traffic going in both directions, and soon the throughput was topping 200MB/s!
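The TSM side of that is nothing exotic; from the admin command line it amounts to something like this (the volume and pool names are placeholders, not ours):

delete volume COPY001 discarddata=yes            (throw away a couple of copy pool volumes)
backup stgpool TAPEPOOL COPYPOOL maxprocess=4    (re-create the copies using several drives in parallel)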

For once, I am happy, and needed a good hosing down to calm myself. I just need to get some long distance (Extended Fabrics) licences for the spare 4100s so I can move the other site’s links (we have a second remote site linked to another machine room) off the creaking 12000s, and then I will be truly joyful.

On the home front, so far there have been no further steaming gifts left in the living room, just the odd puddle!


One response

2 08 2008
Rob M

Well, I’m gonna comment on this 1+ year old blog post because it helped me see the R_RDY light and it may help another poor soul.

We have two fabrics, each with a SilkWorm 48000 and a 4900. The first fabric connects two 4G ISLs trunked over 8km through Cisco ONS 15454s. The second connects two 4G ISLs trunked over 3km of dark fiber. Eventually we will be going all Cisco ONS… multiple paths.

So, we thought we had everything working just fine. Both fabrics were merged and IBM SVCs were replicating ~5TB synchronously. We then implemented an HACMP cluster across the ISLs, and the ONS links just sucked for mounting heartbeat disks. I connected a Windows host exclusively to a disk on the other side of the ONS gear and performance was horrible. This meant I was in troubleshooting hell… especially with IBM. The only noticeable difference in errors between our ONS ISLs and the dark fiber ISLs was that the ONS ISLs were racking up massive buffer credit counts in the advanced tab of the ISL ports. We even put them on the same 3km fiber and it was still horrible.
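If anyone wants to watch the same thing from the CLI instead of the Web Tools advanced tab, I believe the relevant counter shows up in portstatsshow (port number is just an example):

portstatsclear 15    (zero the counters)
portstatsshow 15     (watch tim_txcrd_z, the time the port sits at zero transmit buffer credits)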

Anyway, after finding this post and getting our Cisco guy on the Cisco support line… he said that we needed our ISLs in R_RDY mode. “Of course they are,” I replied… not knowing what the hell that meant… but assuming it got turned on when the ISLs were in LE mode. (We don’t have Extended Fabrics licenses because we are under 10km.)
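For anyone else who assumed the same thing: as far as I can tell they are two separate settings, roughly like this (port number is an example, and your Fabric OS syntax may differ):

portcfglongdistance 15 LE    (LE distance level: enough credits for up to 10km, no Extended Fabrics license needed)
portcfgislmode 15, 1         (R_RDY flow control, set per port, independent of the distance level)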

Turning on R_RDY mode didn’t fix the problem… yet. One thing we noticed is that before turning on R_RDY mode, the distance extension mode on the ONS didn’t work: the ISLs would connect but then degrade and fail. So we used passthrough mode to make everything work… or so we thought. Once we put the ISLs into R_RDY mode, it let us put the ONS into distance extension mode and whoa… I nearly wet my pants as well (a theme on this site) at the throughput we got… and no buffer credit errors!!!!

All is well. Now is truly the time to rejoice!!!
