MQSeries.net Forum Index » IBM MQ Performance Monitoring » Performance tuning for Network Latency

saurabh25281
PostPosted: Wed Jul 03, 2019 4:03 am    Post subject: Performance tuning for Network Latency

Centurion

Joined: 05 Nov 2006
Posts: 107
Location: Bangalore

Hi All,

We have MQ configured across 2 Datacenters.

When our client applications connect to MQ servers within the same DC, the response time is in milliseconds. This changes when the apps connect to the other DC, where the response time is just over a minute.

We figured out that the slow response is due to network latency. A simple ping within the DC responds within 0.2ms, whereas across DCs it increases to 120ms.

Even reading 100 messages should take at least 12 seconds. Is there a way to read a batch of messages with a single MQGET call? I don't see an equivalent of MQPUT1 for get operations. If not, how can I achieve the same effect to improve performance?

Our application performs the read operation in the below sequence
Code:
MQCONN
MQOPEN
   MQGET - multiple calls (1 call per message)
MQCLOSE
MQDISC


Regards
Saurabh
exerk
PostPosted: Wed Jul 03, 2019 4:30 am

Jedi Council

Joined: 02 Nov 2006
Posts: 6339

Consider this scenario:

1. An MQ client application can get and process 10 messages per second when connected to a localised (same data centre) queue manager. The same MQ client application can get and process 1 message per second when connected to a remote (different data centre) queue manager because the network is choking the data transfer.

2. The MQ client application is made more efficient, the queue managers 'tuned', and it can now get and process 20 messages per second when connected to the localised (same data centre) queue manager, but still only 1 message per second when connected to a remote (different data centre) queue manager because the network is still choking the data transfer.

So, will anything you do to improve application/MQ performance do anything to improve the network latency?
_________________
It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
Vitor
PostPosted: Wed Jul 03, 2019 5:18 am    Post subject: Re: Performance tuning for Network Latency

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

saurabh25281 wrote:
We figured out that the slow response is due to network latency. A simple ping within the DC responds within 0.2ms, whereas across DCs it increases to 120ms.


I agree with my almost worthy associate; with that level of latency anything you do in the MQ layer is a Band Aid on a broken leg.

saurabh25281 wrote:

Our application performs the read operation in the below sequence
Code:
MQCONN
MQOPEN
   MQGET - multiple calls (1 call per message)
MQCLOSE
MQDISC



I'm assuming that by "1 call per message" you mean the application performs repeated MQGET calls until the queue is empty, because if you performed all of those operations for every message, getting a response in milliseconds from the local data center would be good going. If you don't already structure it that way, adjust your code so everything except the MQGET is performed once.
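For illustration, a minimal C sketch of that shape using the MQI; the queue manager and queue names are placeholders and error handling is trimmed:
Code:
#include <string.h>
#include <cmqc.h>    /* IBM MQ MQI definitions */

int main(void)
{
    MQCHAR48 qmName = "QM1";            /* placeholder queue manager */
    MQHCONN  hConn;                     /* connection handle         */
    MQHOBJ   hObj;                      /* object handle             */
    MQOD     od  = {MQOD_DEFAULT};      /* object descriptor         */
    MQMD     md  = {MQMD_DEFAULT};      /* message descriptor        */
    MQGMO    gmo = {MQGMO_DEFAULT};     /* get-message options       */
    MQLONG   compCode, reason, msgLen;
    char     buffer[4096];

    /* Connect and open ONCE, not per message */
    MQCONN(qmName, &hConn, &compCode, &reason);
    strncpy(od.ObjectName, "APP.QUEUE", MQ_Q_NAME_LENGTH);  /* placeholder */
    MQOPEN(hConn, &od, MQOO_INPUT_AS_Q_DEF | MQOO_FAIL_IF_QUIESCING,
           &hObj, &compCode, &reason);

    /* Only the MQGET repeats: one call per message until the queue is empty */
    gmo.Options = MQGMO_NO_SYNCPOINT | MQGMO_NO_WAIT | MQGMO_FAIL_IF_QUIESCING;
    do {
        memcpy(md.MsgId,    MQMI_NONE, sizeof(md.MsgId));    /* reset so the     */
        memcpy(md.CorrelId, MQCI_NONE, sizeof(md.CorrelId)); /* next get matches */
        MQGET(hConn, hObj, &md, &gmo, sizeof(buffer), buffer,
              &msgLen, &compCode, &reason);
        if (compCode == MQCC_OK) {
            /* process the message here */
        }
    } while (compCode == MQCC_OK);      /* MQRC_NO_MSG_AVAILABLE ends the loop */

    /* Close and disconnect ONCE */
    MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);
    MQDISC(&hConn, &compCode, &reason);
    return 0;
}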

The best you could hope for is that your application is eligible for read ahead.

But your remote link is 600 times slower than the local one. Nothing above the network layer is going to help that much.
_________________
Honesty is the best policy.
Insanity is the best defence.
saurabh25281
PostPosted: Wed Jul 03, 2019 11:33 am

Centurion

Joined: 05 Nov 2006
Posts: 107
Location: Bangalore

Quote:
The best you could hope for is that your application is eligible for read ahead.

Thanks for the suggestion, will check this out.

Quote:
But your remote link is 600 times slower than the local one. Nothing above the network layer is going to help that much.

It might not help, as we do not have an equivalent of MQPUT1. What might be the reason for not having such a feature for MQGET, one that allows all messages to be retrieved in a single call?

This would have improved performance for scenarios like mine, after configuring a sufficiently large buffer.

Regards
Saurabh
Vitor
PostPosted: Wed Jul 03, 2019 12:06 pm

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

saurabh25281 wrote:
It might not help, as we do not have an equivalent of MQPUT1. What might be the reason for not having such a feature for MQGET, one that allows all messages to be retrieved in a single call?


Don't forget that MQPUT1 is less efficient than MQPUT, as it connects, opens, puts, closes and disconnects for each put. It's massively more convenient for replying to things and simplifies the code, and that usually outweighs the performance costs.

saurabh25281 wrote:
This would have improved the performance for scenarios as mine, after configuring sufficient Buffer.


So let's assume a new function MQGETMANY, which does what you suggest. How much buffer is sufficient? You don't know how many messages are on the queue, or their size. You could fix this by doing an MQGETMANY with browse; but what do you do if there's been a failure (because you're using the DR site), it's taken a while to switch over, and there are more messages piled up than you have RAM in your VM?

You successfully do an MQGETMANY and pull 2000 messages into this buffer of yours. What do you want to happen if the 1500th message can't be processed because there's bad application data in the payload? Do you want that message put on the backout queue? All 1500? If you just want that one, is it your application or the queue manager that maintains the pointer?

You successfully do an MQGETMANY and pull 2000 messages into this buffer of yours. You process all 2000 successfully. The 2001st message (which arrived after your call completed, because that queue manager is now the live one) is going to sit on the queue for ages, violating its SLA, even if you can multi-thread reading the messages out of this buffer (which is a whole raft of memory management problems), because it's going to take a while to commit 2000 messages on the remote queue.

These are just the most serious problems I can think of with an MQGETMANY, never mind doing it over a link as latent as that (where this 2001st message I describe will have time to order some food and a few drinks before it's read off). I'm sure there are others.

You're trying to use a link with very high latency for a solution with a tight SLA. This is not going to work. The business either needs to spring for a better link or accept that, in the event of a disaster, things will still work but there will be reduced service.
_________________
Honesty is the best policy.
Insanity is the best defence.
hughson
PostPosted: Wed Jul 03, 2019 3:31 pm

Padawan

Joined: 09 May 2013
Posts: 1914
Location: Bay of Plenty, New Zealand

If your messages are non-persistent, you could look into Read Ahead. This is sort of like asking for multiple messages at once, but you don't have to code your application differently. It's a lower QoS, so it can only be used with non-persistent messages.
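If you do want to opt in explicitly from the application side, MQOPEN has a read-ahead option. A minimal C sketch, assuming an existing client connection hConn and a placeholder queue name (alternatively, set DEFREADA(YES) on the queue and let the default MQOO_READ_AHEAD_AS_Q_DEF take effect):
Code:
#include <string.h>
#include <cmqc.h>

/* Open a queue requesting client read ahead; only effective for
   non-persistent messages over a client channel that permits it. */
MQHOBJ open_with_read_ahead(MQHCONN hConn, MQLONG *pCompCode, MQLONG *pReason)
{
    MQOD   od   = {MQOD_DEFAULT};
    MQHOBJ hObj = MQHO_UNUSABLE_HOBJ;

    strncpy(od.ObjectName, "APP.QUEUE", MQ_Q_NAME_LENGTH);   /* placeholder */
    MQOPEN(hConn, &od,
           MQOO_INPUT_AS_Q_DEF | MQOO_READ_AHEAD | MQOO_FAIL_IF_QUIESCING,
           &hObj, pCompCode, pReason);
    return hObj;    /* still MQHO_UNUSABLE_HOBJ on failure */
}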

Cheers,
Morag
_________________
Morag Hughson @MoragHughson
IBM MQ Technical Education Specialist
Get your IBM MQ training here!
MQGem Software
gbaddeley
PostPosted: Wed Jul 03, 2019 4:27 pm    Post subject: Re: Performance tuning for Network Latency

Jedi

Joined: 25 Mar 2003
Posts: 2492
Location: Melbourne, Australia

saurabh25281 wrote:
We figured out that the slow response is due to network latency. A simple ping within the DC responds within 0.2ms, whereas across DCs it increases to 120ms.

Wow, that's very high. What explanation is given by your network support team? Usually it's because the network link does not have enough capacity for the traffic it needs to carry, or there is an app using more capacity than expected.

Fight the cause. MQ generally has very good, optimised performance. There is not much you can do with MQ config or code if network latency is the limiting resource.
_________________
Glenn
saurabh25281
PostPosted: Thu Jul 04, 2019 7:53 am

Centurion

Joined: 05 Nov 2006
Posts: 107
Location: Bangalore

Hi All,

The DCs in question are located on different continents (4,600 miles apart), and from sources in the public domain I would assume the ping represents realistic response times. Correct me if I am wrong on this.

Quote:
So let's assume a new function MQGETMANY, which does what you suggest. How much buffer is sufficient?

I should be able to choose the buffer size as per my application's needs. We can do that with the TCP buffers for sender/receiver channels.

Quote:
You don't know how many messages are on the queue or their size.

We are the application owner and know the average message size, and hence can approximate the buffer size we want.

Quote:
what do you do if there's been a failure (because you're using the DR site), it's taken a while to switch over, and there are more messages piled up than you have RAM in your VM?

We can decide on an acceptable number of messages that does not overrun the RAM by any measure. And the messages in RAM would still be on disk until there is a commit request from the client application, so there is no risk of losing messages.

Quote:
You successfully do an MQGETMANY and pull 2000 messages into this buffer of yours. What do you want to happen if the 1500th message can't be processed because there's bad application data in the payload? Do you want that message put in the backout queue? All 1500?

It's our responsibility how we handle failure. Let the application decide for itself.

Quote:
You successfully do an MQGETMANY and pull 2000 messages into this buffer of yours. You process all 2000 successfully. The 2001st message (which arrived after your call completed because that queue manager is now the live one) is going to sit on the queue for ages, violating it's SLA, even if you can multi-thread reading the messages out of this buffer (which is a whole raft of memory management problems) because it's going to take a while to commit 2000 messages on the remote queue.

Like I mentioned earlier, the application can decide how much buffer it uses and what number of messages gives a good enough throughput and an acceptable response time. A probable batch size for my application, for example, would be 50, the same as the sender channel batch size.
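The closest the existing MQI gets to that is application-side batching under syncpoint: get messages one at a time but commit once per batch. A sketch under those assumptions (handles come from MQCONN/MQOPEN as in the earlier sketch; note this batches the commits, not the network round trips, so each client MQGET still costs a line turnaround):
Code:
#include <string.h>
#include <cmqc.h>

#define BATCH_SIZE 50   /* the figure discussed above, not a recommendation */

/* Get messages under syncpoint, committing once per BATCH_SIZE messages. */
void drain_in_batches(MQHCONN hConn, MQHOBJ hObj)
{
    MQMD   md  = {MQMD_DEFAULT};
    MQGMO  gmo = {MQGMO_DEFAULT};
    MQLONG compCode, reason, msgLen;
    int    inBatch = 0;
    char   buffer[4096];

    gmo.Options = MQGMO_SYNCPOINT | MQGMO_NO_WAIT | MQGMO_FAIL_IF_QUIESCING;
    do {
        memcpy(md.MsgId,    MQMI_NONE, sizeof(md.MsgId));
        memcpy(md.CorrelId, MQCI_NONE, sizeof(md.CorrelId));
        MQGET(hConn, hObj, &md, &gmo, sizeof(buffer), buffer,
              &msgLen, &compCode, &reason);
        if (compCode == MQCC_OK) {
            /* process the message; on bad data, MQBACK(hConn, &compCode,
               &reason) would return the whole uncommitted batch to the queue */
            if (++inBatch == BATCH_SIZE) {
                MQCMIT(hConn, &compCode, &reason);   /* one commit per batch */
                inBatch = 0;
            }
        }
    } while (compCode == MQCC_OK);
    if (inBatch > 0)
        MQCMIT(hConn, &compCode, &reason);           /* commit the tail */
}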

Quote:
This is sort of like asking for multiple messages at once, but you don't have to code your application differently. It's a lower QoS so can only be used with non-persistent messages.

Doesn't the batch feature apply to Sender/Receiver channels? Do we provide a lower QoS for persistent messages in the case of Sender/Receiver channels?

Regards
Saurabh
exerk
PostPosted: Thu Jul 04, 2019 8:09 am

Jedi Council

Joined: 02 Nov 2006
Posts: 6339

saurabh25281 wrote:
...We can decide on an acceptable number of messages that does not overrun the RAM by any measure. And the messages in RAM would still be on disk until there is a commit request from the client application, so there is no risk of losing messages...

And just how long do you think it will take to roll back in the event of failure, bearing in mind your current latency? And then you have to do it all over again.
_________________
It's puzzling, I don't think I've ever seen anything quite like this before...and it's hard to soar like an eagle when you're surrounded by turkeys.
bruce2359
PostPosted: Thu Jul 04, 2019 8:19 am

Poobah

Joined: 05 Jan 2008
Posts: 9394
Location: US: west coast, almost. Otherwise, enroute.

saurabh25281 wrote:
The DCs in question are located on different continents (4,600 miles apart), and from sources in the public domain I would assume the ping represents realistic response times. Correct me if I am wrong on this.

Default PING packet size is 32 bytes on Windows. How big is the MQ message you are sending? 32 bytes? Bigger?

Type "ping -s" and press enter. Windows users will need to use "-l" instead of "-s." The default packet size is 56 bytes for Linux and Mac pings, and 32 bytes in Windows. The actual packet size will be slightly larger than what you enter due to the addition of the ICMP header information attached to the ping.

I tried ping 192.168.100.1
It returned 1ms response time - from my local router.

Try: ping 192.168.100.1 -l 6400
3ms average response.

Replace my local router IP address with the IP address of your remote network. What's the response time with packet size 6400?

Try: ping 192.168.100.1 -l 64000
I get 11ms

Try: ping 192.168.100.1 -l 65500
The max size is 65500. I get 12ms response.

For fun, try: ping www.amazon.com -l 65500

Quote:
Pinging d3ag4hukkh62yn.cloudfront.net [13.32.110.194] with 65500 bytes of data:
Reply from 13.32.110.194: bytes=65500 time=94ms TTL=239
Reply from 13.32.110.194: bytes=65500 time=87ms TTL=239
Reply from 13.32.110.194: bytes=65500 time=85ms TTL=239
Reply from 13.32.110.194: bytes=65500 time=84ms TTL=239

Ping statistics for 13.32.110.194:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 84ms, Maximum = 94ms, Average = 87ms


For more fun, ping www.telegraph.co.uk -l 65500
That's about 5,200 miles away.
Quote:
Pinging e8153.j.akamaiedge.net [23.65.45.156] with 65500 bytes of data:
Reply from 23.65.45.156: bytes=65500 time=84ms TTL=55
Reply from 23.65.45.156: bytes=65500 time=96ms TTL=55
Reply from 23.65.45.156: bytes=65500 time=87ms TTL=55
Reply from 23.65.45.156: bytes=65500 time=85ms TTL=55

Ping statistics for 23.65.45.156:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 84ms, Maximum = 96ms, Average = 88ms

So, how big are your messages?

Size matters.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
PeterPotkay
PostPosted: Thu Jul 04, 2019 4:23 pm

Poobah

Joined: 15 May 2001
Posts: 7717

Vitor wrote:

Don't forget that MQPUT1 is less efficient than MQPUT, as it connects, opens, puts, closes and disconnects for each put.


MQPUT1 is only open+put+close. Connect and disconnect are not included.
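For comparison, a minimal C sketch of the two forms (the reply queue name is a placeholder; a real application would use one form or the other, not both back to back):
Code:
#include <string.h>
#include <cmqc.h>

void put_one(MQHCONN hConn, char *buffer, MQLONG msgLen)
{
    MQOD   od  = {MQOD_DEFAULT};
    MQMD   md  = {MQMD_DEFAULT};
    MQPMO  pmo = {MQPMO_DEFAULT};
    MQHOBJ hObj;
    MQLONG compCode, reason;

    strncpy(od.ObjectName, "REPLY.QUEUE", MQ_Q_NAME_LENGTH);  /* placeholder */

    /* Form 1: explicit open + put + close */
    MQOPEN(hConn, &od, MQOO_OUTPUT | MQOO_FAIL_IF_QUIESCING,
           &hObj, &compCode, &reason);
    MQPUT(hConn, hObj, &md, &pmo, msgLen, buffer, &compCode, &reason);
    MQCLOSE(hConn, &hObj, MQCO_NONE, &compCode, &reason);

    /* Form 2: MQPUT1 rolls the same open + put + close into one call;
       no MQCONN or MQDISC is involved either way */
    MQPUT1(hConn, &od, &md, &pmo, msgLen, buffer, &compCode, &reason);
}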
_________________
Peter Potkay
Keep Calm and MQ On
gbaddeley
PostPosted: Thu Jul 04, 2019 4:29 pm

Jedi

Joined: 25 Mar 2003
Posts: 2492
Location: Melbourne, Australia

PeterPotkay wrote:
Vitor wrote:

Don't forget that MQPUT1 is less efficient than MQPUT, as it connects, opens, puts, closes and disconnects for each put.
MQPUT1 is only open+put+close. Connect and disconnect are not included.

Perhaps Vitor was invoking the nightmare of some app code that does conn / disc for every message...

Quote:
The DCs in question are located across continents (4600 miles apart) and the ping represents realistic response times

Note that ping only shows response times in the NICs and network path; it does not include the network software and TCP layers on the servers.
Do your links go through a VPN or other tunnelling across the public internet?
_________________
Glenn
fjb_saper
PostPosted: Thu Jul 04, 2019 8:20 pm

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20696
Location: LI,NY

saurabh25281 wrote:
Hi All,

The DCs in question are located on different continents (4,600 miles apart), and from sources in the public domain I would assume the ping represents realistic response times. Correct me if I am wrong on this.


Well, if you are looking at that kind of distance, with the disruption of noise on the line, you should really be looking at alternatives to TCP/UDP.
Faspex comes to mind. Look at ASPERA. (No interest or participation in the before-named company)...
_________________
MQ & Broker admin


Last edited by fjb_saper on Thu Jul 04, 2019 9:57 pm; edited 1 time in total
hughson
PostPosted: Thu Jul 04, 2019 8:57 pm

Padawan

Joined: 09 May 2013
Posts: 1914
Location: Bay of Plenty, New Zealand

saurabh25281 wrote:
hughson wrote:
This is sort of like asking for multiple messages at once, but you don't have to code your application differently. It's a lower QoS, so it can only be used with non-persistent messages.

Doesn't the batch feature apply to Sender/Receiver channels? Do we provide a lower QoS for persistent messages in the case of Sender/Receiver channels?

I was actually referring to the Client Read Ahead feature when I said this. That is a feature for client-connected applications, not sender/receiver channels. I had not realised you were talking about sender/receiver channels in your original question.

With respect to channel batches: no, the lower QoS on sender/receiver channels only applies to non-persistent messages - see NPMSPEED.

Cheers,
Morag
_________________
Morag Hughson @MoragHughson
IBM MQ Technical Education Specialist
Get your IBM MQ training here!
MQGem Software
saurabh25281
PostPosted: Thu Jul 04, 2019 10:37 pm

Centurion

Joined: 05 Nov 2006
Posts: 107
Location: Bangalore

Quote:

exerk wrote:
And just how long do you think it will take to roll back in the event of failure, bearing in mind your current latency? And then you have to do it all over again.

I would assume this to be in milliseconds, since the messages will still be on the server's disk until a commit or rollback request is sent by the client.

Quote:
bruce wrote:
Try: ping 192.168.100.1 -l 65500

Thanks for the info. The response for 65500 bytes was 128ms, which is only 8ms more than the response for the 64-byte ping.

This is evidence that our network bandwidth is not fully utilized. We could have packed a lot more messages onto our network, and improved our performance, if such a feature were available in MQ.

TBH, this is a very basic feature that one expects from a mature product like MQ.