Maximum Queue Depth
SAFraser
PostPosted: Thu Jul 28, 2011 3:10 pm    Post subject: Maximum Queue Depth

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

We are fighting with developers.

Current configuration:
Queue maxdepth = 400,000
App max enqueue rate = 160,000/minute
App max dequeue rate = 1,400/minute
Avg message size = 1K

Bad design? Uh, yeah. Using MQ as a storage facility? Sure thing.

The app team has a special run coming up where they want us to store 1.4 million messages in this queue.

We said, "That's a bad idea."
They said, "Why?"
We said, "Well, we are not a data storage facility. What if the queue gets corrupted?"
They said, "We will send the messages again."

We have the disk space for the queue itself. We are using circular logs. In production, we have a good size and number of logs. However, it is a busy queue manager and the logs can overwrite in a matter of hours in typical operation. In development, log numbers and sizes are much more limited.

Also, I've never seen enqueue rates this high. We saw a CPU spike in development, but no other apparent impact on system resources. The logs overwrote themselves within the same minute, though.

Two questions:

1. Should we be excited about this enqueue rate of 160,000/minute? We've seen posts about rates much higher than this, but we've never had such rates at our site before. What should we watch for?

2. What are the specific dangers of a 1.4 million maxdepth?
-- Disk space shortage in /var/mqm/qmgrs
-- Circular logs running out of space because they can't be overwritten in time
-- Queue corruption (?)
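
(For scale, the back-of-the-envelope arithmetic on the mismatch; a throwaway sketch using the rates quoted above:)

Code:

public class QueueMath {
    public static void main(String[] args) {
        double enqPerMin = 160_000;   // app max enqueue rate, per minute
        double deqPerMin = 1_400;     // app max dequeue rate, per minute
        double totalMsgs = 1_400_000; // special-run volume
        double avgMsgKB  = 1.0;       // average message size

        // Producers finish loading in under nine minutes...
        System.out.printf("Fill time:   %.1f min%n", totalMsgs / enqPerMin);
        // ...but draining takes the better part of a day.
        System.out.printf("Drain time:  %.1f hours%n", totalMsgs / deqPerMin / 60);
        // Peak q file size is modest: well under 2 GB of message data.
        System.out.printf("Data volume: %.2f GB%n", totalMsgs * avgMsgKB / 1024 / 1024);
    }
}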

I think they will propose we resize our logs to compensate for their very bad design. Your thoughts will be appreciated.
PeterPotkay
PostPosted: Thu Jul 28, 2011 4:27 pm

Poobah

Joined: 15 May 2001
Posts: 7716

Unless they put all of the messages in one monster UOW, and given that you are using circular logs, you can put billions of messages through a queue and let it build up to millions and millions and you will be fine, as long as you have the disk space for the q file to get that big.

Quote:

1. Should we be excited about this enqueue rate of 160,000/minute? We've seen posts about rates much higher than this, but we've never had such rates at our site before. What should we watch for?

I'm impressed. Actually, scratch that. I'm doubtful. That's nearly 3,000 messages a second. Did you actually witness this?

Quote:

2. What are the specific dangers of a 1.4 million maxdepth?
-- Disk space shortage in /var/mqm/qmgrs
-- Circular logs insufficient space and can't be overwritten
-- Queue corruption (?)


-- Only if the q file gets so big it fills up all the space, or exceeds the OS's maximum allowed file size. On Unix, hope you have the largefiles option enabled.

-- Circular log space will only be an issue if this is one giant UOW. Otherwise the circular log will be circular, and even persistent messages will just overwrite the logs in a circular fashion.

-- Q corruption? Why? 1.4 million messages is not a lot of messages for MQ.


Quote:

Bad design? Uh, yeah. Using MQ as a storage facility? Sure thing.

They are using MQSeries for EXACTLY what it was designed for: to queue messages when the producer outpaces the consumer. They are not talking about leaving them there for any length of time; they are going to consume them as fast as they can, right away. I don't understand your concern, Shirley.


Make sure you have the disk space, have QPASA alert you when the queue hits 80% full, and sit back and enjoy MQ doing its thing. Post the actual enqueue rates once this happens.
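
(If QPASA isn't watching that particular queue, a roll-your-own 80% check is easy enough. An untested sketch with the MQ classes for Java; the queue manager and queue names are made up:)

Code:

import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class DepthCheck {
    public static void main(String[] args) throws Exception {
        // Bindings-mode connection to a local queue manager (name assumed).
        MQQueueManager qmgr = new MQQueueManager("QM1");
        MQQueue q = qmgr.accessQueue("APP.INPUT.QUEUE", CMQC.MQOO_INQUIRE);

        int depth = q.getCurrentDepth();
        int max   = q.getMaximumDepth();
        q.close();
        qmgr.disconnect();

        // Alert at 80% of maxdepth, as suggested above.
        if (depth >= max * 0.8) {
            System.out.printf("WARNING: %,d of %,d (%.0f%% full)%n",
                    depth, max, 100.0 * depth / max);
        }
    }
}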
_________________
Peter Potkay
Keep Calm and MQ On
bruce2359
PostPosted: Thu Jul 28, 2011 5:31 pm

Poobah

Joined: 05 Jan 2008
Posts: 9412
Location: US: west coast, almost. Otherwise, enroute.

I understand her concern. It's new, it's different, people will be watching (hovering and perching), she's a sysadmin and she wants it all to work. Way to go, Shirley!

1.4 million 1K messages is a trivial amount of disk space. I doubt the sum total of all the messages in a queue will approach anywhere near max-file-size for UNIX - whatever your flavor.

If the consumer app fails for some reason, 400,000 maxdepth may be insufficient. Increase to the real max for the application - 1.4 million + fudge-factor.

Overwriting a log segment over a matter of hours is not an issue. Within a minute, maybe.

If this is an infrequent/rare occurrence, add secondary logs.
_________________
I like deadlines. I like to wave as they pass by.
ב''ה
Lex Orandi, Lex Credendi, Lex Vivendi. As we Worship, So we Believe, So we Live.
fjb_saper
PostPosted: Thu Jul 28, 2011 7:54 pm

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20702
Location: LI,NY

And ask about the scaling of the consuming application.
Can it run in parallel?
Can you have multiple instances processing?
Can you have it running on multiple servers, multiple instances processing via client channel?
Make sure the consuming app uses syncpoint on all gets (enhances performance).
Make sure the consuming app has no message affinity.

Hope this helps with the throughput of the consumer, too.
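
(Something like this for the get side, if it helps them picture it. A rough, untested sketch with the MQ classes for Java; the queue manager and queue names are made up and real error handling is omitted:)

Code:

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class Consumer {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");   // assumed name
        MQQueue q = qmgr.accessQueue("APP.INPUT.QUEUE",
                CMQC.MQOO_INPUT_SHARED);                   // many instances, one queue

        MQGetMessageOptions gmo = new MQGetMessageOptions();
        // Syncpoint batches the log activity; wait avoids spinning on an empty queue.
        gmo.options = CMQC.MQGMO_SYNCPOINT | CMQC.MQGMO_WAIT
                | CMQC.MQGMO_FAIL_IF_QUIESCING;
        gmo.waitInterval = 5000;                           // ms

        while (true) {
            MQMessage msg = new MQMessage();
            try {
                q.get(msg, gmo);   // no MsgId/CorrelId selection = no message affinity
                // ... DB lookups and updates go here ...
                qmgr.commit();
            } catch (MQException e) {
                if (e.reasonCode == CMQC.MQRC_NO_MSG_AVAILABLE) break;  // drained
                qmgr.backout();
                throw e;
            }
        }
        q.close();
        qmgr.disconnect();
    }
}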

You go dazzle them!!
_________________
MQ & Broker admin
gbaddeley
PostPosted: Thu Jul 28, 2011 8:10 pm

Jedi Knight

Joined: 25 Mar 2003
Posts: 2502
Location: Melbourne, Australia

Is this going to happen in the production environment?

It might be a good idea to do a stress & volume test run in a development or test environment which has similar set up and capacity, to prove MQ can handle it, and identify resource limitations. The most likely performance bottlenecks will be in the application programs.

Ensure they are not using 'trigger every', or get by msgid or correlid.
_________________
Glenn
Vitor
PostPosted: Fri Jul 29, 2011 4:03 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

I fully endorse the comments of my most worthy associates above.

I too understand the concern, and the amount of hovering likely to be taking place. I recommend a large cooler of ice filled with trout.
_________________
Honesty is the best policy.
Insanity is the best defence.
SAFraser
PostPosted: Mon Aug 01, 2011 11:59 am

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

Gentlemen,

Thank you for your thoughtful replies as well as your empathy.

This is indeed a special run (occurs twice a year). Due to recent increases in our client base, the record count is much higher than previous years. Extra instances of the consuming application will be started, but even so there is an enormous mismatch in enqueue vs. dequeue rates.

Our storage is SAN based, using the Veritas File System. largefiles is enabled by default (though the q file will not exceed 2 GB for this run).

We are testing in a lower environment now, and we hope to move to stress test tomorrow for a full run.

Enqueue rates per minute from this morning's tests:
avg 107,143; max 170,688
avg 75,000; max 156,395
avg 107,172; max 151,216

If disaster befalls us, you'll be among the first to know! Thanks again.
PeterPotkay
PostPosted: Mon Aug 08, 2011 1:58 pm

Poobah

Joined: 15 May 2001
Posts: 7716

So, how did it go?

Those enqueue rates were per minute? Message size was ~1K?
_________________
Peter Potkay
Keep Calm and MQ On
SAFraser
PostPosted: Mon Aug 08, 2011 6:48 pm

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

Hey Peter, thanks for asking!

We ran a test this weekend in our Stress Test environment. We increased the queue manager logs so they are roughly the size of those in production.

The test team put 1.02 million messages into a single input queue. The enqueue rate averaged 157,000 per minute, with a peak of 183,000 per minute.

I swear, we saw it with our own eyes.

The queue manager did not even use all of its primary logs during the put operation.

This load test was not a perfect reflection of production. During the stress test, absolutely nothing else was running. In real production, while many components will be idle, there will still be some other message traffic.

However, it was close enough for government work. And this is, after all, government work.

Thanks, everyone, for all your advice!
skoobee
PostPosted: Mon Aug 08, 2011 10:56 pm

Acolyte

Joined: 26 Nov 2010
Posts: 52

SAFraser wrote:

The queue manager did not even use all of its primary logs during the put operation.

This does not surprise me. With such a high enqueue rate, up to 3,000 a second, these must be non-persistent msgs, which do not get recorded in the logs.

Persistent message enqueue rates typically peak at about 300/second.
zpat
PostPosted: Mon Aug 08, 2011 11:46 pm

Jedi Council

Joined: 19 May 2001
Posts: 5852
Location: UK

I did some recent tests and was seeing around 900 persistent messages (of 22kb average size) per second on a single queue manager. Using SAN.
SAFraser
PostPosted: Tue Aug 09, 2011 3:32 am

Shaman

Joined: 22 Oct 2003
Posts: 742
Location: Austin, Texas, USA

skoobee wrote:

This does not surprise me. With such a high enqueue rate, up to 3,000 a second, these must be non-persistent msgs, which do not get recorded in the logs.

Persistent message enqueue rates typically peak at about 300/second.


Our messages are, in fact, persistent. Thus our concern about disk space and log levels. During each of our tests we monitored the transaction logs, the queue manager logs, system resources, and enqueue rates to assess the impact of such a high enqueue rate. And remember, these are very small 1K messages.

zpat wrote:

I did some recent tests and was seeing around 900 persistent messages (of 22kb average size) per second on a single queue manager. Using SAN.

We have a shiny new SAN and we've been very happy so far. Not even a bump in the I/O stats during the test. In fact, looking at all the system resources on the MQ server, we couldn't even tell when the test started.

On the topic of the message consumer.... the app team also tested that this past weekend. The consumer is a Java-based batch job (JVM created at a command line). The consumer does some heavy lifting in terms of database inquiry & updates; and, it is slow as molasses. (Which, for my international friends, is really slow.)

Anyway, after dumping 1.02 million messages into the input queue, they started 40 JVMs, gradually increasing to 100 JVMs, to consume the input. We were curious whether they would reach a critical mass of listeners on the queue that caused performance to decline, but we saw no such impact. (Anyone ever face the situation of too many ipprocs on a queue? Does performance slow down when the listener count gets too high?)

Strangely, they got their dequeue rate to about 700/minute starting around midnight, then at 9:00 AM it plummeted to 400. It took about 10 hours for it to creep back up to 700. Nothing on the MQ server reflected this change in rate.

I pointed this out to the development staff who did not find it as interesting as I did. Huh.
Vitor
PostPosted: Tue Aug 09, 2011 4:18 am

Grand High Poobah

Joined: 11 Nov 2005
Posts: 26093
Location: Texas, USA

SAFraser wrote:
The consumer is a Java-based batch job (JVM created at a command line). The consumer does some heavy lifting in terms of database inquiry & updates; and, it is slow as molasses. (Which, for my international friends, is really slow.)


Humph. Java. Bah humbug.

SAFraser wrote:
Strangely, they got their dequeue rate to about 700/minute starting around midnight, then at 9:00 AM it plummeted to 400. It took about 10 hours for it to creep back up to 700. Nothing on the MQ server reflected this change in rate.


Probably got their Singletons tangled or something.

SAFraser wrote:
I pointed this out to the development staff who did not find it as interesting as I did. Huh.


Sounds like developers. One little dip in queue manager performance and they hire billboards. The application runs like a snail on Valium & it's a statistical fluke.

But I congratulate you on the performance of your queue manager.
_________________
Honesty is the best policy.
Insanity is the best defence.
PeterPotkay
PostPosted: Tue Aug 09, 2011 1:10 pm

Poobah

Joined: 15 May 2001
Posts: 7716

SAFraser wrote:

The queue manager did not even use all of its primary logs during the put operation.

How did you determine that? With circular logs and regular MQCMITs, you can put terabytes and terabytes of data through a queue. The QM will just keep using the same logs in a circular fashion.

But it's probably a moot point, because unless your logs were tiny, 1 million messages at 1K each could fit across just a few logs even if they were all in one gigantic unit of work.

Don't forget that in a shared environment with multiple apps, one app pumping millions of messages and committing regularly has to share the same logs with the one app that puts a single tiny non-persistent message under syncpoint and never commits it, so the QM keeps the checkpoint active. Meanwhile the logs roll forward and forward as the good app keeps pumping data and committing its transactions. Eventually the QM comes full circle back to that original checkpoint for the dopey app that is not committing its one message.
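
(That anti-pattern in miniature. A contrived, untested sketch with the MQ classes for Java; the names are invented, and obviously don't run this anywhere you care about:)

Code:

import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;

public class CheckpointPinner {
    public static void main(String[] args) throws Exception {
        MQQueueManager qmgr = new MQQueueManager("QM1");      // assumed name
        MQQueue q = qmgr.accessQueue("SOME.QUEUE", CMQC.MQOO_OUTPUT);

        MQPutMessageOptions pmo = new MQPutMessageOptions();
        pmo.options = CMQC.MQPMO_SYNCPOINT;   // put inside a unit of work...

        MQMessage msg = new MQMessage();
        msg.writeString("one tiny message");
        q.put(msg, pmo);

        // ...and never commit. The UOW stays active, the checkpoint stays
        // anchored, and the circular logs cannot be reused past it no matter
        // how promptly everyone else commits.
        Thread.sleep(Long.MAX_VALUE);
    }
}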


Anyhoo, those are some impressive enqueue numbers. I'm assuming it's a local app connecting in bindings mode and not client? Maybe even FASTPATH? Are the consumers local too, or maybe client?

SAFraser wrote:

Our messages are, in fact, persistent. Thus our concern about disk space and log levels.

When dealing with large volumes of messages where the producer will outpace the consumer(s), your concern about disk space and log size is exactly the same whether the messages are persistent or not. Persistent or not, eventually the messages will spill over to disk when the queue gets too deep. Persistent or not, the logs will be used if the messages are put or got under syncpoint.
_________________
Peter Potkay
Keep Calm and MQ On
fjb_saper
PostPosted: Tue Aug 09, 2011 2:36 pm

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20702
Location: LI,NY

SAFraser wrote:
Strangely, they got their dequeue rate to about 700/minute starting around midnight, then at 9:00 AM it plummeted to 400. It took about 10 hours for it to creep back up to 700. Nothing on the MQ server reflected this change in rate.

I pointed this out to the development staff who did not find it as interesting as I did. Huh.

I doubt very much that it has anything to do with Java or the client.
I guess it has more to do with the DB processing the client needs to do.

There may also be a magic number of clients beyond which the performance drops... I've seen that happen on DBs. Up to 500 clients you are fine. Go with 600 and the performance decreases...

So if you are looking at maximizing the throughput, they will need to take a critical look at the consuming application and see what it does on the DB and what the magic number of clients is before performance deterioration...

Have fun
_________________
MQ & Broker admin