MQSeries.net Forum Index » Clustering » AMQ9500 - No Repository storage

Jeff.VT
PostPosted: Wed Dec 12, 2018 10:52 am    Post subject: AMQ9500 - No Repository storage

I have a case open with IBM, but while I wait for them I thought I'd ask you guys if you've seen this before.

I honestly can find almost nothing on the internet about this issue.

I'm getting one of these every 5-10 seconds on both of my full repositories, and on one of my partial repositories...

--------------------------

12/7/2018 10:09:00 - Process(3136.25) User(USER) Program(amqzlaa0.exe)
Host(HOSTNAME) Installation(INSTALLATION)
VRMF(9.0.5.0) QMgr(QMGRNAME)
Time(2018-12-07T16:09:00.867Z)
RemoteHost(IP)
ArithInsert1(4)
CommentInsert1(Pid(3136) RepositoryType(1), UsedBlocks(256))

No Repository storage

An operation failed because there was no storage available in the repository. An attempt was made to allocate 4 bytes from Pid(3136) RepositoryType(1), UsedBlocks(256).

Reconfigure the Queue Manager to allocate a larger repository.

-----------------------

It's been happening since 12/7, when I migrated our production outbound connectivity from Alias/Remote queues to Cluster queues.

It didn't happen on my test servers, but to be fair, traffic in test is orders of magnitude lower than in production.

The queue managers it's happening on are my big message producers.

-------------

Some details... maybe 1-5 million messages a day. The clustered alias queues are set up for "On Group" (DEFBIND(GROUP)). The cluster spans a WAN around the world, but the connections are strong and stable. Windows Server 2012, latest updates.
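For reference, the clustered aliases look roughly like this (names changed; this is from memory, not a DIS QALIAS dump):

Code:
DEFINE QALIAS('APP.OUTBOUND.ALIAS') +
       TARGET('APP.OUTBOUND') +
       CLUSTER('MYCLUSTER') +
       DEFBIND(GROUP) +
       REPLACE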

I don't SEE any missing messages, but it's hard to tell.

The only reference I can find online suggests restarting the queue manager to resolve it. But we've done Windows patching since this began, so the queue managers have already been restarted - and the errors continue.

I'm also reluctant to restart the full repository queue managers: they're my main QMs, all hell will break loose as they bounce/fail over, and it would require me to put in a change during a change freeze...

so... any ideas?

Thanks for any help you can offer. I'm quite out of ideas.



Jeff.VT
PostPosted: Wed Dec 12, 2018 10:56 am

None of the system queues are remotely close to full.

Memory and CPU aren't maxed out.

The clustered LUN storage has plenty of space (though I will say the log on the full repositories is 15 GB, which is much higher than I would have expected - it's only 1 GB on the partials).

gbaddeley
PostPosted: Wed Dec 12, 2018 3:05 pm

Try starting MQ trace on amqzlaa0.exe and then stopping it, say, 30 seconds later. The trace will probably indicate what the processing is doing in the lead-up to the error log. It's interesting that it does not have an AMQnnnn error number.
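Something like this should do it (flags from memory - check the strmqtrc syntax for your version, especially -p for limiting the trace to a named process):

Code:
strmqtrc -m QMGRNAME -t detail -t all -p amqzlaa0.exe
rem reproduce for ~30 seconds, then:
endmqtrc -m QMGRNAME

On Windows the .TRC files in the trace directory should already be formatted, so no dspmqtrc step is needed.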

fjb_saper
PostPosted: Thu Dec 13, 2018 5:50 am

The other thing you might start contemplating:
Create two more queue managers, one on each of your FR servers.
Make those the only full repositories in your cluster.
An FR should not host any queues; its sole use should be to hold the cluster's information.
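Roughly, on each new queue manager (names and port are only examples):

Code:
* on NEWFR1 (FR server 1); mirror the definitions on NEWFR2
ALTER QMGR REPOS('MYCLUSTER')
DEFINE CHANNEL('MYCLUSTER.NEWFR1') CHLTYPE(CLUSRCVR) TRPTYPE(TCP) +
       CONNAME('fr1host(1415)') CLUSTER('MYCLUSTER')
DEFINE CHANNEL('MYCLUSTER.NEWFR2') CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('fr2host(1415)') CLUSTER('MYCLUSTER')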

Had you done that in the beginning, you would not be in such a bind trying to recycle the FRs.

Also, what is the MQ version of the FRs vs. the PRs (full V.R.M.F)?

You did not specify the logging characteristics of your full repositories.
Do they have enough logging space?

Hope this helps

Jeff.VT
PostPosted: Thu Dec 13, 2018 6:16 am

fjb_saper wrote:
The other thing you might start contemplating:
Create two more queue managers, one on each of your FR servers.
Make those the only full repositories in your cluster.
An FR should not host any queues; its sole use should be to hold the cluster's information.

Had you done that in the beginning, you would not be in such a bind trying to recycle the FRs.

Also, what is the MQ version of the FRs vs. the PRs (full V.R.M.F)?

You did not specify the logging characteristics of your full repositories.
Do they have enough logging space?

Hope this helps


Since finding this problem I started exactly this process, but then quickly realized the FRs aren't the only ones getting the errors - all of my partial repositories are too. So I got ready to do it anyway, but put it on hold until I hear back from IBM.

Jeff.VT
PostPosted: Thu Dec 13, 2018 6:20 am

gbaddeley wrote:
Try starting MQ trace on amqzlaa0.exe and then stopping it, say, 30 seconds later. The trace will probably indicate what the processing is doing in the lead-up to the error log. It's interesting that it does not have an AMQnnnn error number.


I'll give this a try.

I slept on it, and something I read in IBM's cluster troubleshooting guide was:

IBM wrote:
Check that, for each partial repository queue manager, you have defined a single cluster-sender channel to one of the full repository queue managers. This channel acts as a "bootstrap" channel through which the partial repository queue manager initially joins the cluster.


1: What would happen if I put both FRs in as hard-coded cluster-sender channels on the PRs?
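i.e. on each PR something like this, where the channel names match the CLUSRCVR names defined on the FRs (names are examples):

Code:
DEFINE CHANNEL('MYCLUSTER.FR1') CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('fr1host(1414)') CLUSTER('MYCLUSTER')
DEFINE CHANNEL('MYCLUSTER.FR2') CHLTYPE(CLUSSDR) TRPTYPE(TCP) +
       CONNAME('fr2host(1414)') CLUSTER('MYCLUSTER')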

2: Would it be a problem if I still had plain (non-cluster) sender channels, used for other tasks, to the same queue managers that are now in the new cluster (because I hadn't gotten around to cleaning them up yet)?

bruce2359
PostPosted: Thu Dec 13, 2018 6:40 am

Jeff.VT wrote:
gbaddeley wrote:
Try starting MQ trace on amqzlaa0.exe and then stopping it, say, 30 seconds later. The trace will probably indicate what the processing is doing in the lead-up to the error log. It's interesting that it does not have an AMQnnnn error number.


I'll give this a try.

I slept on it, and something I read in IBM's cluster troubleshooting guide was:

IBM wrote:
Check that, for each partial repository queue manager, you have defined a single cluster-sender channel to one of the full repository queue managers. This channel acts as a "bootstrap" channel through which the partial repository queue manager initially joins the cluster.


1: What would happen if I put both FRs in as hard-coded cluster-sender channels on the PRs?

2: Would it be a problem if I still had plain (non-cluster) sender channels, used for other tasks, to the same queue managers that are now in the new cluster (because I hadn't gotten around to cleaning them up yet)?

Regarding 1) above: are you saying that you do NOT have manually defined CLUSSDR channels from the PRs to one FR?

Are your cluster channels all in RUNNING state? None in RETRY or STOPPED state?
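A quick way to check from runmqsc - this should return nothing if all is well:

Code:
DISPLAY CHSTATUS(*) WHERE(STATUS NE RUNNING)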

Regarding 2) above: SDR/RCVR channels do not conflict with CLUSSDR/CLUSRCVR channels.

Since you didn't post an official-looking error message from your logs, I shall guess that you saw AMQ9500. Yes?

fjb_saper
PostPosted: Thu Dec 13, 2018 7:08 am

Have you checked if any of the SYSTEM.CLUSTER.* queues is full?
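A quick check, comparing current depth against max:

Code:
DISPLAY QLOCAL(SYSTEM.CLUSTER.*) CURDEPTH MAXDEPTH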

Jeff.VT
PostPosted: Thu Dec 13, 2018 7:30 am

Quote:
Regarding 1) above: are you saying that you do NOT have manually defined CLUSSDR channels from the PRs to one FR?


The opposite - I have both FRs manually defined in all my PRs.

Quote:
Are your cluster channels all in RUNNING state? None in RETRY or STOPPED state?


All are in RUNNING.

Quote:
Since you didn't post an official-looking error message from your logs, I shall guess that you saw AMQ9500. Yes?


I posted the only error I got - yeah, it just says 9500 - I'm on Windows...

I just did this trace and got:

Quote:
001CC699 08:35:18.838465 3136.25 CONN:000032 ---{ zlaProcessMessage
001CC69A 08:35:18.838474 3136.25 CONN:000032 Corresponding APPLICATION pid.tid (1640.527)
001CC69B 08:35:18.838489 3136.25 CONN:000032 ----{ zlaProcessMQIRequest
001CC69C 08:35:18.838498 3136.25 CONN:000032 -----{ zlaMQPUT
001CC69D 08:35:18.838504 3136.25 CONN:000032 ------{ zcpCreateMessage
001CC69E 08:35:18.838511 3136.25 CONN:000032 ------} zcpCreateMessage (rc=OK)
001CC69F 08:35:18.838518 3136.25 CONN:000032 ------{ zsqMQPUT
001CC6A0 08:35:18.838524 3136.25 CONN:000032 -------{ zsqVerMsgDescForPut
001CC6A1 08:35:18.838531 3136.25 CONN:000032 -------} zsqVerMsgDescForPut (rc=OK)
001CC6A2 08:35:18.838537 3136.25 CONN:000032 -------{ zsqVerOptForPutPut1
001CC6A3 08:35:18.838543 3136.25 CONN:000032 -------} zsqVerOptForPutPut1 (rc=OK)
001CC6A4 08:35:18.838550 3136.25 CONN:000032 -------{ zsqSetKernelPutParams
001CC6A5 08:35:18.838557 3136.25 CONN:000032 -------} zsqSetKernelPutParams (rc=OK)
001CC6A6 08:35:18.838563 3136.25 CONN:000032 -------{ kpiMQPUT
001CC6A7 08:35:18.838571 3136.25 CONN:000032 QMgrName (<QMNameRedacted> ), QName (<ClusterQueueNameRedacted> )
001CC6A8 08:35:18.838582 3136.25 CONN:000032 --------{ kqiPutIt
001CC6A9 08:35:18.838588 3136.25 CONN:000032 ---------{ kqiVerOptForPut
001CC6AA 08:35:18.838594 3136.25 CONN:000032 ---------} kqiVerOptForPut (rc=OK)
001CC6AB 08:35:18.838601 3136.25 CONN:000032 ---------{ apiSyncPointCheck
001CC6AC 08:35:18.838607 3136.25 CONN:000032 ----------{ atmSyncPointCheck
001CC6AD 08:35:18.838613 3136.25 CONN:000032 ----------} atmSyncPointCheck (rc=OK)
001CC6AE 08:35:18.838620 3136.25 CONN:000032 ---------} apiSyncPointCheck (rc=OK)
001CC6AF 08:35:18.838626 3136.25 CONN:000032 ---------{ kqiInitForPutPutList
001CC6B0 08:35:18.838632 3136.25 CONN:000032 ----------{ xcsQueryMTimeFn
001CC6B1 08:35:18.838639 3136.25 CONN:000032 ----------} xcsQueryMTimeFn (rc=OK)
001CC6B2 08:35:18.838651 3136.25 CONN:000032 ----------{ kqiSetContext
001CC6B3 08:35:18.838659 3136.25 CONN:000032 ----------} kqiSetContext (rc=OK)
001CC6B4 08:35:18.838669 3136.25 CONN:000032 ----------{ kqiResolveReplyQ
001CC6B5 08:35:18.838677 3136.25 CONN:000032 ----------} kqiResolveReplyQ (rc=OK)
001CC6B6 08:35:18.838683 3136.25 CONN:000032 ---------} kqiInitForPutPutList (rc=OK)
001CC6B7 08:35:18.838689 3136.25 CONN:000032 ---------{ kqiVerMsgForPutPutList
001CC6B8 08:35:18.838696 3136.25 CONN:000032 ---------} kqiVerMsgForPutPutList (rc=OK)
001CC6B9 08:35:18.838702 3136.25 CONN:000032 ---------{ kqiSetMsgID
001CC6BA 08:35:18.838709 3136.25 CONN:000032 Data: 0x5c08adce 0xeec8b226
001CC6BB 08:35:18.838721 3136.25 CONN:000032 ---------} kqiSetMsgID (rc=OK)
001CC6BC 08:35:18.838727 3136.25 CONN:000032 ---------{ kqiQPathCheckForPut
001CC6BD 08:35:18.838733 3136.25 CONN:000032 ---------} kqiQPathCheckForPut (rc=OK)
001CC6BE 08:35:18.838739 3136.25 CONN:000032 ---------{ kqiGroupResolve
001CC6BF 08:35:18.838747 3136.25 CONN:000032 *pUnsetXmitQ(1) doResolve(1)
001CC6C0 08:35:18.838757 3136.25 CONN:000032 ---------}! kqiGroupResolve (rc=Unknown(1))
001CC6C1 08:35:18.838773 3136.25 CONN:000032 ---------{ kqiFastnetSetResolvedQ
001CC6C2 08:35:18.838780 3136.25 CONN:000032 ----------{ kqiFastnetChooseQueue
001CC6C3 08:35:18.838786 3136.25 CONN:000032 -----------{ kqiFastnetChooseQueue2
001CC6C4 08:35:18.838794 3136.25 CONN:000032 ChooseFlags:0x0
001CC6C5 08:35:18.838805 3136.25 CONN:000032 ------------{ rfxChooseQ
001CC6C6 08:35:18.838812 3136.25 CONN:000032 -------------{ rfxQueryQCLQMGR
001CC6C7 08:35:18.838820 3136.25 CONN:000032 --------------{ zstHashCalculate
001CC6C8 08:35:18.838826 3136.25 CONN:000032 --------------} zstHashCalculate (rc=OK)
001CC6C9 08:35:18.838833 3136.25 CONN:000032 --------------{ rfxLINK
001CC6CA 08:35:18.838844 3136.25 CONN:000032 --------------} rfxLINK (rc=OK)
001CC6CB 08:35:18.838853 3136.25 CONN:000032 --------------{ rfxLINK
001CC6CC 08:35:18.838859 3136.25 CONN:000032 --------------} rfxLINK (rc=OK)
001CC6CD 08:35:18.838865 3136.25 CONN:000032 --------------{ rfxLINK
001CC6CE 08:35:18.838872 3136.25 CONN:000032 --------------} rfxLINK (rc=OK)
001CC6CF 08:35:18.838878 3136.25 CONN:000032 --------------{ rfxLINK
001CC6D0 08:35:18.838884 3136.25 CONN:000032 --------------} rfxLINK (rc=OK)
001CC6D1 08:35:18.838891 3136.25 CONN:000032 --------------{ rfxLINK
001CC6D2 08:35:18.838897 3136.25 CONN:000032 --------------} rfxLINK (rc=OK)
001CC6D3 08:35:18.838903 3136.25 CONN:000032 --------------{ rfxLINK
001CC6D4 08:35:18.838909 3136.25 CONN:000032 --------------} rfxLINK (rc=OK)
001CC6D5 08:35:18.838915 3136.25 CONN:000032 --------------{ rfxLINK
001CC6D6 08:35:18.838921 3136.25 CONN:000032 --------------} rfxLINK (rc=OK)
001CC6D7 08:35:18.838928 3136.25 CONN:000032 --------------{ rfxLINK
001CC6D8 08:35:18.838936 3136.25 CONN:000032 --------------} rfxLINK (rc=OK)
001CC6D9 08:35:18.838949 3136.25 CONN:000032 --------------{ xlsRequestMutex
001CC6DA 08:35:18.838961 3136.25 CONN:000032 MtxName: AMQRFNCA Id: 49
001CC6DB 08:35:18.838973 3136.25 CONN:000032 --------------} xlsRequestMutex (rc=OK)
001CC6DC 08:35:18.838979 3136.25 CONN:000032 --------------{ rfxEnlargeRegistration
001CC6DD 08:35:18.838986 3136.25 CONN:000032 ---------------{ rfiAllocCacheArea
001CC6DE 08:35:18.838993 3136.25 CONN:000032 ----------------{ rfxLINK
001CC6DF 08:35:18.838999 3136.25 CONN:000032 ----------------} rfxLINK (rc=OK)
001CC6E0 08:35:18.839005 3136.25 CONN:000032 ----------------{ rfxLINK

<500 lines of rfxLINK (rc=OK)>

001CC9F2 08:35:18.842695 3136.25 CONN:000032 ----------------} rfxLINK (rc=OK)
001CC9F4 08:35:18.842706 3136.25 CONN:000032 ----------------{ rfxLINK
001CC9F6 08:35:18.842717 3136.25 CONN:000032 ----------------} rfxLINK (rc=OK)
001CC9FB 08:35:18.842735 3136.25 CONN:000032 ----------------{ rrxError
001CC9FE 08:35:18.842751 3136.25 CONN:000032 RetCode = 20009211, rc1 = 0, rc2 = 0, Comment1 = '', Comment2 = '', Comment3= '', File= 'F:\build\slot1\p900_P\src\lib\remote\amqrfica.c', Line= '2368'
001CCA02 08:35:18.842769 3136.25 CONN:000032 ----------------}! rrxError (rc=rrcE_NO_STORAGE)
001CCA06 08:35:18.842785 3136.25 CONN:000032 ----------------{ xcsQueryValueForSubpool
001CCA09 08:35:18.842793 3136.25 CONN:000032 Data:-
001CCA09 08:35:18.842793 3136.25 CONN:000032 0x000000DB 1E72DF20 40 0C 00 00 : @...
001CCA0C 08:35:18.842807 3136.25 CONN:000032 ----------------} xcsQueryValueForSubpool (rc=OK)
001CCA0F 08:35:18.842818 3136.25 CONN:000032 ----------------{ xcsDisplayMessageForError
001CCA12 08:35:18.842827 3136.25 CONN:000032 hpool: 1::0::0-192, subpoolName: <null>, qmgrName: <null>, returncode: 268473600, mtype: 0xF0000002
001CCA17 08:35:18.842839 3136.25 CONN:000032 -----------------{ xcsSetMsgContext
001CCA19 08:35:18.842847 3136.25 CONN:000032 ------------------{ xcsQueryValue
001CCA1F 08:35:18.842867 3136.25 CONN:000032 Data:-
001CCA1F 08:35:18.842867 3136.25 CONN:000032 0x000000DB 1E72CDB0 61 6D 71 7A 6C 61 61 30 2E 65 78 65 : amqzlaa0.exe
001CCA22 08:35:18.842882 3136.25 CONN:000032 ------------------} xcsQueryValue (rc=OK)
001CCA25 08:35:18.842890 3136.25 CONN:000032 ------------------{ xcsUpdateMsgContext
001CCA27 08:35:18.842897 3136.25 CONN:000032 -------------------{ xcsQueryMTimeFn
001CCA2A 08:35:18.842903 3136.25 CONN:000032 -------------------} xcsQueryMTimeFn (rc=OK)
001CCA2D 08:35:18.842910 3136.25 CONN:000032 ------------------} xcsUpdateMsgContext (rc=OK)
001CCA30 08:35:18.842917 3136.25 CONN:000032 -----------------} xcsSetMsgContext (rc=OK)
001CCA32 08:35:18.842924 3136.25 CONN:000032 -----------------{ xcsWriteQmgrLogMessage
001CCA36 08:35:18.842933 3136.25 CONN:000032 local:FALSE msgid:10009500 a1:00000004 a2:00000000 c1:Pid(3136) Repository c2:(null) c3:(null)
001CCA3A 08:35:18.842946 3136.25 CONN:000032 ------------------{ xcsLookupNamedMemBlock
001CCA3D 08:35:18.842955 3136.25 CONN:000032 -------------------{ xcsEnumerateQuickCellBlock
001CCA40 08:35:18.842962 3136.25 CONN:000032 --------------------{ xcsGetMemFn
001CCA5E 08:35:18.843081 3136.25 CONN:000032 component:23 function:366 length:40 options:0 cbmindex:1 *pointer:000000DB225576C0
001CCA64 08:35:18.843101 3136.25 CONN:000032 --------------------} xcsGetMemFn (rc=OK)
001CCA67 08:35:18.843110 3136.25 CONN:000032 -------------------} xcsEnumerateQuickCellBlock (rc=OK)
001CCA69 08:35:18.843117 3136.25 CONN:000032 -------------------{ xcsFreeQuickCellEnumerator
001CCA6B 08:35:18.843125 3136.25 CONN:000032 --------------------{ xcsFreeMemFn
001CCA6E 08:35:18.843132 3136.25 CONN:000032 component:23 pointer:000000DB225576C0
001CCA73 08:35:18.843145 3136.25 CONN:000032 Data: 0x000000db 0x225576c0
001CCA78 08:35:18.843158 3136.25 CONN:000032 cbmindex:1
001CCA7C 08:35:18.843169 3136.25 CONN:000032 --------------------} xcsFreeMemFn (rc=OK)
001CCA7E 08:35:18.843176 3136.25 CONN:000032 -------------------} xcsFreeQuickCellEnumerator (rc=OK)
001CCA80 08:35:18.843183 3136.25 CONN:000032 ------------------} xcsLookupNamedMemBlock (rc=OK)
001CCA82 08:35:18.843191 3136.25 CONN:000032 Data: 0x00000000
001CCA87 08:35:18.843203 3136.25 CONN:000032 ------------------{ xcsAllocateQuickCell
001CCA8A 08:35:18.843212 3136.25 CONN:000032 hqc(1::0::0-1729856)
001CCA8D 08:35:18.843223 3136.25 CONN:000032 ------------------} xcsAllocateQuickCell (rc=OK)
001CCA8F 08:35:18.843231 3136.25 CONN:000032 ------------------{ xlsRequestMutex
001CCA92 08:35:18.843239 3136.25 CONN:000032 MtxName: xeeERRLOG Id: 6
001CCA95 08:35:18.843251 3136.25 CONN:000032 ------------------} xlsRequestMutex (rc=OK)
001CCA98 08:35:18.843258 3136.25 CONN:000032 ------------------{ xlsPostEvent
001CCA9B 08:35:18.843266 3136.25 CONN:000032 EventName: pcommBlock->msgEvent, Id:7 Flags: 0x1
001CCA9F 08:35:18.843280 3136.25 CONN:000032 Data: 0x00002208 0x00000003
001CCAA4 08:35:18.843297 3136.25 CONN:000032 Data: 0x00000009 0x00000910 0x00000001
001CCAAA 08:35:18.843316 3136.25 CONN:000032 ------------------} xlsPostEvent (rc=OK)
001CCAAC 08:35:18.843324 3136.25 CONN:000032 ------------------{ xlsReleaseMutex
001CCAB0 08:35:18.843336 3136.25 CONN:000032 MtxName: xeeERRLOG Id: 6
001CCAB5 08:35:18.843348 3136.25 CONN:000032 ------------------} xlsReleaseMutex (rc=OK)
001CCAB6 08:35:18.843355 3136.25 CONN:000032 -----------------} xcsWriteQmgrLogMessage (rc=OK)
001CCAB8 08:35:18.843362 3136.25 CONN:000032 ----------------} xcsDisplayMessageForError (rc=OK)
001CCABA 08:35:18.843369 3136.25 CONN:000032 ---------------}! rfiAllocCacheArea (rc=rrcE_NO_STORAGE)
001CCABC 08:35:18.843379 3136.25 CONN:000032 --------------}! rfxEnlargeRegistration (rc=rrcE_NO_STORAGE)
001CCABF 08:35:18.843389 3136.25 CONN:000032 --------------{ xlsReleaseMutex
001CCAC1 08:35:18.843396 3136.25 CONN:000032 MtxName: AMQRFNCA Id: 49
001CCAC4 08:35:18.843408 3136.25 CONN:000032 --------------} xlsReleaseMutex (rc=OK)
001CCAC6 08:35:18.843416 3136.25 CONN:000032 --------------{ xlsRequestMutex
001CCAC8 08:35:18.843423 3136.25 CONN:000032 MtxName: AMQRFNCA Id: 49
001CCACB 08:35:18.843434 3136.25 CONN:000032 --------------} xlsRequestMutex (rc=OK)
001CCACD 08:35:18.843441 3136.25 CONN:000032 --------------{ rfxEnlargeRegistration
001CCACF 08:35:18.843448 3136.25 CONN:000032 ---------------{ rfiAllocCacheArea
001CCAD1 08:35:18.843455 3136.25 CONN:000032 ----------------{ rfxLINK
001CCAD4 08:35:18.843462 3136.25 CONN:000032 ----------------} rfxLINK (rc=OK)
001CCAD6 08:35:18.843469 3136.25 CONN:000032 ----------------{ rfxLINK


There are about 500 lines of that rfxLINK (rc=OK) above and below this.

It keeps referencing F:\, but I don't have an F: drive and I have no clue where it's getting that.

My installation is on D:\Program Files
My ProgramData is on C:\ProgramData
and my Queue Manager is on M:\<QMName>\

Jeff.VT
PostPosted: Thu Dec 13, 2018 7:38 am

fjb_saper wrote:
Have you checked if any of the SYSTEM.CLUSTER.* queues is full?


Did I edit the max queue depths for them? Or have they always been 9999999?

Quote:

6 : dis qs(system.cluster.*) where(CURDEPTH gt 0) all
AMQ8450I: Display queue status details.
QUEUE(SYSTEM.CLUSTER.REPOSITORY.QUEUE)
TYPE(QUEUE) CURDEPTH(630)
IPPROCS(1) LGETDATE(2018-12-13)
LGETTIME(09.12.04) LPUTDATE(2018-12-13)
LPUTTIME(09.12.04) MEDIALOG( )
MONQ(LOW) MSGAGE(642934)
OPPROCS(1) QTIME(999999999, 999999999)
UNCOM(NO)

Jeff.VT
PostPosted: Thu Dec 13, 2018 8:06 am

How long does IBM usually take to get to Sev 2 tickets? It's been over 24 hours now.

I don't really know what this is doing to my environment. But after seeing the trace I'm at least fairly confident it's not dropping messages.

bruce2359
PostPosted: Thu Dec 13, 2018 8:13 am

Any OS messages about file system space being exhausted? Inodes?

Did you modify the OS before installing MQ, as required?

Jeff.VT
PostPosted: Thu Dec 13, 2018 9:05 am



I don't see any Windows system errors about lack of resources. That was the first thing I checked. Then I looked at the system queue depths.

[screenshot]

Yes, I know I'm massively over-paying for my licenses... I know... I'm working on it. It's a whole thing.

And it's happening on FRs and PRs alike. It's *NOT* happening on queue managers that don't send to the cluster (the ones that are just destinations the other QMs send to).

Jeff.VT
PostPosted: Thu Dec 13, 2018 11:52 am

Could it be my logging settings?

My queue managers had these settings when I inherited them, and it's just how I've continued to build them.


[screenshot of the logging settings]

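Since the screenshot won't paste here: the settings in question live in the qm.ini Log stanza. The values below are illustrative defaults, not necessarily what mine are set to:

Code:
Log:
   LogPrimaryFiles=3
   LogSecondaryFiles=2
   LogFilePages=4096
   LogType=CIRCULAR
   LogBufferPages=0
   LogPath=M:\QMGRNAME\log\QMGRNAME\
   LogWriteIntegrity=TripleWrite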
I never really thought to question it.

bruce2359
PostPosted: Thu Dec 13, 2018 1:42 pm

Jeff.VT wrote:
Quote:
Regarding 1) above: are you saying that you do NOT have manually defined CLUSSDR channels from the PRs to one FR?


The opposite - I have both FRs manually defined in all my PRs.


One more time, for clarity.

Firstly, you have only two FRs?

Are you saying that on each PR you have manually defined CLUSSDR channels to BOTH FRs? Or that you have only one CLUSSDR from each PR to ONLY one of the FRs?

If you display from your PRs, do you see CLUSSDRB channels to the FRs?
Page 1 of 3