MQSeries.net Forum Index » IBM MQ Installation/Configuration Support » Unable to create RDQM queue manager on RHEL9

Mqdevops
PostPosted: Mon Oct 23, 2023 9:18 am    Post subject: Unable to create RDQM queue manager on RHEL9

Newbie

Joined: 21 Oct 2023
Posts: 7

Folks, I'm running into an issue building an RDQM queue manager.
I'm setting up RDQM HA/DR across six servers: three PROD and three DR. Below are the contents of the rdqm.ini file and the error I see when attempting to create the queue manager. For background, these six RHEL9 guests are running on KVM (libvirt) on a RHEL9 host.

Code:

Name:        IBM MQ
Version:     9.3.0.10
Level:       p930-010-230816
BuildType:   IKAP - (Production)
Platform:    IBM MQ for Linux (x86-64 platform)
Mode:        64-bit
O/S:         Linux 5.14.0-284.30.1.el9_2.x86_64
O/S Details: Red Hat Enterprise Linux 9.2 (Plow)
InstName:    Installation1
InstDesc:   
Primary:     Yes
InstPath:    /opt/mqm
DataPath:    /var/mqm
MaxCmdLevel: 930
LicenseType: Production
rdqm.ini file
[root@rdqmprd01 ~]# cat /var/mqm/rdqm.ini
# The configuration in this file is not dynamic.
# The HA configuration is read when an HA group is created.
# The DR configuration is read when a DR/HA queue manager is created.

Node:
Name=rdqmprd01
  HA_Replication=10.10.50.11
  DR_Replication=10.10.60.11
Node:
Name=rdqmprd02
  HA_Replication=10.10.50.12
  DR_Replication=10.10.60.12
Node:
Name=rdqmprd03
  HA_Replication=10.10.50.13
  DR_Replication=10.10.60.13

DRGroup:
  Name=DRREPLGRP
  DR_Replication=10.10.60.21
  DR_Replication=10.10.60.22
  DR_Replication=10.10.60.23


Code:

[root@rdqmprd01 ~]# df -h
Filesystem                             Size  Used Avail Use% Mounted on
devtmpfs                               4.0M     0  4.0M   0% /dev
tmpfs                                  4.8G   33M  4.7G   1% /dev/shm
tmpfs                                  1.9G  9.4M  1.9G   1% /run
/dev/mapper/rhel-root                   44G  5.6G   39G  13% /
/dev/mapper/vg_mq-varmqm                20G  176M   20G   1% /var/mqm
/dev/mapper/vg_mq-optmqm                20G  1.8G   19G   9% /opt/mqm
/dev/mapper/vg_mq-mqmtrace              10G  104M  9.9G   2% /var/mqm/trace
/dev/mapper/vg_mq-mqmlog                20G  175M   20G   1% /var/mqm/log
/dev/mapper/vg_mq-mqmerror              10G  104M  9.9G   2% /var/mqm/errors
/dev/vdb1                             1014M  406M  609M  40% /boot
contact admin:/mnt/contact admin/  812G  208G  605G  26% /software
tmpfs                                  764M   52K  764M   1% /run/user/42
tmpfs                                  764M   36K  764M   1% /run/user/0

[root@rdqmprd01 ~]# pvs
  PV         VG       Fmt  Attr PSize    PFree   
  /dev/vda   drbdpool lvm2 a--  <100.00g <100.00g
  /dev/vdb2  rhel     lvm2 a--   <49.00g       0
  /dev/vdc   vg_mq    lvm2 a--  <100.00g  <20.00g

[root@rdqmprd01 ~]# lvs
  LV       VG    Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root     rhel  -wi-ao---- <44.00g                                                   
  swap     rhel  -wi-ao----   5.00g                                                   
  mqmerror vg_mq -wi-ao----  10.00g                                                   
  mqmlog   vg_mq -wi-ao----  20.00g                                                   
  mqmtrace vg_mq -wi-ao----  10.00g                                                   
  optmqm   vg_mq -wi-ao----  20.00g                                                   
  varmqm   vg_mq -wi-ao----  20.00g 


Command used to create the first queue manager, and the resulting error:
Code:

[root@rdqmprd01 ~]# sudo crtmqm -sx -rr p -rn DRREPLGRP -rp 7017 -fs 10G -lp 20 -ls 20 -lc -lf 16384 -h 1000 -u "TEST.DLQ" -p 1417 TEST
Creating replicated data queue manager configuration.
Secondary queue manager created on 'rdqmprd02'.
Secondary queue manager created on 'rdqmprd03'.
AMQ3817E: Replicated data subsystem call '/usr/sbin/drbdadm -- --force
--stacked create-md test.dr' failed with return code '10'.
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/drbd100 at byte offset 10736721920

Do you really want to overwrite the existing meta-data?
*** confirmation forced via --force option ***
initializing bitmap (320 KB) to all zero
ioctl(/dev/drbd100, BLKZEROOUT, [10736361472, 327680]) failed: Input/output error
initializing bitmap (320 KB) to all zero using pwrite
pwrite(5,...,327680,10736361472) in md_initialize_common:BM failed: Input/output error
Command 'drbdmeta 7017 v09 /dev/drbd100 internal create-md 1 --force' terminated with exit code 10
AMQ3812E: Failed to create replicated data queue manager configuration.
Secondary queue manager deleted on rdqmprd02.
Secondary queue manager deleted on rdqmprd03.


Code:

[root@rdqmprd01 ~]# rdqmstatus
Node:                                   rdqmprd01
OS kernel version:                      5.14.0-284.30.1
DRBD OS kernel version:                 5.14.0-284.30.1
DRBD version:                           9.1.15+ptf.1.g2ec62f6cb988
DRBD kernel module status:              Loaded

[root@rdqmprd01 ~]# rdqmstatus -n
Node rdqmprd01 is online
Node rdqmprd02 is online
Node rdqmprd03 is online
[root@rdqmprd01 ~]#


Code:

[root@rdqmprd01 ~]# crm status
Cluster Summary:
  * Stack: corosync
  * Current DC: rdqmprd02 (version 2.1.2.linbit-4.el9-ada5c3b36e2) - partition with quorum
  * Last updated: Mon Oct 23 08:43:47 2023
  * Last change:  Mon Oct 23 07:43:15 2023 by root via crm_attribute on rdqmprd03
  * 3 nodes configured
  * 0 resource instances configured

Node List:
  * Online: [ rdqmprd01 rdqmprd02 rdqmprd03 ]

Full List of Resources:
  * No resources


rdqm.log - I'm not sure what it is referring to with "No such device".
Code:

2023-10-23 07:27:30.741: 24293 ---{ /usr/sbin/crm_attribute --node rdqmprd01.marsem.org --name test-name --quiet
2023-10-23 07:27:30.767: 24293 >>STDERR:
Error performing operation: No such device or address
2023-10-23 07:27:30.768: 24293 ---} rc=0 unixrc=105 /usr/sbin/crm_attribute --node rdqmprd01.marsem.org --name test-name --quiet

2023-10-23 07:27:30.768: 24293 ---{ /usr/sbin/crm_attribute --node rdqmprd02.marsem.org --name test-name --quiet
2023-10-23 07:27:30.805: 24293 >>STDERR:
Error performing operation: No such device or address
2023-10-23 07:27:30.805: 24293 ---} rc=0 unixrc=105 /usr/sbin/crm_attribute --node rdqmprd02.marsem.org --name test-name --quiet

2023-10-23 07:27:30.805: 24293 ---{ /usr/sbin/crm_attribute --node rdqmprd03.marsem.org --name test-name --quiet
2023-10-23 07:27:30.842: 24293 >>STDERR:

2023-10-23 07:28:54.032: 24293 >>STDOUT:
initializing activity log
2023-10-23 07:28:54.032: 24293 >>STDERR:
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/drbd100 at byte offset 10736721920

Do you really want to overwrite the existing meta-data?
*** confirmation forced via --force option ***
initializing bitmap (320 KB) to all zero
ioctl(/dev/drbd100, BLKZEROOUT, [10736361472, 327680]) failed: Input/output error
initializing bitmap (320 KB) to all zero using pwrite
pwrite(5,...,327680,10736361472) in md_initialize_common:BM failed: Input/output error
Command 'drbdmeta 7017 v09 /dev/drbd100 internal create-md 1 --force' terminated with exit code 10
2023-10-23 07:28:54.032: 24293 ---} rc=0 unixrc=10 /usr/sbin/drbdadm -- --force --stacked create-md test.dr
Mqdevops
PostPosted: Tue Oct 24, 2023 3:36 pm    Post subject: I was hoping to get some feedback here guys


Any thoughts on what could be the problem?
fjb_saper
PostPosted: Sat Oct 28, 2023 12:13 am    Post subject: Re: I was hoping to get some feedback here guys

Grand High Poobah

Joined: 18 Nov 2003
Posts: 20696
Location: LI,NY

Mqdevops wrote:
Any thoughts on what could be the problem?

Do not use the lower-level commands.
After creating the rdqm.ini, did you run rdqmadm -c?
Then run crtmqm,
then run rdqmstatus; if you don't see the qmgr, run
rdqmstatus -m QMNAME -a to look for failed resource actions.
You can also run rdqmdr -m QMNAME -d to see the command to run on the DR side.

Hope it helps
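A minimal sketch of that sequence, assuming the queue manager name TEST and an abbreviated form of the crtmqm options from the original post; it is guarded so it only invokes the MQ commands on a node where they are actually installed:

```shell
# Sketch of the suggested verification sequence (run as root on the HA primary).
# QM name and crtmqm options are assumptions taken from earlier in the thread.
QM=TEST
missing=0
for cmd in rdqmadm crtmqm rdqmstatus rdqmdr; do
  command -v "$cmd" >/dev/null 2>&1 || missing=1
done

if [ "$missing" -eq 1 ]; then
  echo "IBM MQ RDQM commands not found; run this on an RDQM node"
else
  rdqmadm -c                                      # initialise the HA group from /var/mqm/rdqm.ini (once per group)
  crtmqm -sx -rr p -rn DRREPLGRP -rp 7017 "$QM"   # create the DR/HA queue manager
  rdqmstatus -m "$QM" -a                          # show failed resource actions, if any
  rdqmdr -m "$QM" -d                              # print the crtmqm command to run in the DR group
fi
```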
_________________
MQ & Broker admin
Mqdevops
PostPosted: Sat Oct 28, 2023 9:28 am    Post subject: Thank you for your reply fjb_saper


I'm not sure I follow what you mean by "Do not use the lower level commands."

Yes, I did run rdqmadm -c to initialize the cluster.
When I run the crtmqm command, it goes through the process of creating the queue manager on the other nodes, but then it fails and starts deleting them, as you can see in my output at the top.

Code:

[root@rdqmprd01 ~]# vgs
  VG       #PV #LV #SN Attr   VSize    VFree
  drbdpool   1   0   0 wz--n- <100.00g <100.00g
  rhel       1   2   0 wz--n-  <49.00g        0
  vg_mq      1   5   0 wz--n- <100.00g  <20.00g
[root@rdqmprd01 ~]#
fjb_saper
PostPosted: Sat Oct 28, 2023 2:40 pm    Post subject: Re: Thank you for your reply fjb_saper


Mqdevops wrote:
I'm not sure I follow what you mean by "Do not use the lower level commands."

Yes, I did run rdqmadm -c to initialize the cluster
When I run the crtmqm command it goes through the process of creating the queue manager all the way to the other nodes, but then it fails and starts deleting them as you can see in my output at the top.

[root@rdqmprd01 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
drbdpool 1 0 0 wz--n- <100.00g <100.00g
rhel 1 2 0 wz--n- <49.00g 0
vg_mq 1 5 0 wz--n- <100.00g <20.00g
[root@rdqmprd01 ~]#


So run the corresponding crtmqm command as root, on each node in turn.
Maybe you will see a different error message?
_________________
MQ & Broker admin
petroben
PostPosted: Sat Nov 04, 2023 2:45 am    Post subject:

Newbie

Joined: 04 Nov 2023
Posts: 2

It looks like you're encountering issues while setting up RDQM HA/DR with IBM MQ, specifically with DRBD replication and the creation of the queue manager.

The error message "Error performing operation: No such device or address" in your rdqm.log file might be related to an issue with device addresses, possibly in your configuration. This can be a complex problem to diagnose without more specific information about your environment.

To troubleshoot this issue, consider the following steps:

Double-check your configuration files, such as rdqm.ini, to ensure that all the device addresses and paths are correctly specified.

Ensure that DRBD is properly configured and that the devices, device names, and paths match your setup.

Verify that the DRBD kernel module and version are compatible with your kernel and configuration.

Review your cluster configuration (Pacemaker/Corosync) and check if there are any misconfigurations there that might be causing the issue.

Look for any potential issues with the block devices used in your setup (e.g., /dev/drbd100) and the underlying storage.

Check for any specific messages or logs related to DRBD that might provide more details about the "Input/output error" and "No such device or address" issues.

If the issue persists and you can't resolve it with the above suggestions, you may want to consider reaching out to IBM support or seeking assistance from experienced system administrators with expertise in IBM MQ and DRBD configurations. They can provide more tailored guidance based on the specific details of your environment.
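As a small illustration of the first step (double-checking rdqm.ini), here is a quick awk sanity check of the stanza counts, using the file posted earlier in the thread as sample input. The path /tmp/rdqm.ini and the expected counts (one HA address per node, one DR address per node plus one per DR node) are assumptions for this three-node HA/DR layout:

```shell
# Sketch: count rdqm.ini stanzas and flag obvious mismatches.
# Sample input is the rdqm.ini from the original post.
cat > /tmp/rdqm.ini <<'EOF'
Node:
Name=rdqmprd01
  HA_Replication=10.10.50.11
  DR_Replication=10.10.60.11
Node:
Name=rdqmprd02
  HA_Replication=10.10.50.12
  DR_Replication=10.10.60.12
Node:
Name=rdqmprd03
  HA_Replication=10.10.50.13
  DR_Replication=10.10.60.13

DRGroup:
  Name=DRREPLGRP
  DR_Replication=10.10.60.21
  DR_Replication=10.10.60.22
  DR_Replication=10.10.60.23
EOF

awk '
  /^Node:/          { nodes++ }   # one stanza per HA node
  /HA_Replication=/ { ha++ }      # HA replication address per node
  /DR_Replication=/ { dr++ }      # DR addresses: per node and per DR-group entry
  END {
    printf "nodes=%d ha=%d dr=%d\n", nodes, ha, dr
    if (ha != nodes)     print "WARNING: HA_Replication count != node count"
    if (dr != 2 * nodes) print "WARNING: expected one DR address per node plus one per DR node"
  }
' /tmp/rdqm.ini
```

With the file as posted, this reports nodes=3, ha=3, dr=6 and no warnings, which suggests the ini itself is consistent and the fault lies lower down (DRBD/storage), consistent with the I/O errors in the log.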
_________________
Kyle
Back to top
View user's profile Send private message