Description of problem:
In Python 3.12, readfp() was completely removed, and this breaks geo-replication:
Traceback (most recent call last):
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 325, in <module>
main()
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 251, in main
gconf.load(GLUSTERFS_CONFDIR + "/gsyncd.conf",
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py", line 469, in load
_gconf = Gconf(default_conf, custom_conf, args, extra_tmpl_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py", line 58, in __init__
self._load()
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py", line 184, in _load
conf.readfp(f)
^^^^^^^^^^^
AttributeError: 'RawConfigParser' object has no attribute 'readfp'. Did you mean: 'read'?
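For reference, readfp() had been deprecated since Python 3.2 and was removed in 3.12; read_file() is the drop-in replacement. A minimal sketch of the API change (the file name here is illustrative, not the exact path gsyncdconfig.py opens):

import configparser

conf = configparser.RawConfigParser()
with open("gsyncd.conf") as f:      # any gsyncd-style INI file
    # conf.readfp(f)                # raises AttributeError on Python 3.12+
    conf.read_file(f)               # replacement, available since Python 3.2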
The full output of the command that failed:
Both /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py and /usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py fail with a traceback ending in:
AttributeError: 'RawConfigParser' object has no attribute 'readfp'. Did you mean: 'read'?
Expected results:
Geo-replication to work.
- Provide logs present at the following locations on the client and server nodes -
/var/log/glusterfs/geo-replication/sourcevol_glusterdest_destvol/gsyncd.log:
[2024-08-28 22:03:01.21992] E [syncdutils(monitor):845:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 geoaccount@glusterdest /usr/sbin/gluster --xml --remote-host=localhost volume info destvol}, {error=255}]
[2024-08-28 22:03:01.22327] E [syncdutils(monitor):363:log_raise_exception] <top>: FAIL:
Traceback (most recent call last):
File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 317, in main
func(args)
File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 60, in subcmd_monitor
return monitor.monitor(local, remote)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 360, in monitor
return Monitor().multiplex(*distribute(local, remote))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 319, in distribute
svol = Volinfo(secondary.volume, "localhost", prelude, primary=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 924, in __init__
po.terminate_geterr()
File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 894, in terminate_geterr
self.errfail()
File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 863, in errfail
self.errlog()
File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 853, in errlog
ls = l.split(b'\n')
^^^^^^^^^^^^^^
TypeError: must be str or None, not bytes
[2024-08-28 22:03:50.549852] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2024-08-28 22:03:50.550049] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker [{brick=/bricks/vol1/brick1}, {secondary_node=glusterdest}]
[2024-08-28 22:03:50.609439] I [resource(worker /bricks/vol1/brick1):1388:connect_remote] SSH: Initializing SSH connection between primary and secondary...
[2024-08-28 22:03:50.885106] E [syncdutils(worker /bricks/vol1/brick1):325:log_raise_exception] <top>: connection to peer is broken
[2024-08-28 22:03:50.887006] E [syncdutils(worker /bricks/vol1/brick1):845:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 2244 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-412ta4ms/79631bd47986f88da08bc1d5dca64585.sock geoaccount@glusterdest /nonexistent/gsyncd secondary sourcevol geoaccount@glusterdest::destvol --primary-node glustersource --primary-node-id f0ccb358-2aa3-4a45-9103-52432f030a26 --primary-brick /bricks/vol1/brick1 --local-node glusterdest --local-node-id 77c4b6aa-3653-40e9-8176-4c80bf287712 --secondary-timeout 120 --secondary-log-level INFO --secondary-gluster-log-level INFO --secondary-gluster-command-dir /usr/sbin --primary-dist-count 1}, {error=1}]
[2024-08-28 22:03:50.889987] I [monitor(monitor):218:monitor] Monitor: worker died before establishing connection [{brick=/bricks/vol1/brick1}]
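The secondary TypeError above (syncdutils.py line 853, ls = l.split(b'\n')) comes from splitting a str with a bytes separator: the stderr line is already a str, so str.split() rejects b'\n'. A minimal sketch of one way to normalize this (split_lines is a hypothetical helper, not the upstream code):

def split_lines(l):
    # Hypothetical helper: the child's stderr lines may arrive as bytes or
    # as str depending on how the pipe was opened; use a separator of the
    # same type so str.split() never receives a bytes argument.
    return l.split(b'\n') if isinstance(l, bytes) else l.split('\n')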
- The operating system / glusterfs version:
rpm -qa | grep glusterfs-server
glusterfs-server-11.1-1.fc39.x86_64
I've been searching for that solution for the past few days. We are upgrading Ubuntu to 24.04 and Gluster to 11.1, and indeed geo-replication is broken. I've tested by manually renaming these calls from readfp() to read_file(), and that seems to fix the problem. Thanks @hunter86bg for finding that. Hope it will get patched soon.
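A backward-compatible way to apply that rename in gsyncdconfig.py, for interpreters both before and after the removal, is to prefer read_file() when it exists. This is only a sketch using the conf/f names from the traceback; the actual upstream patch may differ:

# Sketch only: read_file() exists since Python 3.2, readfp() until 3.11.
if hasattr(conf, "read_file"):
    conf.read_file(f)
else:
    conf.readfp(f)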