
[GEOREPLICATION] gsyncd is broken due to ConfigParser's readfp() being removed in Python 3.12 #4403

Open
hunter86bg opened this issue Aug 28, 2024 · 2 comments

Comments

@hunter86bg (Contributor)

Description of problem:
In Python 3.12, readfp() was removed entirely, which breaks geo-replication:

Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 325, in <module>
    main()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 251, in main
    gconf.load(GLUSTERFS_CONFDIR + "/gsyncd.conf",
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py", line 469, in load
    _gconf = Gconf(default_conf, custom_conf, args, extra_tmpl_args,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py", line 58, in __init__
    self._load()
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py", line 184, in _load
    conf.readfp(f)
    ^^^^^^^^^^^
AttributeError: 'RawConfigParser' object has no attribute 'readfp'. Did you mean: 'read'?
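The replacement API, read_file(), has existed since Python 3.2, when readfp() was first deprecated. A backward-compatible loader can be sketched as follows (load_conf is a hypothetical helper for illustration, not the actual gsyncdconfig.py code):

```python
import configparser

def load_conf(path):
    # readfp() was deprecated in Python 3.2 and removed in 3.12;
    # read_file() is the drop-in replacement with the same semantics.
    conf = configparser.RawConfigParser()
    with open(path) as f:
        if hasattr(conf, "read_file"):
            conf.read_file(f)  # Python >= 3.2
        else:
            conf.readfp(f)     # very old interpreters only
    return conf
```

The hasattr() check keeps the code working on interpreters both before and after the removal, which matters for a codebase like gsyncd that runs on a wide range of distribution Pythons.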

The exact command to reproduce the issue:

gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create ssh-port 2222 push-pem

The full output of the command that failed:
Both /usr/libexec/glusterfs/python/syncdaemon/gsyncd.py and /usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py fail with the same traceback:

AttributeError: 'RawConfigParser' object has no attribute 'readfp'. Did you mean: 'read'?

Expected results:
Geo-replication should work.

Logs from /var/log/glusterfs/geo-replication/sourcevol_glusterdest_destvol/gsyncd.log:

[2024-08-28 22:03:01.21992] E [syncdutils(monitor):845:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 22 geoaccount@glusterdest /usr/sbin/gluster --xml --remote-host=localhost volume info destvol}, {error=255}]
[2024-08-28 22:03:01.22327] E [syncdutils(monitor):363:log_raise_exception] <top>: FAIL: 
Traceback (most recent call last):
  File "/usr/libexec/glusterfs/python/syncdaemon/gsyncd.py", line 317, in main
    func(args)
  File "/usr/libexec/glusterfs/python/syncdaemon/subcmds.py", line 60, in subcmd_monitor
    return monitor.monitor(local, remote)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 360, in monitor
    return Monitor().multiplex(*distribute(local, remote))
                                ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/libexec/glusterfs/python/syncdaemon/monitor.py", line 319, in distribute
    svol = Volinfo(secondary.volume, "localhost", prelude, primary=False)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 924, in __init__
    po.terminate_geterr()
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 894, in terminate_geterr
    self.errfail()
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 863, in errfail
    self.errlog()
  File "/usr/libexec/glusterfs/python/syncdaemon/syncdutils.py", line 853, in errlog
    ls = l.split(b'\n')
         ^^^^^^^^^^^^^^
TypeError: must be str or None, not bytes
[2024-08-28 22:03:50.549852] I [gsyncdstatus(monitor):247:set_worker_status] GeorepStatus: Worker Status Change [{status=Initializing...}]
[2024-08-28 22:03:50.550049] I [monitor(monitor):158:monitor] Monitor: starting gsyncd worker [{brick=/bricks/vol1/brick1}, {secondary_node=glusterdest}]
[2024-08-28 22:03:50.609439] I [resource(worker /bricks/vol1/brick1):1388:connect_remote] SSH: Initializing SSH connection between primary and secondary...
[2024-08-28 22:03:50.885106] E [syncdutils(worker /bricks/vol1/brick1):325:log_raise_exception] <top>: connection to peer is broken
[2024-08-28 22:03:50.887006] E [syncdutils(worker /bricks/vol1/brick1):845:errlog] Popen: command returned error [{cmd=ssh -oPasswordAuthentication=no -oStrictHostKeyChecking=no -i /var/lib/glusterd/geo-replication/secret.pem -p 2244 -oControlMaster=auto -S /tmp/gsyncd-aux-ssh-412ta4ms/79631bd47986f88da08bc1d5dca64585.sock geoaccount@glusterdest /nonexistent/gsyncd secondary sourcevol geoaccount@glusterdest::destvol --primary-node glustersource --primary-node-id f0ccb358-2aa3-4a45-9103-52432f030a26 --primary-brick /bricks/vol1/brick1 --local-node glusterdest --local-node-id 77c4b6aa-3653-40e9-8176-4c80bf287712 --secondary-timeout 120 --secondary-log-level INFO --secondary-gluster-log-level INFO --secondary-gluster-command-dir /usr/sbin --primary-dist-count 1}, {error=1}]
[2024-08-28 22:03:50.889987] I [monitor(monitor):218:monitor] Monitor: worker died before establishing connection [{brick=/bricks/vol1/brick1}]
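The TypeError in the log above points at a second, independent Python 3 issue: errlog() in syncdutils.py splits an already-decoded str with a bytes separator. A minimal standalone reproduction (not the syncdutils.py code itself):

```python
# In Python 3, str.split() rejects a bytes separator with a
# TypeError, which is what syncdutils.py hits when it calls
# l.split(b'\n') on a string.
line = "Popen: command returned error"
try:
    line.split(b"\n")
except TypeError as err:
    print(err)  # e.g. "must be str or None, not bytes"

# Splitting with a matching str separator works as expected.
parts = "a\nb".split("\n")
```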

- The operating system / glusterfs version:
rpm -qa | grep glusterfs-server
glusterfs-server-11.1-1.fc39.x86_64

@hunter86bg (Contributor, Author)

@aravindavk, can you check the change?

@sulphur

sulphur commented Oct 3, 2024

I've been searching for a solution for the past few days. We are upgrading Ubuntu to 24.04 with Gluster 11.1, and geo-replication is indeed broken. I tested by manually renaming these calls from readfp to read_file, and that fixes the problem. Thanks @hunter86bg for finding this. Hope it gets patched soon.
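The manual rename described above can be scripted. A hedged sketch (patch_readfp is a hypothetical helper; on a Gluster node the file to pass would be /usr/libexec/glusterfs/python/syncdaemon/gsyncdconfig.py, and note this edits installed files that a package update will overwrite):

```shell
#!/bin/sh
# Rename the removed readfp() method calls to read_file() in the
# given file. The -i.bak flag keeps a backup copy alongside it.
patch_readfp() {
    sed -i.bak 's/\.readfp(/.read_file(/g' "$1"
}
```

Because read_file() takes the same file-object argument as readfp() did, a pure textual rename is sufficient here; no call sites need restructuring.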
