How does a pod access a service's ClusterIP in an OpenShift environment? #1131

Open
jeffrey4l opened this issue Apr 30, 2018 · 9 comments

Comments

@jeffrey4l

Description

I am deploying OpenShift + Contiv in vlan + bridge mode. So far, pod-to-pod connectivity is perfect, but the service IP is not reachable.

I cannot find any documentation or explanation of how a pod can reach a service's ClusterIP. Could anyone give me some info about this?

Expected Behavior

Service ClusterIP should be accessible from Pod.

Observed Behavior

It seems there is an OVS bridge that holds all the traffic, as shown below:

$ oc get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                 AGE
kubernetes   ClusterIP   172.30.0.1   <none>        443/TCP,53/UDP,53/TCP   7h

# ip a show dev contivh0
10: contivh0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 02:02:0a:82:ff:fe brd ff:ff:ff:ff:ff:ff
    inet 10.130.255.254/16 scope global contivh0
       valid_lft forever preferred_lft forever
    inet6 fe80::2:aff:fe82:fffe/64 scope link 
       valid_lft forever preferred_lft forever

# ovs-vsctl show
fd7d2400-5497-4c82-9b67-8c911f015bc8
    Manager "ptcp:6640"
    Bridge contivVxlanBridge
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "contivh0"
            tag: 2
            Interface "contivh0"
                type: internal
    Bridge contivVlanBridge
        Controller "tcp:127.0.0.1:6634"
            is_connected: true
        fail_mode: secure
        Port "vvport1"
            tag: 2970
            Interface "vvport1"
        Port "vvport2"
            tag: 2970
            Interface "vvport2"
        Port "eth1"
            Interface "eth1"
    ovs_version: "2.9.0"

# iptables -t nat -S | grep -i contiv
-N CONTIV-NODEPORT
-A PREROUTING -m addrtype --dst-type LOCAL -j CONTIV-NODEPORT
-A POSTROUTING -s 10.130.0.0/16 ! -o contivh0 -j MASQUERADE

Accessing the Kubernetes service IP from the host works too, but accessing it from a pod fails.
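
For reference, a quick way to compare host vs. pod reachability (the pod name web-1 is a placeholder; this assumes the pod image ships curl):

# From the host: any HTTP/TLS answer means the VIP is reachable
curl -k -m 5 https://172.30.0.1:443/version

# From a pod: in this setup the same request times out
oc exec web-1 -- curl -k -m 5 https://172.30.0.1:443/version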

Your Environment

# netctl -v
netctl version 
Version: 1.2.0
GitCommit: f78851a
BuildTime: 12-14-2017.07-32-25.UTC

# oc version
oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://node1:8443
openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657

# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core) 

# rpm -qa | grep openvswitch
openvswitch-2.9.0-3.el7.x86_64
@jeffrey4l
Author

This may be related to #1083.

@jeffrey4l
Author

Since I don't know the root cause, I also created an issue with the same description on the OpenShift side: openshift/openshift-ansible#8200

@vhosakot
Member

vhosakot commented May 1, 2018

Hi @jeffrey4l! 😄

@jeffrey4l
Author

Hey @vhosakot, nice to meet you here.
By the way, could you give me some help with this issue? :D

@Pamir

Pamir commented May 2, 2018

@jeffrey4l With the same installation on VMs, we have the same problem: we cannot reach any of the services. Kubernetes assigns each service an IP address called a VIP, and VIPs are normally realized with DNAT/SNAT. In this scenario the switches do not know how to route the packet, so something on the Contiv side has to manage the VIPs.
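
For illustration, this is roughly how a VIP is realized with iptables DNAT on a stock kube-proxy cluster (addresses are examples; with Contiv, netplugin is expected to handle this in OVS instead of iptables):

# Rewrite packets addressed to the service VIP to a concrete endpoint
# (10.130.0.5:6443 is an example endpoint, not taken from this cluster)
iptables -t nat -A PREROUTING -d 172.30.0.1/32 -p tcp --dport 443 \
    -j DNAT --to-destination 10.130.0.5:6443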

@jeffrey4l
Author

@Pamir yes.

I also found that netplugin creates two OVS bridges, contivVlanBridge and contivVxlanBridge, even though I am using vlan + bridge mode.

The service IP is added to contivVxlanBridge's contivh0 interface. Accessing the service from the host works, but the pod network knows nothing about this.

So I think contivVlanBridge and contivVxlanBridge should be connected together, and when a pod accesses the service IP subnet, the packets should be forwarded from contivVlanBridge to contivVxlanBridge rather than to the default gateway. Then it should work.

But I have no idea how to configure this. :(
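
An untested sketch of that idea, wiring the two bridges together with an OVS patch-port pair (the port names here are made up, and Contiv's OpenFlow pipeline would still need flows steering the service subnet across the link, so this alone is likely not sufficient):

# Create a patch-port pair linking contivVlanBridge and contivVxlanBridge
ovs-vsctl add-port contivVlanBridge vlan-to-vxlan \
    -- set interface vlan-to-vxlan type=patch options:peer=vxlan-to-vlan
ovs-vsctl add-port contivVxlanBridge vxlan-to-vlan \
    -- set interface vxlan-to-vlan type=patch options:peer=vlan-to-vxlan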

@vhosakot
Member

vhosakot commented May 4, 2018

> Hey @vhosakot, nice to meet you here.
> By the way, could you give me some help with this issue? :D

@jeffrey4l sure, I'll look into the issue and reply here soon.

@liucimin

liucimin commented Jun 4, 2018

The pod can access the ClusterIP because netplugin watches the Kubernetes API server's services.
When you add a service, netplugin adds a flow in OVS that points to the controller (netplugin-ofagent). When a pod then accesses the service via ClusterIP + port, the first packet is sent to the ofagent, which uses OpenFlow to push a flow into OVS. From that point on, the pod can access the service.
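
One way to check this from the node (bridge name taken from the ovs-vsctl output above): dump the flow table before and after the first request from a pod and look for an entry matching the ClusterIP:

# A flow matching the service VIP should appear after the first
# pod-to-service packet has been punted to the controller
ovs-ofctl -O openflow13 dump-flows contivVxlanBridge | grep 172.30.0.1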

@liucimin

liucimin commented Jun 4, 2018

Maybe you can show the flows in OVS, like in the screenshots below.

[two screenshots: OVS flow table dumps]
