Containers can't resolve IPs because the akash-deployment-restrictions network policy prevents PODs from accessing kube-dns over 53/udp, 53/tcp in the pod subnet

Not sure what the best place to report this is; probably just opening an issue on GitHub is enough?

If the GitHub issue is enough, then please feel free to just close this topic. :slight_smile:

Thanks for submitting a GitHub issue! This is a known issue; we ran into it when we tried to run Handshake DNS on Akash. Port-mapping support is on the roadmap.

Thanks @andy01! The GitHub issue should be enough.

What are you running that uses TCP/53?

Nothing really, I guess 53/udp is enough.

But I think it wouldn’t hurt to allow 53/tcp as well for some special cases, e.g.

There are two good reasons that we would want to allow both TCP and UDP port 53 connections to our DNS servers. One is DNSSEC and the second is IPv6.
From: Allow Both TCP and UDP Port 53 to Your DNS Servers | Network World
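
For illustration, here's a minimal Go sketch (my own example, not tied to any specific workload) of a client that forces its DNS lookups over TCP, the way a stub resolver falls back after a truncated UDP answer; something like this would need 53/tcp reachable:

```go
// Minimal sketch: force the Go resolver to talk to the nameserver over TCP,
// as a stub resolver would after a truncated (e.g. large DNSSEC) UDP response.
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		// Dial the nameserver over TCP regardless of what the resolver asked for.
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "tcp", address)
		},
	}
	addrs, err := r.LookupHost(context.Background(), "example.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(addrs)
}
```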

But this is only for connecting to kube-dns (intra-cluster services), right? Or is it all outbound port 53?

What I've suggested in the GitHub issue is to allow the PODs to access only kube-dns, over both 53/udp and 53/tcp.

(The next network policy rule in the current Akash provider code blocks all egress POD traffic to the local CIDRs, which is good. However, one of those CIDRs (10.0.0.0/8) overlaps with the POD subnet I am using, 10.233.64.0/18, which prevents the PODs from talking to the kube-dns POD, since it runs in a different namespace. Containers running within the same POD can still talk to each other as expected, since they fall under the same namespace, which is permitted by this rule: akash/builder.go at 7c39ea403433f7a4bc86a1b8c1539259926ee701 · ovrclk/akash · GitHub )
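
Roughly what I have in mind, as a sketch using the upstream k8s.io/api/networking/v1 types (this is not the actual builder.go code, and the label selectors are assumptions: kubernetes.io/metadata.name is only set automatically on newer Kubernetes, and k8s-app: kube-dns is the label the stock kube-dns/CoreDNS deployment uses):

```go
// Sketch of an extra egress rule for akash-deployment-restrictions that would
// let deployment PODs reach kube-dns on both 53/udp and 53/tcp by selecting
// the kube-dns pods directly instead of relying on CIDR-based rules.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	udp, tcp := corev1.ProtocolUDP, corev1.ProtocolTCP
	dns := intstr.FromInt(53)
	rule := netv1.NetworkPolicyEgressRule{
		To: []netv1.NetworkPolicyPeer{{
			// Assumed labels; adjust to whatever the target cluster actually uses.
			NamespaceSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"kubernetes.io/metadata.name": "kube-system"},
			},
			PodSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"k8s-app": "kube-dns"},
			},
		}},
		Ports: []netv1.NetworkPolicyPort{
			{Protocol: &udp, Port: &dns},
			{Protocol: &tcp, Port: &dns},
		},
	}
	out, _ := json.MarshalIndent(rule, "", "  ")
	fmt.Println(string(out))
}
```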

I've slightly updated the issue ([netpol] akash-deployment-restrictions prevents PODs from accessing kube-dns over 53/udp, 53/tcp in pod subnet · Issue #1339 · ovrclk/akash · GitHub) with the above info ^^

One could probably argue that it would be better to allow both UDP and TCP, as you never know what kinds of PODs people are going to run. (Personally, I can't think of such a scenario off the top of my head.)

OTOH, allowing 53/tcp would not hurt either, as kube-dns listens on both 53/udp and 53/tcp.

I am also fine with allowing only 53/udp and maybe adding 53/tcp later, should there be demand for it.

You want to talk to kube-dns to reference one of your services, is that right?

Just trying to gauge the priority of this. I thought that cross-service resolution worked right now (from service web, you can reach service db by name resolution.)

I think that would be a low priority.

After testing a kubespray deployment across 3 nodes for some time today, I learned that cross-service resolution is not working for me because I do not use NodeLocal DNS Cache (doc), which kubespray deploys by default (since the kubespray 2.10 release on Oct 16, 2019). NodeLocal DNS Cache listens on 169.254.25.10 by default, so with it cross-service resolution works, because the akash-deployment-restrictions Network Policy permits 53/UDP requests to all addresses within the 169.254.0.0/16 range.
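
To illustrate, the allowance that makes this work looks roughly like the following (a hedged sketch with the k8s.io/api/networking/v1 types; the actual rule in the provider's builder.go may be shaped differently):

```go
// Hedged sketch (not the actual builder.go code): a CIDR-based egress rule that
// permits 53/udp to the whole link-local range, which is why NodeLocal DNS Cache
// on 169.254.25.10 is reachable from deployment PODs.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	netv1 "k8s.io/api/networking/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	udp := corev1.ProtocolUDP
	dns := intstr.FromInt(53)
	rule := netv1.NetworkPolicyEgressRule{
		To: []netv1.NetworkPolicyPeer{{
			IPBlock: &netv1.IPBlock{CIDR: "169.254.0.0/16"},
		}},
		Ports: []netv1.NetworkPolicyPort{{Protocol: &udp, Port: &dns}},
	}
	out, _ := json.MarshalIndent(rule, "", "  ")
	fmt.Println(string(out))
}
```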

And that is not an issue with the kubespray deployments suggested by the current Akash documentation, but rather with custom Kubernetes deployments such as mine.

So, I guess the GitHub issue ([netpol] akash-deployment-restrictions prevents PODs from accessing kube-dns over 53/udp, 53/tcp in pod subnet · Issue #1339 · ovrclk/akash · GitHub) can probably be closed, maybe with some follow-up on:

  • Do we really want to allow 53/UDP requests to ALL addresses within 169.254.0.0/16?
    I think we can still leverage namespaceSelector & podSelector to explicitly allow the NodeLocal DNS Cache service instead of opening 53/UDP across the whole 169.254.0.0/16 range, and maybe also to support NodeLocal DNS Cache-less deployments, i.e. plain kube-dns (coredns); see the sketch right after this list.
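
A hedged sketch of what that could look like, selecting the DNS pods by label instead of by CIDR. The labels are assumptions: kubespray labels its NodeLocal DNS Cache pods k8s-app: nodelocaldns while the upstream add-on uses k8s-app: node-local-dns, and NodeLocal DNS Cache typically runs with hostNetwork, so whether a podSelector peer matches it may depend on the CNI:

```go
// Sketch: select the DNS-serving pods explicitly instead of opening 53/udp
// to all of 169.254.0.0/16. All labels below are assumptions; adjust them to
// whatever the target cluster actually uses.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	netv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	udp, tcp := corev1.ProtocolUDP, corev1.ProtocolTCP
	dns := intstr.FromInt(53)
	kubeSystem := &metav1.LabelSelector{
		MatchLabels: map[string]string{"kubernetes.io/metadata.name": "kube-system"},
	}
	rule := netv1.NetworkPolicyEgressRule{
		To: []netv1.NetworkPolicyPeer{
			{
				// NodeLocal DNS Cache pods (assumed label).
				NamespaceSelector: kubeSystem,
				PodSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"k8s-app": "nodelocaldns"},
				},
			},
			{
				// Plain kube-dns / CoreDNS, for NodeLocal DNS Cache-less clusters.
				NamespaceSelector: kubeSystem,
				PodSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"k8s-app": "kube-dns"},
				},
			},
		},
		Ports: []netv1.NetworkPolicyPort{
			{Protocol: &udp, Port: &dns},
			{Protocol: &tcp, Port: &dns},
		},
	}
	out, _ := json.MarshalIndent(rule, "", "  ")
	fmt.Println(string(out))
}
```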

P.S.
I talked with Colin on Monday (July 26th); I'll create a PR to update the Akash deployment documentation, since it's missing the gVisor parts (for which kubespray actually has a toggle) and a few other details related to enabling it.

P.P.S.
I initially disliked kubespray, but after chewing on it for a while I think it's good to use: it's meant for production-ready deployments, and it's also curated and updated relatively frequently.

I've updated my write-up with the NodeLocal DNSCache installation steps so that cross-service DNS communication works.
I've also added a few notes on how to enable the IPVS kube-proxy mode, which is O(1) (instead of the O(n) iptables mode), but it currently seems to be blocked by the “akash-deployment-restrictions” network policy.
One might try the IPVS mode with their kubespray setup (or I will do it one of these days).
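
For anyone who wants a quick sanity check that cross-service resolution works after those steps, here's a minimal sketch (the service name "db" is just a placeholder for a sibling service in the same deployment):

```go
// Minimal check that a sibling service resolves by name from inside a container.
// "db" is a placeholder; replace it with an actual service name from your SDL.
package main

import (
	"fmt"
	"net"
)

func main() {
	addrs, err := net.LookupHost("db")
	if err != nil {
		fmt.Println("cross-service DNS lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
```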
