Provider: ingress won't set up in v0.14.0

I had been creating/using my providers fine in v0.12.2, but something in v0.14.0 has caused everything attached to my ingress controller to stop resolving. I have rebuilt my cluster dozens of times now, trying the various methods listed in the docs, but am not having any luck. Has anybody else gotten the ingress controller to work in v0.14.0, and if so, how did you do it?

My big question:
are the docs at docs/README.md at master · ovrclk/docs · GitHub
up to date and reflective of the current process for setting up the ingress controller in v0.14.0?

Things I have tried:
I have completely nuked my cluster many times now and tried to rebuild it in the following ways, all resulting in errors (nginx 404, no resolution, or nginx 502):

  1. Following the docs as described above results in an nginx 404 when trying to resolve any deployments on :80.
  2. I tried adding /networking and /akash-services per akash/_docs/kustomize at mainnet/main · ovrclk/akash · GitHub, but then no URL resolves.
  3. I tried following akash/provider_migrate_to_hostname_operator.md at mainnet/main · ovrclk/akash · GitHub just to see if I could get anything to work, but no URLs resolve for this either.
  4. I tried mixing various forms of 1, 2, and 3 described above, but really only get an nginx 404.
  5. After adding “kubernetes.io/ingress.class: nginx” as an annotation to akash-hostname-operator/ingress.yaml, I can manage to get an nginx 502.

Thanks in advance for any pointers/advice.


Hello,

Have you looked at this document, which describes the v0.12.2 → v0.14.0 process?

We probably need to update our provider docs as well, but that has been an ongoing project.

Hi Eric,
Yeah, I tried that as well (it's linked in attempt #3 in my original question).

Are you on our Discord? It’s going to be easier to engage in a call there and see what is going on. I’m Eric U in our Discord. Just DM me.

Someone on my team managed to make a new deployment work with this annotation:

annotations:
  kubernetes.io/ingress.class: nginx

This works for that specific deployment only.
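
For what it's worth, here is a minimal sketch of applying that annotation to a single deployment's existing ingress with kubectl rather than editing YAML by hand. The namespace and ingress name below are hypothetical placeholders; look yours up first:

# hypothetical values; find the real ones with: kubectl get ingress --all-namespaces
DEPLOYMENT_NS="abcdef123"
INGRESS_NAME="my-deployment-ingress"
kubectl -n "$DEPLOYMENT_NS" annotate ingress "$INGRESS_NAME" kubernetes.io/ingress.class=nginx --overwrite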

Thanks


Add this annotation to every deployment’s ingress resource and then the deployment URL will work.
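
If you have more than a few deployments, here is a hedged sketch of doing that in one pass; it assumes cluster-admin access, and --overwrite makes it safe to re-run:

# annotate every ingress in every namespace
for ns in $(kubectl get namespaces -o name | cut -d/ -f2); do
  kubectl -n "$ns" annotate ingress --all kubernetes.io/ingress.class=nginx --overwrite
done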

I finally got this all squared away for new provider setups; the full sequence of events is below as a bash script. Aside from new builds: the big thing missing from provider_migrate_to_hostname_operator.md, at least for me, was that the akash-services namespace has to exist before step 7 (where the akash-services resources get applied). If it was not present, I would get errors. How to add the akash-services namespace:

kubectl apply -f ./_docs/kustomize/networking;
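
A quick way to confirm the namespace actually exists before moving on (nothing assumed beyond the namespace name above):

kubectl get namespace akash-services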

Besides that, here is the rundown of how I got a new cluster's ingress set up:

#!/bin/bash
AKASH_VERSION="v0.14.0" # feel free to fetch this from the GitHub repo tags API
MY_NODE_LABEL="akash-clusterfoo-nodebar" # the name of our ingress node (kubectl label takes node names)

### all the following activities happen from the base of the repo.
### assumes you checked out https://github.com/ovrclk/akash.git tag v0.14.0
### into $HOME/akashRepo

cd "${HOME}/akashRepo"

## first we apply the CRD for all the core objects (manifests, etc.)
kubectl apply -f ./pkg/apis/akash.network/v1/crd.yaml --overwrite

## new in v0.14.0: we apply the provider hosts CRD and the new ingress manifests
kubectl apply -f ./pkg/apis/akash.network/v1/provider_hosts_crd.yaml
kubectl apply -f ./_run/ingress-nginx.yaml
kubectl apply -f ./_run/ingress-nginx-class.yaml
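
## (optional sanity checks) make sure the CRDs registered and the controller pods
## come up before continuing; grepping avoids guessing exact CRD names, and the
## ingress-nginx namespace is assumed from the stock upstream manifest
kubectl get crd | grep akash.network
kubectl -n ingress-nginx get pods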

### the akash-services name and component=akash-hostname-op labels might not be needed
# we remove the ingress role label from all nodes in case we are updating/resetting a cluster
kubectl label nodes --all akash.network/role-
# and now apply the label to our ingress node
kubectl label nodes "$MY_NODE_LABEL" akash.network/role=ingress --overwrite
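
# (optional sanity check) confirm the role label landed on the right node
kubectl get nodes -L akash.network/role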

# note: applying networking is required to create the akash-services namespace if it didn't already exist
kubectl apply -f ./_docs/kustomize/networking;
kubectl kustomize ./_docs/kustomize/akash-services/ | kubectl apply -f -

# copy akash-hostname-operator from the repo to /tmp,
# since we will modify its kustomization.yaml
mkdir -p "/tmp/akash_cluster_resources"
cd "/tmp/akash_cluster_resources"

# we need to add this to the kustomization.yaml
#########################################
#images:
#  - name: ghcr.io/ovrclk/akash:stable
#    newName: ghcr.io/ovrclk/akash
#    newTag: 0.14.0
#########################################

### we don't want the leading "v" from AKASH_VERSION in the tag we pass to the akash-hostname-operator image
VERSION_NO_V="${AKASH_VERSION#v}" # strip only the leading "v"
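# (optional sanity check) should print the version without the leading "v", e.g. 0.14.0
echo "$VERSION_NO_V"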

cp -r "${HOME}/akashRepo/_docs/kustomize/akash-hostname-operator/." ./akash-hostname-operator
echo "images:" >> ./akash-hostname-operator/kustomization.yaml
echo "  - name: ghcr.io/ovrclk/akash:stable" >> ./akash-hostname-operator/kustomization.yaml
echo "    newName: ghcr.io/ovrclk/akash" >> ./akash-hostname-operator/kustomization.yaml
echo "    newTag: $VERSION_NO_V" >> ./akash-hostname-operator/kustomization.yaml
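# (optional sanity check) preview the rendered image tag before applying
kubectl kustomize ./akash-hostname-operator/ | grep image: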
kubectl kustomize ./akash-hostname-operator/ | kubectl apply -f - 
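
# (optional sanity check) the hostname operator pod should be running
# in akash-services before we clean up
kubectl -n akash-services get pods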

# and finally, clean up
rm -rf "/tmp/akash_cluster_resources"
