I just spent a number of hours trying to deploy an app and wanted to share some issues that are probably already known:
- The app is peer-to-peer and wants to tell peers the IP:PORT it is running on, but from inside the container there is no way to discover the externally mapped ephemeral port (anything other than the port 80 mapping).
- I tried to leverage the port 80 mapping, but then the container's external IP and the port 80 proxy IP no longer match.
- I tried abusing the port 80 mapping, but since I need 2 ports to publish to peers, I was always one short. If only there were a magical way for the container to ask about itself/exports/port mappings via a metadata filesystem or HTTP address, like the AWS EC2 metadata service…
- I guess we need smarter live-reconfig container software, but that will take some time.
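To make the missing piece concrete, here is a minimal sketch of what the app side could look like if such self-discovery existed. The `AKASH_EXTERNAL_HOST` / `AKASH_EXTERNAL_PORT` environment variables are purely hypothetical; nothing like them is injected today, which is exactly the gap described above.

```python
import os

def external_endpoint(default_port: int) -> str:
    """Return the host:port peers should dial.

    AKASH_EXTERNAL_HOST / AKASH_EXTERNAL_PORT are *hypothetical*
    variables used only for this sketch; if the platform injected
    them, a p2p app could advertise its real external address.
    """
    host = os.environ.get("AKASH_EXTERNAL_HOST", "0.0.0.0")
    port = os.environ.get("AKASH_EXTERNAL_PORT", str(default_port))
    return f"{host}:{port}"

# Without the hypothetical metadata, the app can only guess:
print(external_endpoint(26656))
```

The same information could equally come from a metadata HTTP endpoint or a mounted file; the environment-variable form is just the simplest to sketch.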
I didn't find a way to read the docker console logs (docker logs … container), but it sure would be good to know whether my container ever booted.
I assume that if I run SSL, it will be on a random ephemeral port, since only port 80 is mapped. I also assume that shoving TLS onto port 80 won't work well. That makes it hard to use Cloudflare or other SSL terminators that don't have a port-mapping feature.
While it's true that I can have Cloudflare SSL-terminate the proxied port 80, it's less than ideal to leave the Cloudflare->Akash leg in full plain text.
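For what it's worth, TLS itself is port-agnostic: a server just wraps whatever socket it happens to listen on. The sketch below (certificate loading elided, since generating one is out of scope) shows an app binding port 0 so the OS picks an ephemeral port, which mirrors the situation where only port 80 has a fixed mapping.

```python
import socket
import ssl

# TLS does not care which port it runs on: the server simply wraps
# whatever socket it listens on.  Binding port 0 lets the OS pick an
# ephemeral port, as happens when only port 80 has a fixed mapping.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("0.0.0.0", 0))
sock.listen(1)
ephemeral_port = sock.getsockname()[1]

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
# ctx.load_cert_chain("server.crt", "server.key")  # cert files elided

print(f"TLS would be served on ephemeral port {ephemeral_port}")
sock.close()
```

The catch is exactly the one above: the app can bind the port, but peers and terminators outside still need to learn which external port it mapped to.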
I found it strange to create a deployment, pay for a lease, and then have to send-manifest the same YML file to the provider. If I have secrets in deploy.yml, am I blindly sending sensitive parts to strangers? It might be better to have a separate deploy-resource-request.yml and a provider-runtime-manifest.yml.
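Roughly what I mean by the split, as a sketch only: the file names and every field below are hypothetical and not Akash's actual SDL schema.

```yaml
# deploy-resource-request.yml -- goes on chain for bidding, no secrets (hypothetical)
resources:
  cpu: 0.5
  memory: 512Mi
  storage: 1Gi

# provider-runtime-manifest.yml -- sent only to the chosen provider (hypothetical)
services:
  app:
    image: myorg/p2p-app:latest
    env:
      - API_SECRET=...   # secret never appears in the on-chain request
```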
Minor thing, but various commands reply with JSON or YML without much of a pattern (maybe blockchain vs. provider? no idea). It would be cool to eventually standardize on either JSON or YML, and later support both (the AWS API allows a preference specification). I'm just thinking this is a small barrier to entry for folks who will already struggle to dig key/values out of layers of nested objects.
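The "digging through nested objects" part is the kind of thing everyone ends up writing a helper for. A minimal sketch; the example payload below is made up and not an actual response shape.

```python
import json

def dig(obj, path):
    """Walk a nested dict/list with a dotted path like 'bids.0.price.amount'."""
    for key in path.split("."):
        if isinstance(obj, list):
            obj = obj[int(key)]  # numeric segments index into lists
        else:
            obj = obj[key]       # everything else is a dict key
    return obj

# Fabricated payload, just to show the shape of the problem:
response = json.loads('{"bids": [{"price": {"amount": "5", "denom": "uakt"}}]}')
print(dig(response, "bids.0.price.amount"))  # -> 5
```

If responses were consistently one format (or the CLI took a format preference), a helper like this would be all the client-side tooling most people need.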
I'm ashamed to say that, even though I've worked with JSON/YML for years, by the time I wrote a program to auto-parse the responses to get to the next step, the system-provided bids had closed and I failed to make a lease. Then I had to figure out why my 5 AKT were no longer with me (newbie issue), and how in the world to get my 5 AKT back (I thought I had lost the 5 AKT due to the late lease). All is well, but I'm just sharing my confusion.

If bid windows stayed open longer for fools like me, I could have learned how to shut down my deploy the next day. Which reminds me: why must I close my deployment just to get new bids? Wouldn't it be nicer to just request new bids? Why tell me the bids are closed when clearly I'm asking for bids in order to make a leasing decision? It would be better to provide a historical bid API for when I want to know why I failed. I'm sure there's good reason for things to be the way they are right now; I just completely missed it on this first pass.
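For anyone automating this, the race against the bid window boils down to polling with a deadline. A sketch of that loop; `fetch_bids` is a stand-in for whatever CLI or API query actually lists open bids, not a real function.

```python
import time

def wait_for_bids(fetch_bids, deadline_s=60.0, poll_s=2.0):
    """Poll fetch_bids() until it returns a non-empty list of bids,
    or raise once the deadline passes.

    fetch_bids is a caller-supplied callable (hypothetical here);
    in practice it would shell out to the CLI's bid query.
    """
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        bids = fetch_bids()
        if bids:
            return bids
        time.sleep(poll_s)
    raise TimeoutError("no open bids before the deadline; the window may have closed")

```

A longer (or reopenable) bid window would make the deadline branch far less likely for first-timers.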
Anyways, I just wanted to share my experience so that devs/team know where I stumbled/know what might be important to others.
I'm sure I never would have noticed any of this if I were just deploying port 80 HTTP sites… which are, admittedly, getting very rare these days.
Not an urgent request to fix/change anything.