Hi,
I have a distributed two-node cluster with node names like:
server1:
code1@<aws internal ip 1>
server2:
code2@<aws internal ip 2>
Then I git cloned this project on server1 and started the app; everything is running. I started it with ./_build/default/rel/observerweb/bin/observerweb console and opened http://127.0.0.1:8080
I can see the interface, but my session is named observerweb@127.0.0.1. If I want to connect to code1@<aws internal ip 1>, I guess my shell would have to be named observerweb@<aws internal ip 1>, right?
I keep getting Connection failed.
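In case it is useful, this is roughly the kind of naming/cookie change I mean (just a sketch; I'm assuming a standard rebar3 release where this lives in vm.args, and the cookie value is a placeholder):

    # vm.args of the observerweb release (exact path depends on the release layout)
    -name observerweb@<aws internal ip 1>
    -setcookie <same cookie as on code1 and code2>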
I managed to connect after prior changes to vm.config. But now I am getting errors because I access the interface from outside of AWS: my node is registered with the internal IP, while I reach the web interface through the external (elastic) IP.
The error looks something like this:
=ERROR REPORT==== 12-Mar-2018::14:27:01 ===
Ranch listener http had connection process started with cowboy_protocol:start_link/4 at <0.1344.0> exit with reason: {[{reason,{badrpc,nodedown}},{mfa,{observerweb_handler,handle,2}},{stacktrace,[{observerweb,try_rpc,4,[{file,"/opt/observer_web/observerweb/_build/default/lib/observerweb/src/observerweb.erl"},{line,30}]},{
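For reference, this is the kind of sanity check I have been running from the observerweb console (the node name is the same placeholder as above):

    %% Inside the observerweb console (Erlang shell):
    node().                                    %% currently 'observerweb@127.0.0.1'
    erlang:get_cookie().                       %% must match the cookie on code1/code2
    net_adm:ping('code1@<aws internal ip 1>'). %% pong = reachable, pang = connection failed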
Can you suggest how to use this with AWS EC2, where instances have both internal and external IPs?
Best,
Tomaz