This is a follow-up question to this post. We're having problems accessing a MapR filesystem on a remote network with the MapR client. It looks like the client can connect to the CLDB on port 7222, but the subsequent connection to the fileserver service on port 5660 fails, because the client is given the server's internal IP.
This is similar to a standard Hadoop distribution, where one can connect to the namenode, but the namenode hands out the internal IPs of the datanodes.
Is there a way to configure MapR so that it advertises the external IPs for the port 5660 services?
Best regards, Johannes
asked 05 Dec '11, 09:37
On your client machine you will have to instruct the MapR client to use only the internal IPs exposed by the filesystem. You can do this by exporting the environment variable MAPR_SUBNETS; you can add it to your .bashrc or /etc/profile.
The format is CIDR-style subnet notation (a.b.c.d/shift), and you can specify a comma-separated list of up to 4 subnets.
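For example, assuming two hypothetical private subnets 10.10.1.0/24 and 10.10.2.0/24 for the cluster (substitute your own), the variable would be set like this:

```shell
# Restrict the MapR client to these subnets (example values).
# Put this line in ~/.bashrc or /etc/profile to make it permanent.
export MAPR_SUBNETS=10.10.1.0/24,10.10.2.0/24
```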
Let us know if that helps you.
answered 05 Dec '11, 09:45
I wonder how this would work. To describe my problem more precisely: the client is on my laptop and the MapR cluster is on EC2, so my laptop doesn't know the internal IPs or subnets of the EC2 instances.
The same problem exists with plain Hadoop. There I could work around it with a custom SocketFactory that translates internal IPs into external ones (I retrieve the mapping from the Amazon web service API).
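A minimal sketch of that workaround, with hypothetical IP values: every outgoing connection to a known cluster-internal address is rewritten to the matching public address before the socket is opened. In practice the map would be filled from the EC2 DescribeInstances API rather than hard-coded.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.util.HashMap;
import java.util.Map;
import javax.net.SocketFactory;

public class RemappingSocketFactory extends SocketFactory {

    // internal EC2 IP -> public IP; example values only.
    private static final Map<String, String> IP_MAP = new HashMap<String, String>();
    static {
        IP_MAP.put("10.0.0.12", "54.210.8.99");
    }

    // Return the external address for a known internal one,
    // or the input unchanged if no mapping exists.
    static String translate(String host) {
        String external = IP_MAP.get(host);
        return external != null ? external : host;
    }

    @Override
    public Socket createSocket(String host, int port) throws IOException {
        return new Socket(translate(host), port);
    }

    @Override
    public Socket createSocket(String host, int port,
                               InetAddress localHost, int localPort) throws IOException {
        return new Socket(translate(host), port, localHost, localPort);
    }

    @Override
    public Socket createSocket(InetAddress host, int port) throws IOException {
        return new Socket(translate(host.getHostAddress()), port);
    }

    @Override
    public Socket createSocket(InetAddress address, int port,
                               InetAddress localAddress, int localPort) throws IOException {
        return new Socket(translate(address.getHostAddress()), port, localAddress, localPort);
    }
}
```

In plain Hadoop such a factory can be registered via the hadoop.rpc.socket.factory.class.default property in core-site.xml; the open question is whether the MapR client offers an equivalent hook.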
Any ideas? Johannes
answered 10 Jan '12, 02:53