Jasmin shahrzad

asked on

login to container from remote server

I have an LXD container. I created a static IP for the Ubuntu 18 container. On the host I can run lxc exec "container_name" bash, and then I'm logged in as root.
How can I log in to the container from a remote server (not the container host)? The container has an IP in the same subnet as my domain.
I opened port 22 in the container's sshd_config and set PasswordAuthentication to yes.
I can ping the container IP from the remote server, but ssh to container_ip fails with "port 22: No route to host".
On the container host I opened the firewall like this: iptables -t nat -A PREROUTING -d host_ip -p tcp --dport 2222 -j DNAT --to container_ip:22
It's not working.
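A PREROUTING DNAT rule like the one above is usually not enough on its own: the host also needs IP forwarding enabled and a FORWARD rule accepting the redirected traffic. A minimal sketch, using the same placeholders:

# enable routing of forwarded packets on the host
sysctl -w net.ipv4.ip_forward=1
# allow the DNATed SSH traffic through the FORWARD chain
iptables -A FORWARD -p tcp -d container_ip --dport 22 -j ACCEPT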
David Favor

LXC (old) requires complex iptables rules.

LXD (new) takes a different approach.

1) Inside the container, create /etc/netplan/60-public-init.yaml containing something like this...

network:
    version: 2
    ethernets:
        eth0:
            addresses:
                - 66.70.203.96/32



2) Restart your container.
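For example, from the host (container_name is the placeholder from the question):

lxc restart container_name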

3) On the host machine, create the route, something like this.

ip -4 route add 144.217.33.224/27 dev lxdbr0

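You can confirm the route is in place with:

ip -4 route show dev lxdbr0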


4) Replace the example IP and route above with values from your own network setup.

At this point everything should be working.

5) There's one caveat.

When LXD updates via SNAP, routes are lost. The... hook provided by LXD (which requires adding the route add command to every container) fails to correctly revive lost routes 100% of the time, so I normally just run a script that loops with a 1-second sleep, checks for the route, and if it has disappeared, issues the route add command again.
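A minimal sketch of such a watchdog, assuming the route and device from step 3 (substitute your own values):

#!/bin/sh
# Re-add the container route whenever a SNAP refresh drops it.
ROUTE="144.217.33.224/27"   # the route from step 3
DEV="lxdbr0"                # your LXD bridge device
while true; do
    # if the route has disappeared, issue the route add command again
    if ! ip -4 route show "$ROUTE" dev "$DEV" | grep -q .; then
        ip -4 route add "$ROUTE" dev "$DEV"
    fi
    sleep 1
done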

Summary: All the LXC iptables cruft of yesteryear is thankfully gone now.
Jasmin shahrzad

ASKER

Thanks. I need to understand how port forwarding works. Assume I have a host at 10.1.1.12 and an LXD container at 10.1.1.23,
and I want to forward port 8090. How do I do that?
If you're trying to use port forwarding, you can't really use iptables, as iptables doesn't really understand high-level protocols like HTTP, which I'm guessing you're using, given your port number of 8090.

To easily port forward any protocol, use HAProxy, rather than iptables.
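A minimal sketch of such a forward for the 8090 example above, as an /etc/haproxy/haproxy.cfg fragment (the container IP comes from the question; timeouts are illustrative):

defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# accept connections on the host's port 8090
frontend fe_8090
    bind *:8090
    default_backend be_container

# hand them to the container
backend be_container
    server lxd1 10.1.1.23:8090 check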
Aside: Using any type of port or protocol forwarding will at minimum cut your throughput by 50%, sometimes much more.

If you can live with this, then use port/protocol forwarding.

If drastic throughput cuts are unacceptable, the simple solution is...

Use OVH as your provisioner, which provides IP ranges for a $3/address one-time setup fee + no monthly charge.

You'll save massive setup + management time using real IPs, rather than proxy schemes... so your big consideration is...

What's cheaper: the one-time $3/IP, or the hours you'll invest setting up + maintaining HAProxy to keep it working?
ASKER CERTIFIED SOLUTION
noci

Thanks noci. It's working now. I joined my containers to my domain, and users can log in to the application containers with their domain password.
I gave developers sudoers rights via visudo and added the developers to that file, but when I run e.g. sudo apt-get update I get the following error:
sudo: no tty present and no askpass program specified.
How do I fix this?
As an alternative we can run sudo -S "command", but people are confused about where to use -S and where not.
Sounds like you've resolved your routing problem.

Be sure to provide an update saying how you resolved your routing problem, to assist people who read this question in the future.

Routing has no effect on the sudo/tty problem.

Best to open a new question for your sudo/tty problem, as this is a completely different topic + debugging it may take a while.
OK, I will.
What I did was copy the default profile to my own profile, and in my profile set macvlan instead of bridged, with eth0 (my NIC name) as the parent (commands sketched at the end of this post).
Then I renamed 50-cloud-init.yaml to .old and created a 01-netcfg.yaml like this:
network:
    version: 2
    renderer: networkd
    ethernets:
        eth0:
            dhcp4: no
            addresses: [my-ip/24]
            gateway4: my_gateway
            nameservers:
                addresses: [my_dns, mydns2, ...]
                search: [my_domain.local]

In the container, in /etc/ssh/sshd_config, I removed the # in front of Port 22 and set PasswordAuthentication to yes (the default is no).
I restarted the ssh and sshd services, then added a user in the container (just for testing), and from another server ran ssh my_user@ip.
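The profile changes described above can be done with commands roughly like these (profile and device names are placeholders, and the syntax assumes LXD 3.x):

lxc profile copy default myprofile
# switch the profile's NIC from bridged to macvlan, parented to the host NIC
lxc profile device set myprofile eth0 nictype macvlan
lxc profile device set myprofile eth0 parent eth0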
1) Never, ever, ever touch 50-cloud-init.yaml, as the system owns this file; if you touch it, you'll likely break all the 10.X.X.X internal packet flow.

Also, this file is owned by both cloud-init + the packaging system + will certainly be randomly overwritten, so any contents will eventually be lost.

So restore this file back to...

# This file is generated from information provided by
# the datasource.  Changes to it will not persist across an instance.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    version: 2
    ethernets:
        eth0:
            dhcp4: true



2) Create another file for your config. I normally use 60-public-init.yaml or something similar, which contains...

network:
    version: 2
    ethernets:
        eth0:
            addresses:
                - 167.114.29.137/32

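After creating the file, apply it inside the container with:

netplan apply

(or simply restart the container).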


3) Since you've changed the default LXD networking from bridge to macvlan, I'm unsure what this means.

There's no real point to doing this any more as the bridge system now works as fast as macvlan.

If you must change the default behavior of Ubuntu + LXD to use macvlan, likely best visit https://lists.linuxcontainers.org/listinfo + post your question to the users list.

Tip: LXD networking works 100% out of the box, so long as you don't change anything.

If you make changes, like using macvlan, be sure to verify after every SNAP LXD update + every APT package update that your container networking is still working, as minor updates can crash systems where networking has changed...

This is another reason I use out-of-the-box LXD networking these days.

4) Check: You've used a /24 mask, which tells this container to talk to all 254 IPs on the related Class C network... which is likely wrong.

If you do this, then your packet flow to all containers + possibly your machine too... will be... super funky... meaning there's no way to tell what's going on.

Use a mask /32 if you're using a single IP or the correct mask if your container actually answers for multiple IPs.
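For example, with the placeholders from your config above (illustrative):

addresses: [my-ip/32]    # a single IP: only this address
# not: addresses: [my-ip/24], which claims the whole 254-host network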