[Lxc-users] Internal IP address not always assigned
Mertz, Jean
jean at mertz.fm
Wed Apr 17 12:45:58 UTC 2013
Thank you for your detailed explanation. I've finally found out what the
problem was: my iptables rules were too strict for the container to
retrieve an IP address. I don't know which rule causes this or how to
solve it, but for completeness' sake, here are my iptables rules:
root at ip-10-33-165-95:~# iptables -L
Chain INPUT (policy DROP)
target prot opt source destination
system all -- anywhere anywhere
ssh all -- anywhere anywhere
http all -- anywhere anywhere
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain http (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:5000
ACCEPT tcp -- anywhere anywhere tcp dpt:http-alt
ACCEPT tcp -- anywhere anywhere tcp dpt:http
ACCEPT tcp -- anywhere anywhere tcp dpt:https
Chain ssh (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp dpt:ssh
Chain system (1 references)
target prot opt source destination
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
Any idea how I can get an IP assigned to the container without having to
remove all my iptables rules?
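One likely culprit, given the INPUT chain's DROP policy, is that the
containers' DHCP requests (and DNS lookups) to the host's dnsmasq are
dropped before they ever reach it. A sketch of rules that might let that
traffic through, assuming the default lxcbr0 bridge name:

```shell
# Accept DHCP and DNS traffic arriving from the containers on the
# lxcbr0 bridge, where the host's dnsmasq listens (bridge name is an
# assumption; verify with `brctl show` or `ifconfig`).
iptables -I INPUT -i lxcbr0 -p udp --dport 67 -j ACCEPT   # DHCP (bootps)
iptables -I INPUT -i lxcbr0 -p udp --dport 53 -j ACCEPT   # DNS (udp)
iptables -I INPUT -i lxcbr0 -p tcp --dport 53 -j ACCEPT   # DNS (tcp)
```

Because `-I` inserts at the top of INPUT, the existing system/ssh/http
chains would stay untouched.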
On Wed, Apr 17, 2013 at 7:13 AM, Masood Mortazavi <masoodmortazavi at gmail.com
> wrote:
> It is not at all clear what exactly you are trying to do.
>
> What are you trying to connect to what, exactly?
>
> The lxcbr0 bridge needs to be on some network interface. What interface is
> that?
>
> Along with the information provided below, one would need to analyze your
> ifconfig output along with your container's config file.
>
> Even if you don't feel comfortable providing these on a public forum, it
> may be beneficial to study them closely.
>
> Finally, it is highly likely that there is only a single network interface
> usable by the EC2 machine you have.
>
> Since it looks like you don't have budgeted IP addresses available to
> you, my guess is that you have a Xen-based MAC address assigned to your
> container.
>
> What all this means is that if your EC2 instances happen to be provisioned
> on the same hardware by Amazon (something they will most likely do),
> bridging will work across containers on co-resident instances; when they
> are not co-resident, it won't.
>
> Since you cannot determine or force the residency of your EC2 instances,
> the only remaining choice seems to be to purchase the fixed-IP service
> Amazon offers for the relevant containers.
>
> The other choice -- my guess is -- is to keep trying and churning until
> they become co-resident. (I am guessing all this based on a black-box
> architectural view.)
>
>
> On Monday, April 15, 2013, Jean Mertz wrote:
>
>> Ben,
>>
>> Thank you for your input. However, this does not work for me.
>>
>> $ host worker 10.0.3.1
>> Using domain server:
>> Name: 10.0.3.1
>> Address: 10.0.3.1#53
>> Aliases:
>>
>> Host worker not found: 3(NXDOMAIN)
>>
>>
>> I did notice this error when starting the container non-daemonized:
>>
>> <30>udevd[136]: starting version 175
>> error: unexpectedly disconnected from boot status daemon
>>
>> --
>> Jean Mertz
>>
>> Op maandag 15 april 2013, om 23:00 heeft Ben Butler-Cole het volgende
>> geschreven:
>>
>> Hi Jean
>>
>> You should be able to get the container's IP address from
>>
>> host worker 10.0.3.1
>>
>> I use this on 12.04 and I believe that it should work for other versions.
>>
>> -Ben
>>
>>
>>
>> On 15 April 2013 20:56, Mertz, Jean <jean at mertz.fm> wrote:
>>
>> Hello,
>>
>> I've been trying to set up an EC2-hosted network of LXC containers to use
>> with our company's Jenkins CI infrastructure. I've been successful at
>> creating and running LXC containers, but it appears that IP address
>> assignment behaves erratically.
>>
>> I tested this on *EC2 Ubuntu 12.04, 12.10 and 13.04*. All three gave
>> roughly the same results: IP addresses aren't always assigned to the
>> containers, but they do work sometimes, so the setup seems correct.
>>
>> Here are the steps I tried:
>>
>> - Boot up EC2 instance
>> - sudo -i
>> - apt-get update
>> - apt-get upgrade
>> - apt-get install lxc
>> - Create a container (I tried several ways):
>> - lxc-create -n worker -t ubuntu
>> - lxc-create -n worker -t ubuntu-cloud
>> - lxc-create -n worker -t ubuntu-cloud -- -C
>> - lxc-start -n worker -d
>>
>> After this, I've always managed to get into the worker instance using
>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.linuxcontainers.org/pipermail/lxc-users/attachments/20130417/bd76c017/attachment.html>