LXC.CONTAINER.CONF(5)					 LXC.CONTAINER.CONF(5)

NAME
       lxc.container.conf - LXC container configuration file

DESCRIPTION
       Linux containers (lxc) are always created before being used. This
       creation defines a set of system resources to be virtualized or
       isolated when a process uses the container. By default, the pids,
       sysv ipc and mount points are virtualized and isolated. The other
       system resources are shared across containers until they are
       explicitly defined in the configuration file. For example, if no
       network configuration is specified, the network will be shared
       between the creator of the container and the container itself; but
       if the network is configured, a new network stack is created for
       the container and the container can no longer use the network of
       its ancestor.

       The configuration file defines the different system resources to be
       assigned to the container. At present, the utsname, the network,
       the mount points, the root file system, the user namespace, and the
       control groups are supported.

       Each option in the configuration file has the form key = value,
       fitting on a single line. The '#' character at the beginning of a
       line marks it as a comment.

   CONFIGURATION
       In order to ease administration of multiple related containers,	it  is
       possible	 to  have a container configuration file cause another file to
       be loaded. For instance, network configuration can be  defined  in  one
       common file which is included by multiple containers. Then, if the con‐
       tainers are moved to another  host,  only  one  file  may  need	to  be
       updated.

       lxc.include
	      Specify  the  file  to be included. The included file must be in
	      the same valid lxc configuration file format.
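
	      For instance, a network definition shared by several
	      containers can be pulled in like this (the path below is
	      purely illustrative):

		   # /etc/lxc/common-network.conf is an illustrative path
		   lxc.include = /etc/lxc/common-network.conf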

   ARCHITECTURE
       Allows one to set the architecture for the container. For example,
       set a 32-bit architecture for a container running 32-bit binaries
       on a 64-bit host. This helps container scripts which rely on the
       architecture to do some work, such as downloading packages.

       lxc.arch
	      Specify the architecture for the container.

	      Valid options are x86, i686, x86_64 and amd64.
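
	      For example, to run 32-bit binaries in a container on a
	      64-bit host:

		   lxc.arch = x86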

   HOSTNAME
       The  utsname  section defines the hostname to be set for the container.
       That means the container can set its own hostname without changing  the
       one from the system. That makes the hostname private for the container.

       lxc.utsname
	      specify the hostname for the container

   HALT SIGNAL
       Allows one to specify the signal name or number sent by lxc-stop to
       the container's init process to cleanly shut down the container.
       Different init systems may use different signals for their clean
       shutdown sequence. This option allows the signal to be specified in
       kill(1) fashion, e.g. SIGPWR, SIGRTMIN+14, SIGRTMAX-10 or a plain
       number. The default signal is SIGPWR.

       lxc.haltsignal
	      specify the signal used to halt the container
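
	      For example, a systemd-based container is commonly halted
	      with SIGRTMIN+3 rather than the default SIGPWR (whether this
	      is needed depends on the guest's init system):

		   # assumes the container's init is systemd
		   lxc.haltsignal = SIGRTMIN+3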

   STOP SIGNAL
       Allows one to specify the signal name or number sent by lxc-stop to
       forcibly shut down the container. This option allows the signal to
       be specified in kill(1) fashion, e.g. SIGKILL, SIGRTMIN+14,
       SIGRTMAX-10 or a plain number. The default signal is SIGKILL.

       lxc.stopsignal
	      specify the signal used to stop the container

   NETWORK
       The  network section defines how the network is virtualized in the con‐
       tainer. The network virtualization acts at layer two. In order  to  use
       the  network virtualization, parameters must be specified to define the
       network interfaces of the container. Several virtual interfaces can  be
       assigned and used in a container even if the system has only one physi‐
       cal network interface.

       lxc.network.type
	      specify what kind of network virtualization to be used  for  the
	      container.  Each	time  a	 lxc.network.type field is found a new
	      round of network configuration begins. In this way, several net‐
	      work  virtualization  types  can	be specified for the same con‐
	      tainer, as well as assigning several network interfaces for  one
	      container. The different virtualization types can be:

	      none:  will  cause  the  container  to  share the host's network
	      namespace. This means the host network devices are usable in the
	      container.  It  also  means  that if both the container and host
	      have upstart as init, 'halt' in a container (for instance)  will
	      shut down the host.

	      empty: will create only the loopback interface.

	      veth: a peer network device is created with one side assigned
	      to the container and the other side attached to a bridge
	      specified by lxc.network.link. If the bridge is not
	      specified, the veth pair device will be created but not
	      attached to any bridge. Otherwise, the bridge has to be set
	      up on the system beforehand; lxc won't handle any
	      configuration outside of the container. By default lxc
	      chooses a name for the network device belonging to the
	      outside of the container. This name is handled by lxc, but if
	      you wish to handle it yourself, you can tell lxc to set a
	      specific name with the lxc.network.veth.pair option (see the
	      sketch after this list).

	      vlan: a vlan interface is linked with the interface specified
	      by lxc.network.link and assigned to the container. The vlan
	      identifier is specified with the option lxc.network.vlan.id
	      (see the sketch after this list).

	      macvlan: a macvlan interface is linked with the interface
	      specified by lxc.network.link and assigned to the container.
	      lxc.network.macvlan.mode specifies the mode the macvlan will
	      use to communicate between different macvlan interfaces on
	      the same upper device. The accepted modes are private, vepa
	      and bridge. In private mode (the default), the device never
	      communicates with any other device on the same upper_dev. In
	      vepa mode, the new Virtual Ethernet Port Aggregator (VEPA)
	      mode, it is assumed that the adjacent bridge returns all
	      frames where both source and destination are local to the
	      macvlan port, i.e. the bridge is set up as a reflective
	      relay. Broadcast frames coming in from the upper_dev get
	      flooded to all macvlan interfaces in VEPA mode; local frames
	      are not delivered locally. In bridge mode, the macvlan
	      provides the behavior of a simple bridge between different
	      macvlan interfaces on the same port. Frames from one
	      interface to another get delivered directly and are not sent
	      out externally. Broadcast frames get flooded to all other
	      bridge ports and to the external interface, but when they
	      come back from a reflective relay, they are not delivered
	      again. Since all the MAC addresses are known, the macvlan
	      bridge mode does not require learning or STP like the bridge
	      module does.

	      phys: an already existing interface specified by
	      lxc.network.link is assigned to the container.
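
	      As a sketch of the veth and vlan types described above (the
	      bridge br0, the pair name and the vlan id are illustrative
	      values, not defaults):

		   # veth plugged into a pre-existing bridge
		   lxc.network.type = veth
		   lxc.network.link = br0
		   lxc.network.veth.pair = veth-web0
		   lxc.network.flags = up

		   # vlan 100 on top of the host's eth0
		   lxc.network.type = vlan
		   lxc.network.link = eth0
		   lxc.network.vlan.id = 100
		   lxc.network.flags = up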

       lxc.network.flags
	      specify an action to do for the network.

	      up: activates the interface.

       lxc.network.link
	      specify the interface to be used for real network traffic.

       lxc.network.mtu
	      specify the maximum transfer unit for this interface.

       lxc.network.name
	      the interface name is dynamically allocated, but if another name
	      is needed because the configuration files being used by the con‐
	      tainer use a generic name, eg. eth0, this option will rename the
	      interface in the container.

       lxc.network.hwaddr
	      the interface mac address is dynamically allocated by default
	      to the virtual interface, but in some cases setting it
	      explicitly is needed to resolve a mac address conflict or to
	      always have the same link-local ipv6 address. Any "x" in the
	      address will be replaced by a random value; this allows
	      setting hwaddr templates.
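
	      For example, a template that keeps a fixed prefix and
	      randomizes the rest (00:16:3e is a conventional prefix, not a
	      requirement):

		   # each "x" is replaced by a random value at start
		   lxc.network.hwaddr = 00:16:3e:xx:xx:xx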

       lxc.network.ipv4
	      specify the ipv4 address to assign to the virtualized interface.
	      Several lines specify several ipv4 addresses.  The address is in
	      format x.y.z.t/m, eg. 192.168.1.123/24.  The  broadcast  address
	      should  be  specified  on	 the  same  line, right after the ipv4
	      address.

       lxc.network.ipv4.gateway
	      specify the ipv4 address to use as the gateway inside  the  con‐
	      tainer.  The  address  is in format x.y.z.t, eg.	192.168.1.123.
	      Can also have the special value auto, which means	 to  take  the
	      primary  address	from the bridge interface (as specified by the
	      lxc.network.link option) and use that as the  gateway.  auto  is
	      only available when using the veth and macvlan network types.
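
	      For example, a static address with its broadcast address on
	      the same line, and the gateway taken from the bridge:

		   lxc.network.ipv4 = 192.168.1.123/24 192.168.1.255
		   lxc.network.ipv4.gateway = auto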

       lxc.network.ipv6
	      specify the ipv6 address to assign to the virtualized interface.
	      Several lines specify several ipv6 addresses.  The address is in
	      format x::y/m, eg. 2003:db8:1:0:214:1234:fe0b:3596/64

       lxc.network.ipv6.gateway
	      specify  the  ipv6 address to use as the gateway inside the con‐
	      tainer. The address is in format x::y, eg. 2003:db8:1:0::1. Can
	      also  have  the special value auto, which means to take the pri‐
	      mary address from the bridge  interface  (as  specified  by  the
	      lxc.network.link	option)	 and  use that as the gateway. auto is
	      only available when using the veth and macvlan network types.

       lxc.network.script.up
	      add a configuration option to specify a script to be executed
	      after creating and configuring the network used from the host
	      side. The following arguments are passed to the script: the
	      container name and the config section name (net). Additional
	      arguments depend on the config section employing a script
	      hook; the network system passes the execution context (up),
	      the network type (empty/veth/macvlan/phys) and, for the veth,
	      macvlan and phys types, the (host-side) device name.

	      Standard output from the script is logged at debug level.	 Stan‐
	      dard  error is not logged, but can be captured by the hook redi‐
	      recting its standard error to standard output.

       lxc.network.script.down
	      add a configuration option to specify a script to be executed
	      before destroying the network used from the host side. The
	      following arguments are passed to the script: the container
	      name and the config section name (net). Additional arguments
	      depend on the config section employing a script hook; the
	      network system passes the execution context (down), the
	      network type (empty/veth/macvlan/phys) and, for the veth,
	      macvlan and phys types, the (host-side) device name.

	      Standard output from the script is logged at debug level.	 Stan‐
	      dard  error is not logged, but can be captured by the hook redi‐
	      recting its standard error to standard output.

   NEW PSEUDO TTY INSTANCE (DEVPTS)
       For stricter isolation the container can have its own private  instance
       of the pseudo tty.

       lxc.pts
	      If  set, the container will have a new pseudo tty instance, mak‐
	      ing this private to it. The value specifies the  maximum	number
	      of  pseudo  ttys	allowed for a pts instance (this limitation is
	      not implemented yet).
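
	      A minimal sketch (the limit value is arbitrary):

		   lxc.pts = 1024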

   CONTAINER SYSTEM CONSOLE
       If the container is configured with a root filesystem and  the  inittab
       file  is	 setup	to  use the console, you may want to specify where the
       output of this console goes.

       lxc.console
	      Specify a path to a file where the console output will be
	      written. The keyword 'none' will simply disable the console.
	      This is dangerous: if the rootfs contains a console device
	      file to which the application can write, the messages will
	      end up on the host.
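
	      For example (the log path is an assumption):

		   lxc.console = /var/log/lxc/mycontainer.console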

   CONSOLE THROUGH THE TTYS
       This  option  is	 useful	 if  the  container  is configured with a root
       filesystem and the inittab file is setup to launch a getty on the ttys.
       The  option  specifies  the number of ttys to be available for the con‐
       tainer. The number of gettys in	the  inittab  file  of	the  container
       should not be greater than the number of ttys specified in this option,
       otherwise the excess getty sessions will die and	 respawn  indefinitely
       giving annoying messages on the console or in /var/log/messages.

       lxc.tty
	      Specify the number of ttys to make available to the
	      container.
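
	      For example, to match an inittab that spawns four gettys:

		   lxc.tty = 4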

   CONSOLE DEVICES LOCATION
       LXC  consoles  are provided through Unix98 PTYs created on the host and
       bind-mounted over the expected devices in the container.	  By  default,
       they are bind-mounted over /dev/console and /dev/ttyN. This can prevent
       package upgrades in the guest. Therefore you can specify a
       directory location (under /dev) under which LXC will create the
       files and bind-
       mount over them. These will then be symbolically linked to /dev/console
       and  /dev/ttyN.	 A  package  upgrade can then succeed as it is able to
       remove and replace the symbolic links.

       lxc.devttydir
	      Specify a directory under /dev under which to  create  the  con‐
	      tainer console devices.
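
	      For example (the directory name is an arbitrary choice):

		   # creates the console devices under /dev/lxc and
		   # symlinks /dev/console and /dev/ttyN to them
		   lxc.devttydir = lxc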

   /DEV DIRECTORY
       By  default,  lxc creates a few symbolic links (fd,stdin,stdout,stderr)
       in the container's /dev directory but  does  not	 automatically	create
       device  node  entries. This allows the container's /dev to be set up as
       needed in the container rootfs. If lxc.autodev is set to 1, then	 after
       mounting the container's rootfs LXC will mount a fresh tmpfs under /dev
       (limited to 100k) and fill in a minimal set of initial  devices.	  This
       is  generally required when starting a container containing a "systemd"
       based "init" but may be optional at other times. Additional devices  in
       the container's /dev directory may be created through the use of the
       lxc.hook.autodev hook.

       lxc.autodev
	      Set this to 1 to have LXC mount and populate a minimal /dev when
	      starting the container.

   ENABLE KMSG SYMLINK
       Enable  creating /dev/kmsg as symlink to /dev/console. This defaults to
       1.

       lxc.kmsg
	      Set this to 0 to disable /dev/kmsg symlinking.
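
       For a systemd-based guest these two options are often combined;
       whether your container needs them depends on its init system:

	      lxc.autodev = 1
	      lxc.kmsg = 0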

   MOUNT POINTS
       The mount points section specifies the different places to be  mounted.
       These  mount points will be private to the container and won't be visi‐
       ble by the processes running outside of the container. This  is	useful
       to mount /etc, /var or /home, for example.

       lxc.mount
	      specify  a  file	location  in  the fstab format, containing the
	      mount information. The mount target location  can	 and  in  most
	      cases  should  be a relative path, which will become relative to
	      the mounted container root. For instance,

	      proc proc proc nodev,noexec,nosuid 0 0

	      Will mount  a  proc  filesystem  under  the  container's	/proc,
	      regardless  of  where  the  root	filesystem comes from. This is
	      resilient to block device backed filesystems  as	well  as  con‐
	      tainer cloning.

	      Note that when mounting a filesystem from an image file or block
	      device the third field  (fs_vfstype)  cannot  be	auto  as  with
	      mount(8) but must be explicitly specified.

       lxc.mount.entry
	      specify  a mount point corresponding to a line in the fstab for‐
	      mat.
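
	      For example, a read-only bind mount of a host directory (the
	      paths are illustrative; the target is relative to the
	      container root):

		   lxc.mount.entry = /srv/data srv/data none ro,bind 0 0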

       lxc.mount.auto
	      specify which standard kernel file systems should	 be  automati‐
	      cally mounted. This may dramatically simplify the configuration.
	      The file systems are:

	      · proc:mixed (or proc): mount /proc as read-write,  but  remount
		/proc/sys  and	/proc/sysrq-trigger  read-only	for security /
		container isolation purposes.

	      · proc:rw: mount /proc as read-write

	      · sys:ro (or sys): mount /sys as read-only for security  /  con‐
		tainer isolation purposes.

	      · sys:rw: mount /sys as read-write

	      · cgroup:mixed: mount a tmpfs to /sys/fs/cgroup, create directo‐
		ries for all hierarchies to which the container is added, cre‐
		ate  subdirectories  there  with  the  name of the cgroup, and
		bind-mount the container's own	cgroup	into  that  directory.
		The  container	will be able to write to its own cgroup direc‐
		tory, but not the parents, since they will be remounted	 read-
		only

	      · cgroup:ro:  similar  to	 cgroup:mixed,	but everything will be
		mounted read-only.

	      · cgroup:rw: similar to cgroup:mixed,  but  everything  will  be
		mounted read-write. Note that the paths leading up to the con‐
		tainer's own cgroup will be writable, but will not be a cgroup
		filesystem but just part of the tmpfs of /sys/fs/cgroup

	      · cgroup	(without specifier): defaults to cgroup:rw if the con‐
		tainer retains the CAP_SYS_ADMIN capability, cgroup:mixed oth‐
		erwise.

	      · cgroup-full:mixed:  mount  a  tmpfs  to /sys/fs/cgroup, create
		directories for all hierarchies	 to  which  the	 container  is
		added,	bind-mount  the	 hierarchies from the host to the con‐
		tainer and make everything read-only  except  the  container's
		own  cgroup.  Note  that  compared  to cgroup, where all paths
		leading up to the  container's	own  cgroup  are  just	simple
		directories	 in	the	underlying     tmpfs,	  here
		/sys/fs/cgroup/$hierarchy will contain the host's full	cgroup
		hierarchy,   albeit  read-only	outside	 the  container's  own
		cgroup.	 This may leak quite a bit  of	information  into  the
		container.

	      · cgroup-full:ro:	 similar  to cgroup-full:mixed, but everything
		will be mounted read-only.

	      · cgroup-full:rw: similar to cgroup-full:mixed,  but  everything
		will  be  mounted read-write. Note that in this case, the con‐
		tainer may escape its own cgroup. (Note also that if the  con‐
		tainer	has  CAP_SYS_ADMIN  support  and  can mount the cgroup
		filesystem itself, it may do so anyway.)

	      · cgroup-full (without specifier): defaults to cgroup-full:rw if
		the  container	retains	 the CAP_SYS_ADMIN capability, cgroup-
		full:mixed otherwise.

       Note that if automatic mounting of the cgroup  filesystem  is  enabled,
       the  tmpfs  under /sys/fs/cgroup will always be mounted read-write (but
       for  the	 :mixed	  and	:ro   cases,   the   individual	  hierarchies,
       /sys/fs/cgroup/$hierarchy, will be read-only). This is in order to work
       around a quirk in Ubuntu's mountall(8) command that will cause contain‐
       ers  to	wait for user input at boot if /sys/fs/cgroup is mounted read-
       only and the container can't remount it read-write due  to  a  lack  of
       CAP_SYS_ADMIN.

       Examples:

		  lxc.mount.auto = proc sys cgroup
		  lxc.mount.auto = proc:rw sys:rw cgroup-full:rw

   ROOT FILE SYSTEM
       The root file system of the container can be different from that of the
       host system.

       lxc.rootfs
	      specify the root file system for the container.  It  can	be  an
	      image file, a directory or a block device. If not specified, the
	      container shares its root file system with the host.
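
	      For example (the path assumes the usual lxcpath layout, which
	      may differ on your system):

		   lxc.rootfs = /var/lib/lxc/mycontainer/rootfs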

       lxc.rootfs.mount
	      where to recursively bind lxc.rootfs before pivoting. This is to
	      ensure success of the pivot_root(8) syscall. Any directory
	      suffices; the default should generally work.

       lxc.rootfs.options
	      extra mount options to use when mounting the rootfs.

       lxc.pivotdir
	      where to pivot the original root file system under
	      lxc.rootfs, specified relative to that. The default is mnt.
	      It is created
	      if necessary, and also removed after unmounting everything  from
	      it during container setup.

   CONTROL GROUP
       The control group section contains the configuration for the
       different subsystems. lxc does not check the correctness of the
       subsystem name.
       This  has  the disadvantage of not detecting configuration errors until
       the container is started, but  has  the	advantage  of  permitting  any
       future subsystem.

       lxc.cgroup.[subsystem name]
	      specify the control group value to be set. The subsystem name is
	      the literal name of the control group subsystem. The
	      permitted names and the syntax of their values are not
	      dictated by LXC; instead they depend on the features of the
	      Linux kernel running
	      at the time the container is started, eg. lxc.cgroup.cpuset.cpus
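
	      For example (the keys and values depend on the running
	      kernel; the numbers are arbitrary):

		   lxc.cgroup.cpuset.cpus = 0,1
		   # 268435456 bytes = 256 MiB
		   lxc.cgroup.memory.limit_in_bytes = 268435456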

   CAPABILITIES
       Capabilities can be dropped in the container if it is run as root.

       lxc.cap.drop
	      Specify the capability to be dropped in the container. A	single
	      line  defining  several  capabilities with a space separation is
	      allowed. The format is the lower case of the capability  defini‐
	      tion  without  the  "CAP_"  prefix, eg. CAP_SYS_MODULE should be
	      specified as sys_module. See capabilities(7).

       lxc.cap.keep
	      Specify the capability to be kept in the	container.  All	 other
	      capabilities will be dropped.
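
       For example, either drop a few capabilities or keep only a small
       set (the sets shown are illustrative, not a security
       recommendation):

	      lxc.cap.drop = sys_module mac_admin mac_override
	      # or, alternatively, whitelist what is kept:
	      # lxc.cap.keep = chown kill setgid setuid net_bind_service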

   APPARMOR PROFILE
       If  lxc	was compiled and installed with apparmor support, and the host
       system has apparmor enabled, then the apparmor profile under which  the
       container  should  be  run can be specified in the container configura‐
       tion. The default is lxc-container-default.

       lxc.aa_profile
	      Specify the apparmor profile under which the container should be
	      run. To specify that the container should be unconfined, use

	      lxc.aa_profile = unconfined

   SELINUX CONTEXT
       If  lxc	was  compiled and installed with SELinux support, and the host
       system has SELinux enabled, then the SELinux context  under  which  the
       container  should  be  run can be specified in the container configura‐
       tion. The default is  unconfined_t,  which  means  that	lxc  will  not
       attempt to change contexts.

       lxc.se_context
	      Specify  the SELinux context under which the container should be
	      run or unconfined_t. For example

	      lxc.se_context = unconfined_u:unconfined_r:lxc_t:s0-s0:c0.c1023

   SECCOMP CONFIGURATION
       A container can be started with a reduced set of available system calls
       by loading a seccomp profile at startup. The seccomp configuration file
       must begin with a version number on the first line, a  policy  type  on
       the second line, followed by the configuration.

       Versions 1 and 2 are currently supported. In version 1, the policy is a
       simple whitelist. The second line therefore must read "whitelist", with
       the rest of the file containing one (numeric) syscall number per
       line. Each syscall number is whitelisted, while every unlisted
       number is blacklisted for use in the container.
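
       A version 1 whitelist therefore looks like the following sketch
       (the syscall numbers are architecture-dependent and purely
       illustrative):

       1
       whitelist
       103
       104
       105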

       In  version  2, the policy may be blacklist or whitelist, supports per-
       rule and per-policy default actions, and supports per-architecture sys‐
       tem call resolution from textual names.

       An  example  blacklist  policy,	in  which all system calls are allowed
       except for mknod, which will simply do nothing and return 0  (success),
       looks like:

       2
       blacklist
       mknod errno 0

       lxc.seccomp
	      Specify  a  file	containing  the	 seccomp configuration to load
	      before the container starts.

   UID MAPPINGS
       A container can be started in a private user namespace  with  user  and
       group  id mappings. For instance, you can map userid 0 in the container
       to userid 200000 on the host. The root user in the  container  will  be
       privileged  in  the container, but unprivileged on the host. Normally a
       system container will want a range  of  ids,  so	 you  would  map,  for
       instance,  user	and group ids 0 through 20,000 in the container to the
       ids 200,000 through 220,000.

       lxc.id_map
	      Four values must be provided. First a character, either 'u',  or
	      'g', to specify whether user or group ids are being mapped. Next
	      is the first userid as seen in the user namespace	 of  the  con‐
	      tainer. Next is the userid as seen on the host. Finally, a range
	      indicating the number of consecutive ids to map.
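
	      For example, to map both user and group ids 0 through 65535
	      in the container to the host ids 100000 through 165535:

		   lxc.id_map = u 0 100000 65536
		   lxc.id_map = g 0 100000 65536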

   CONTAINER HOOKS
       Container hooks are programs or scripts which can be executed at	 vari‐
       ous times in a container's lifetime.

       When  a	container hook is executed, information is passed both as com‐
       mand line arguments and through environment variables.	The  arguments
       are:

       · Container name.

       · Section (always 'lxc').

       · The hook type (i.e. 'clone' or 'pre-mount').

       · Additional arguments. In the case of the clone hook, any extra
	 arguments passed to lxc-clone will appear as further arguments to
	 the hook.

       The following environment variables are set:

       · LXC_NAME: is the container's name.

       · LXC_ROOTFS_MOUNT: the path to the mounted root filesystem.

       · LXC_CONFIG_FILE: the path to the container configuration file.

       · LXC_SRC_NAME:	in  the	 case  of the clone hook, this is the original
	 container's name.

       · LXC_ROOTFS_PATH: this is the lxc.rootfs entry for the container. Note
	 this  is  likely  not	where  the  mounted rootfs is to be found, use
	 LXC_ROOTFS_MOUNT for that.

       Standard output from the hooks is  logged  at  debug  level.   Standard
       error  is  not  logged, but can be captured by the hook redirecting its
       standard error to standard output.

       lxc.hook.pre-start
	      A hook to be run in the host's namespace	before	the  container
	      ttys, consoles, or mounts are up.

       lxc.hook.pre-mount
	      A	 hook to be run in the container's fs namespace but before the
	      rootfs has been set up. This  allows  for	 manipulation  of  the
	      rootfs,  i.e.  to	 mount an encrypted filesystem. Mounts done in
	      this hook will not be reflected on the host (apart  from	mounts
	      propagation),  so they will be automatically cleaned up when the
	      container shuts down.

       lxc.hook.mount
	      A hook to be run in the container's namespace after mounting has
	      been done, but before the pivot_root.

       lxc.hook.autodev
	      A hook to be run in the container's namespace after mounting has
	      been done and after any mount hooks have	run,  but  before  the
	      pivot_root, if lxc.autodev == 1.	The purpose of this hook is to
	      assist in populating the /dev directory of  the  container  when
	      using  the autodev option for systemd based containers. The con‐
	      tainer's /dev directory is relative to  the  ${LXC_ROOTFS_MOUNT}
	      environment variable available when the hook is run.

       lxc.hook.start
	      A hook to be run in the container's namespace immediately before
	      executing the container's init. This requires the program to  be
	      available in the container.

       lxc.hook.post-stop
	      A hook to be run in the host's namespace after the container has
	      been shut down.

       lxc.hook.clone
	      A hook to be run when the container is cloned to a new one.  See
	      lxc-clone(1) for more information.
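
       For example, wiring two of the hooks described above to scripts
       (the script paths are assumptions):

	      lxc.hook.pre-start = /var/lib/lxc/mycontainer/pre-start.sh
	      lxc.hook.post-stop = /var/lib/lxc/mycontainer/post-stop.sh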

   CONTAINER HOOKS ENVIRONMENT VARIABLES
       A  number  of  environment  variables are made available to the startup
       hooks to provide configuration information and assist in the  function‐
       ing  of the hooks. Not all variables are valid in all contexts. In par‐
       ticular, all paths are relative to the host system and,	as  such,  not
       valid during the lxc.hook.start hook.

       LXC_NAME
	      The  LXC	name  of the container. Useful for logging messages in
	      common log environments. [-n]

       LXC_CONFIG_FILE
	      Host relative path to the container configuration file. This
	      allows the container to reference the original, top level
	      configuration file for the container in order to locate any
	      additional configuration information not otherwise made
	      available.  [-f]

       LXC_CONSOLE
	      The path to the console output of the  container	if  not	 NULL.
	      [-c] [lxc.console]

       LXC_CONSOLE_LOGPATH
	      The path to the console log output of the container if not NULL.
	      [-L]

       LXC_ROOTFS_MOUNT
	      The mount location to which the container	 is  initially	bound.
	      This  will be the host relative path to the container rootfs for
	      the container instance being started and is where changes should
	      be made for that instance.  [lxc.rootfs.mount]

       LXC_ROOTFS_PATH
	      The  host	 relative  path	 to  the container root which has been
	      mounted to the rootfs.mount location.  [lxc.rootfs]

   LOGGING
       Logging can be configured on a per-container basis. By default, depend‐
       ing  upon how the lxc package was compiled, container startup is logged
       only at the ERROR level, and logged to a file named after the container
       (with  '.log' appended) either under the container path, or under /con‐
       tainer.

       Both the default log level and the log file can	be  specified  in  the
       container  configuration	 file,	overriding  the default behavior. Note
       that the configuration file entries can in turn be  overridden  by  the
       command line options to lxc-start.

       lxc.loglevel
	      The  level  at  which to log. The log level is an integer in the
	      range of 0..8 inclusive, where a lower number means more verbose
	      debugging.  In  particular  0  = trace, 1 = debug, 2 = info, 3 =
	      notice, 4 = warn, 5 = error, 6 = critical, 7 = alert,  and  8  =
	      fatal.  If unspecified, the level defaults to 5 (error), so that
	      only errors and above are logged.

	      Note that when a script (such as either a hook script or a  net‐
	      work  interface up or down script) is called, the script's stan‐
	      dard output is logged at level 1, debug.

       lxc.logfile
	      The file to which logging info should be written.
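
       For example, to capture debug-level output in a per-container file
       (the path is an assumption):

	      lxc.loglevel = 1
	      lxc.logfile = /var/log/lxc/mycontainer.log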

   AUTOSTART
       The autostart options support marking which containers should be	 auto-
       started	and  in	 what  order.  These  options may be used by LXC tools
       directly or by external tooling provided by the distributions.

       lxc.start.auto
	      Whether the container should be auto-started.  Valid values  are
	      0 (off) and 1 (on).

       lxc.start.delay
	      How  long	 to  wait  (in seconds) after the container is started
	      before starting the next one.

       lxc.start.order
	      An integer used to sort  the  containers	when  auto-starting  a
	      series of containers at once.

       lxc.group
	      A	 multi-value  key (can be used multiple times) to put the con‐
	      tainer in a container group.  Those  groups  can	then  be  used
	      (amongst other things) to start a series of related containers.
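
       For example, a container that is auto-started as a member of the
       "onboot" group (the values are illustrative):

	      lxc.start.auto = 1
	      lxc.start.order = 10
	      lxc.start.delay = 5
	      lxc.group = onboot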

   AUTOSTART AND SYSTEM BOOT
       Each  container can be part of any number of groups or no group at all.
       Two groups are special. One is the NULL group, i.e. the container  does
       not belong to any group. The other group is the "onboot" group.

       When  the  system  boots	 with  the  LXC service enabled, it will first
       attempt to boot any containers with lxc.start.auto == 1 that are
       members of the "onboot" group. The startup will be in order of
       lxc.start.order.	 If an lxc.start.delay has been specified, that	 delay
       will  be	 honored before attempting to start the next container to give
       the current container time to begin initialization and reduce overload‐
       ing  the host system. After starting the members of the "onboot" group,
       the LXC system will proceed to boot containers with lxc.start.auto == 1
       which are not members of any group (the NULL group) and proceed as with
       the onboot group.

EXAMPLES
       In addition to the few examples given below, you will find some
       other examples of configuration files in /usr/share/doc/lxc/examples.

   NETWORK
       This  configuration  sets up a container to use a veth pair device with
       one side plugged to a bridge br0 (which has been configured  before  on
       the system by the administrator). The virtual network device visible in
       the container is renamed to eth0.

	    lxc.utsname = myhostname
	    lxc.network.type = veth
	    lxc.network.flags = up
	    lxc.network.link = br0
	    lxc.network.name = eth0
	    lxc.network.hwaddr = 4a:49:43:49:79:bf
	    lxc.network.ipv4 = 10.2.3.5/24 10.2.3.255
	    lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597

   UID/GID MAPPING
       This configuration will map both user and group ids in the range 0-9999
       in the container to the ids 100000-109999 on the host.

	    lxc.id_map = u 0 100000 10000
	    lxc.id_map = g 0 100000 10000

   CONTROL GROUP
       This configuration will set up several control groups for the
       application: cpuset.cpus restricts usage to the defined cpus,
       cpu.shares prioritizes the control group, and devices.allow makes
       the specified devices usable.

	    lxc.cgroup.cpuset.cpus = 0,1
	    lxc.cgroup.cpu.shares = 1234
	    lxc.cgroup.devices.deny = a
	    lxc.cgroup.devices.allow = c 1:3 rw
	    lxc.cgroup.devices.allow = b 8:0 rw

   COMPLEX CONFIGURATION
       This example shows a complex configuration: setting up a complex
       network stack, using the control groups, setting a new hostname,
       mounting some locations, and changing the root file system.

	    lxc.utsname = complex
	    lxc.network.type = veth
	    lxc.network.flags = up
	    lxc.network.link = br0
	    lxc.network.hwaddr = 4a:49:43:49:79:bf
	    lxc.network.ipv4 = 10.2.3.5/24 10.2.3.255
	    lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3597
	    lxc.network.ipv6 = 2003:db8:1:0:214:5432:feab:3588
	    lxc.network.type = macvlan
	    lxc.network.flags = up
	    lxc.network.link = eth0
	    lxc.network.hwaddr = 4a:49:43:49:79:bd
	    lxc.network.ipv4 = 10.2.3.4/24
	    lxc.network.ipv4 = 192.168.10.125/24
	    lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3596
	    lxc.network.type = phys
	    lxc.network.flags = up
	    lxc.network.link = dummy0
	    lxc.network.hwaddr = 4a:49:43:49:79:ff
	    lxc.network.ipv4 = 10.2.3.6/24
	    lxc.network.ipv6 = 2003:db8:1:0:214:1234:fe0b:3297
	    lxc.cgroup.cpuset.cpus = 0,1
	    lxc.cgroup.cpu.shares = 1234
	    lxc.cgroup.devices.deny = a
	    lxc.cgroup.devices.allow = c 1:3 rw
	    lxc.cgroup.devices.allow = b 8:0 rw
	    lxc.mount = /etc/fstab.complex
	    lxc.mount.entry = /lib /root/myrootfs/lib none ro,bind 0 0
	    lxc.rootfs = /mnt/rootfs.complex
	    lxc.cap.drop = sys_module mknod setuid net_raw
	    lxc.cap.drop = mac_override

SEE ALSO
       chroot(1), pivot_root(8), fstab(5), capabilities(7), lxc(7),
       lxc-create(1), lxc-destroy(1), lxc-start(1), lxc-stop(1),
       lxc-execute(1), lxc-console(1), lxc-monitor(1), lxc-wait(1),
       lxc-cgroup(1), lxc-ls(1), lxc-info(1), lxc-freeze(1),
       lxc-unfreeze(1), lxc-attach(1), lxc.conf(5)

AUTHOR
       Daniel Lezcano <daniel.lezcano@free.fr>

			  Thu Jul 3 13:01:56 PDT 2014	 LXC.CONTAINER.CONF(5)