// vim: set syntax=asciidoc:

== Buildroot configuration

All the configuration options in +make *config+ have a help text
providing details about the option.

The +make *config+ commands also offer a search tool. Read the help
message in the different frontend menus to know how to use it:

* in _menuconfig_, the search tool is called by pressing +/+;
* in _xconfig_, the search tool is called by pressing +Ctrl+ + +f+.

The result of the search shows the help message of the matching items.
In _menuconfig_, numbers in the left column provide a shortcut to the
corresponding entry. Just type this number to directly jump to the
entry, or to the containing menu in case the entry is not selectable due
to a missing dependency.

Although the menu structure and the help text of the entries should be
sufficiently self-explanatory, a number of topics require additional
explanation that cannot easily be covered in the help text and are
therefore covered in the following sections.

=== Cross-compilation toolchain

A compilation toolchain is the set of tools that allows you to compile
code for your system. It consists of a compiler (in our case, +gcc+),
binary utilities like the assembler and linker (in our case, +binutils+)
and a C standard library (for example
http://www.gnu.org/software/libc/libc.html[GNU Libc] or
http://www.uclibc-ng.org/[uClibc-ng]).

The system installed on your development station certainly already has
a compilation toolchain that you can use to compile an application
that runs on your system. If you're using a PC, your compilation
toolchain runs on an x86 processor and generates code for an x86
processor. Under most Linux systems, the compilation toolchain uses
the GNU libc (glibc) as the C standard library. This compilation
toolchain is called the "host compilation toolchain". The machine on
which it is running, and on which you're working, is called the "host
system" footnote:[This terminology differs from what is used by GNU
configure, where the host is the machine on which the application will
run (which is usually the same as target)].

The compilation toolchain is provided by your distribution, and
Buildroot has nothing to do with it (other than using it to build a
cross-compilation toolchain and other tools that are run on the
development host).

As said above, the compilation toolchain that comes with your system
runs on and generates code for the processor in your host system. As
your embedded system has a different processor, you need a
cross-compilation toolchain - a compilation toolchain that runs on
your _host system_ but generates code for your _target system_ (and
target processor). For example, if your host system uses x86 and your
target system uses ARM, the regular compilation toolchain on your host
runs on x86 and generates code for x86, while the cross-compilation
toolchain runs on x86 and generates code for ARM.

Buildroot provides two solutions for the cross-compilation toolchain:

* The *internal toolchain backend*, called +Buildroot toolchain+ in
the configuration interface.

* The *external toolchain backend*, called +External toolchain+ in
the configuration interface.

The choice between these two solutions is made using the +Toolchain
Type+ option in the +Toolchain+ menu. Once one solution has been
chosen, a number of configuration options appear; they are detailed in
the following sections.

[[internal-toolchain-backend]]
==== Internal toolchain backend

The _internal toolchain backend_ is the backend where Buildroot builds
a cross-compilation toolchain by itself, before building the userspace
applications and libraries for your target embedded system.

This backend supports several C libraries:
http://www.uclibc-ng.org[uClibc-ng],
http://www.gnu.org/software/libc/libc.html[glibc] and
http://www.musl-libc.org[musl].

Once you have selected this backend, a number of options appear. The
most important ones allow you to:

* Change the version of the Linux kernel headers used to build the
toolchain. This item deserves a few explanations. In the process of
building a cross-compilation toolchain, the C library is being
built. This library provides the interface between userspace
applications and the Linux kernel. In order to know how to "talk"
to the Linux kernel, the C library needs to have access to the
_Linux kernel headers_ (i.e. the +.h+ files from the kernel), which
define the interface between userspace and the kernel (system
calls, data structures, etc.). Since this interface is backward
compatible, the version of the Linux kernel headers used to build
your toolchain does not need to match _exactly_ the version of the
Linux kernel you intend to run on your embedded system. It only
needs to be equal to or older than the version of the Linux
kernel you intend to run. If you use kernel headers that are more
recent than the Linux kernel you run on your embedded system, then
the C library might be using interfaces that are not provided by
the Linux kernel you run, and your applications might not work.

* Change the version of the GCC compiler, binutils and the C library.

* Select a number of toolchain options (uClibc only): whether the
toolchain should have RPC support (used mainly for NFS),
wide-char support, locale support (for internationalization),
C++ support or thread support. Depending on which options you choose,
the number of userspace applications and libraries visible in
Buildroot menus will change: many applications and libraries require
certain toolchain options to be enabled. Most packages show a comment
when a certain toolchain option is required to be able to enable
those packages. If needed, you can further refine the uClibc
configuration by running +make uclibc-menuconfig+. Note however that
all packages in Buildroot are tested against the default uClibc
configuration bundled in Buildroot: if you deviate from this
configuration by removing features from uClibc, some packages may no
longer build.

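
The kernel-headers rule described in the first option above can be
sanity-checked with a simple version comparison. The sketch below is
illustrative only (the version numbers are made-up examples, and it
relies on GNU +sort -V+):

[source,shell]
----
# The toolchain's kernel headers version must be equal to or older than
# the kernel that will run on the target.
headers_ok() {
    # succeeds if the headers version ($1) <= the target kernel version ($2)
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

headers_ok 4.9 4.14 && echo "4.9 headers are fine with a 4.14 kernel"
headers_ok 5.4 4.19 || echo "5.4 headers are too new for a 4.19 kernel"
----
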
It is worth noting that whenever one of those options is modified,
the entire toolchain and system must be rebuilt. See
xref:full-rebuild[].

Advantages of this backend:

* Well integrated with Buildroot
* Fast, only builds what's necessary

Drawbacks of this backend:

* Rebuilding the toolchain is needed when doing +make clean+, which
takes time. If you're trying to reduce your build time, consider
using the _External toolchain backend_.

[[external-toolchain-backend]]
==== External toolchain backend

The _external toolchain backend_ allows you to use existing pre-built
cross-compilation toolchains. Buildroot knows about a number of
well-known cross-compilation toolchains (from
http://www.linaro.org[Linaro] for ARM, and
http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/[Sourcery
CodeBench] for ARM, x86-64, PowerPC, and MIPS), and is capable of
downloading them automatically, or it can be pointed to a custom
toolchain, either available for download or installed locally.

You then have three ways to use an external toolchain:

* Use a predefined external toolchain profile, and let Buildroot
download, extract and install the toolchain. Buildroot already knows
about a few CodeSourcery and Linaro toolchains. Just select the
toolchain profile in +Toolchain+ from the available ones. This is
definitely the easiest solution.

* Use a predefined external toolchain profile, but instead of having
Buildroot download and extract the toolchain, you can tell Buildroot
where your toolchain is already installed on your system. Just
select the toolchain profile in +Toolchain+ from the available
ones, unselect +Download toolchain automatically+, and fill the
+Toolchain path+ text entry with the path to your cross-compiling
toolchain.

* Use a completely custom external toolchain. This is particularly
useful for toolchains generated using crosstool-NG or with Buildroot
itself. To do this, select the +Custom toolchain+ solution in the
+Toolchain+ list. You need to fill in the +Toolchain path+, +Toolchain
prefix+ and +External toolchain C library+ options. Then, you have
to tell Buildroot what your external toolchain supports. If your
external toolchain uses the 'glibc' library, you only have to tell
whether your toolchain supports C\++ or not and whether it has
built-in RPC support. If your external toolchain uses the 'uClibc'
library, then you have to tell Buildroot if it supports RPC,
wide-char, locale, program invocation, threads and C\++.
At the beginning of the execution, Buildroot will tell you if
the selected options do not match the toolchain configuration.

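
For reference, selecting a custom external toolchain this way results
in a handful of options in the Buildroot +.config+ file. The fragment
below is only an illustration: the path, prefix and chosen C library
are made-up examples, and the exact symbol names may differ between
Buildroot versions:

----
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/arm-linux-gnueabihf"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-linux-gnueabihf"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
BR2_TOOLCHAIN_EXTERNAL_CXX=y
----
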
Our external toolchain support has been tested with toolchains from
CodeSourcery and Linaro, toolchains generated by
http://crosstool-ng.org[crosstool-NG], and toolchains generated by
Buildroot itself. In general, all toolchains that support the
'sysroot' feature should work. If not, do not hesitate to contact the
developers.

We do not support toolchains or SDKs generated by OpenEmbedded or
Yocto, because these toolchains are not pure toolchains (i.e. just the
compiler, binutils, the C and C++ libraries). Instead, these toolchains
come with a very large set of pre-compiled libraries and
programs. Therefore, Buildroot cannot import the 'sysroot' of the
toolchain, as it would contain hundreds of megabytes of pre-compiled
libraries that are normally built by Buildroot.

We also do not support using the distribution toolchain (i.e. the
gcc/binutils/C library installed by your distribution) as the
toolchain to build software for the target. This is because your
distribution toolchain is not a "pure" toolchain (i.e. one with only
the C/C++ library), so we cannot import it properly into the Buildroot
build environment. So even if you are building a system for an x86 or
x86_64 target, you have to generate a cross-compilation toolchain with
Buildroot or crosstool-NG.

If you want to generate a custom toolchain for your project that can
be used as an external toolchain in Buildroot, our recommendation is
definitely to build it with http://crosstool-ng.org[crosstool-NG]. We
recommend building the toolchain separately from Buildroot, and then
_importing_ it into Buildroot using the external toolchain backend.

Advantages of this backend:

* Allows the use of well-known and well-tested cross-compilation
toolchains.

* Avoids the build time of the cross-compilation toolchain, which is
often very significant in the overall build time of an embedded
Linux system.

Drawbacks of this backend:

* If your pre-built external toolchain has a bug, it may be hard to
get a fix from the toolchain vendor, unless you build your external
toolchain yourself using crosstool-NG.

===== External toolchain wrapper

When using an external toolchain, Buildroot generates a wrapper program
that transparently passes the appropriate options (according to the
configuration) to the external toolchain programs. In case you need to
debug this wrapper to check exactly what arguments are passed, you can
set the environment variable +BR2_DEBUG_WRAPPER+ to one of:

* +0+, empty or not set: no debug

* +1+: trace all arguments on a single line

* +2+: trace one argument per line

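
Conceptually, the wrapper's debug behaviour can be sketched as
follows. This is a simplified illustration, not the actual Buildroot
wrapper, and +toolchain-gcc+ is a made-up name for the real
cross-compiler:

[source,shell]
----
# Simplified model of BR2_DEBUG_WRAPPER handling: decide how to print
# the arguments that would be passed to the real toolchain program.
trace_args() {
    case "${BR2_DEBUG_WRAPPER:-0}" in
        0|"") ;;                                  # no debug output
        1) echo "toolchain-gcc $*" ;;             # all arguments on one line
        2) printf '%s\n' "toolchain-gcc" "$@" ;;  # one argument per line
    esac
}

BR2_DEBUG_WRAPPER=2
trace_args -O2 -c foo.c    # prints toolchain-gcc, -O2, -c and foo.c on four lines
----

In practice you would run, for example, +BR2_DEBUG_WRAPPER=2 make+ to
see the full command lines passed to the external toolchain.
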
=== /dev management

On a Linux system, the +/dev+ directory contains special files, called
_device files_, that allow userspace applications to access the
hardware devices managed by the Linux kernel. Without these _device
files_, your userspace applications would not be able to use the
hardware devices, even if they are properly recognized by the Linux
kernel.

Under +System configuration+, +/dev management+, Buildroot offers four
different solutions to handle the +/dev+ directory:

* The first solution is *Static using device table*. This is the old
classical way of handling device files in Linux. With this method,
the device files are persistently stored in the root filesystem
(i.e. they persist across reboots), and there is nothing that will
automatically create and remove those device files when hardware
devices are added to or removed from the system. Buildroot therefore
creates a standard set of device files using a _device table_, the
default one being stored in +system/device_table_dev.txt+ in the
Buildroot source code. This file is processed when Buildroot
generates the final root filesystem image, and the _device files_
are therefore not visible in the +output/target+ directory. The
+BR2_ROOTFS_STATIC_DEVICE_TABLE+ option allows you to change the
default device table used by Buildroot, or to add an additional
device table, so that additional _device files_ are created by
Buildroot during the build. So, if you use this method and a
_device file_ is missing in your system, you can for example create
a +board/<yourcompany>/<yourproject>/device_table_dev.txt+ file
that contains the description of your additional _device files_,
and then you can set +BR2_ROOTFS_STATIC_DEVICE_TABLE+ to
+system/device_table_dev.txt
board/<yourcompany>/<yourproject>/device_table_dev.txt+. For more
details about the format of the device table file, see
xref:makedev-syntax[].

* The second solution is *Dynamic using devtmpfs only*. _devtmpfs_ is
a virtual filesystem inside the Linux kernel that was introduced in
kernel 2.6.32 (if you use an older kernel, it is not
possible to use this option). When mounted in +/dev+, this virtual
filesystem will automatically make _device files_ appear and
disappear as hardware devices are added to and removed from the
system. This filesystem is not persistent across reboots: it is
filled dynamically by the kernel. Using _devtmpfs_ requires the
following kernel configuration options to be enabled:
+CONFIG_DEVTMPFS+ and +CONFIG_DEVTMPFS_MOUNT+. When Buildroot is in
charge of building the Linux kernel for your embedded device, it
makes sure that those two options are enabled. However, if you
build your Linux kernel outside of Buildroot, then it is your
responsibility to enable those two options (if you fail to do so,
your Buildroot system will not boot).

* The third solution is *Dynamic using devtmpfs + mdev*. This method
also relies on the _devtmpfs_ virtual filesystem detailed above (so
the requirement to have +CONFIG_DEVTMPFS+ and
+CONFIG_DEVTMPFS_MOUNT+ enabled in the kernel configuration still
applies), but adds the +mdev+ userspace utility on top of it. +mdev+
is a program, part of BusyBox, that the kernel will call every time a
device is added or removed. Thanks to the +/etc/mdev.conf+
configuration file, +mdev+ can be configured to, for example, set
specific permissions or ownership on a device file, call a script
or application whenever a device appears or disappears,
etc. Basically, it allows _userspace_ to react to device addition
and removal events. +mdev+ can for example be used to automatically
load kernel modules when devices appear on the system. +mdev+ is
also important if you have devices that require firmware, as it
will be responsible for pushing the firmware contents to the
kernel. +mdev+ is a lightweight implementation (with fewer
features) of +udev+. For more details about +mdev+ and the syntax
of its configuration file, see
http://git.busybox.net/busybox/tree/docs/mdev.txt.

* The fourth solution is *Dynamic using devtmpfs + eudev*. This
method also relies on the _devtmpfs_ virtual filesystem detailed
above, but adds the +eudev+ userspace daemon on top of it. +eudev+
is a daemon that runs in the background, and is called by the
kernel when a device is added to or removed from the system. It is a
more heavyweight solution than +mdev+, but provides more
flexibility. +eudev+ is a standalone version of +udev+, the
original userspace daemon used in most desktop Linux distributions,
which is now part of systemd. For more details, see
http://en.wikipedia.org/wiki/Udev.

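
As an illustration of the third solution above, a small
+/etc/mdev.conf+ fragment might look like the following. The device
patterns, group names and helper script path are hypothetical
examples; see the BusyBox documentation linked above for the
authoritative syntax:

----
# <device regex>  <uid>:<gid>   <mode>  [@|$|*<command>]
# @ = run on device creation, $ = on removal, * = both
ttyUSB[0-9]*      root:dialout  660
mmcblk[0-9]*p1    root:disk     660    @/usr/libexec/hotplug-mount.sh
----
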
The Buildroot developers' recommendation is to start with the *Dynamic
using devtmpfs only* solution, until you have the need for userspace
to be notified when devices are added/removed, or if firmware files are
needed, in which case *Dynamic using devtmpfs + mdev* is usually a
good choice.

Note that if +systemd+ is chosen as the init system, +/dev+ management
will be performed by the +udev+ program provided by +systemd+.

=== init system

The _init_ program is the first userspace program started by the
kernel (it carries the PID number 1), and is responsible for starting
the userspace services and programs (for example: web server,
graphical applications, other network servers, etc.).

Buildroot allows you to use three different types of init systems,
which can be chosen from +System configuration+, +Init system+:

* The first solution is *BusyBox*. Amongst many programs, BusyBox has
an implementation of a basic +init+ program, which is sufficient
for most embedded systems. Enabling the +BR2_INIT_BUSYBOX+ option
will ensure BusyBox builds and installs its +init+ program. This is
the default solution in Buildroot. The BusyBox +init+ program will
read the +/etc/inittab+ file at boot to know what to do. The syntax
of this file can be found in
http://git.busybox.net/busybox/tree/examples/inittab (note that
BusyBox +inittab+ syntax is special: do not use random +inittab+
documentation from the Internet to learn about the BusyBox
+inittab+). The default +inittab+ in Buildroot is stored in
+system/skeleton/etc/inittab+. Apart from mounting a few important
filesystems, the main job of the default inittab is to start the
+/etc/init.d/rcS+ shell script and start a +getty+ program (which
provides a login prompt).

* The second solution is *systemV*. This solution uses the old
traditional _sysvinit_ program, packaged in Buildroot in
+package/sysvinit+. This was the solution used in most desktop
Linux distributions, until they switched to more recent
alternatives such as Upstart or systemd. +sysvinit+ also works with
an +inittab+ file (which has a slightly different syntax than the
one from BusyBox). The default +inittab+ installed with this init
solution is located in +package/sysvinit/inittab+.

* The third solution is *systemd*. +systemd+ is the new generation
init system for Linux. It does far more than traditional _init_
programs: it has aggressive parallelization capabilities, uses
socket and D-Bus activation for starting services, offers on-demand
starting of daemons, keeps track of processes using Linux control
groups, supports snapshotting and restoring of the system state,
etc. +systemd+ will be useful on relatively complex embedded
systems, for example ones requiring D-Bus and services
communicating with each other. It is worth noting that +systemd+
brings in a fairly large number of big dependencies: +dbus+, +udev+
and more. For more details about +systemd+, see
http://www.freedesktop.org/wiki/Software/systemd.

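
For illustration, a minimal BusyBox +/etc/inittab+ in the spirit of
the default Buildroot one might contain the following entries (the
serial port and baud rate are made-up examples; the BusyBox format is
+<id>::<action>:<process>+, where +<id>+ is the controlling tty):

----
::sysinit:/bin/mount -t proc proc /proc
::sysinit:/etc/init.d/rcS
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
::ctrlaltdel:/sbin/reboot
::shutdown:/bin/umount -a -r
----
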
The solution recommended by Buildroot developers is to use the
*BusyBox init* as it is sufficient for most embedded
systems. *systemd* can be used for more complex situations.