The Nix Packages collection (Nixpkgs) is a set of thousands of packages for the Nix package manager, released under a permissive MIT/X11 license. Packages are available for several platforms, and can be used with the Nix package manager on most GNU/Linux distributions as well as NixOS.
This manual primarily describes how to write packages for the Nix Packages collection (Nixpkgs). Thus it’s mainly for packagers and developers who want to add packages to Nixpkgs. If you like to learn more about the Nix package manager and the Nix expression language, then you are kindly referred to the Nix manual.
Nix expressions describe how to build packages from source and are collected in the nixpkgs repository. Also included in the collection are Nix expressions for NixOS modules. With these expressions the Nix package manager can build binary packages.
Packages, including the Nix packages collection, are distributed
through
channels.
The collection is distributed for users of Nix on non-NixOS
distributions through the channel nixpkgs.
Users of NixOS generally use one of the nixos-*
channels, e.g. nixos-16.03, which includes all
packages and modules for the stable NixOS 16.03 release. Stable
NixOS releases are generally only given security updates.
More up to date packages and modules are available via the
nixos-unstable channel.
Both nixos-unstable and
nixpkgs follow the master
branch of the Nixpkgs repository, although both generally lag
the master branch by a couple of
days. Updates to a channel are distributed as soon as all
tests for that channel pass, e.g.
this
table shows the status of tests for the
nixpkgs channel.
The tests are conducted by a cluster called
Hydra, which also
builds binary packages from the Nix expressions in Nixpkgs for
x86_64-linux, i686-linux and
x86_64-darwin. The binaries are made available
via a binary cache.
The current Nix expressions of the channels are available in the
nixpkgs-channels
repository, which has branches corresponding to the available
channels. There is also the
Nixpkgs Monitor
which keeps track of updates and security vulnerabilities.
To add a package to Nixpkgs:
Checkout the Nixpkgs source tree:
$ git clone git://github.com/NixOS/nixpkgs.git
$ cd nixpkgs
Find a good place in the Nixpkgs tree to add the Nix
expression for your package. For instance, a library package
typically goes into
pkgs/development/libraries/,
while a web browser goes into
pkgs/applications/networking/browsers/.
See Section 12.3, “File naming and organisation” for some hints on the tree
organisation. Create a directory for your package, e.g.
libfoo:
$ mkdir pkgs/development/libraries/libfoo
In the package directory, create a Nix expression — a piece
of code that describes how to build the package. In this case, it
should be a function that is called with the
package dependencies as arguments, and returns a build of the
package in the Nix store. The expression should usually be called
default.nix.
$ emacs pkgs/development/libraries/libfoo/default.nix
$ git add pkgs/development/libraries/libfoo/default.nix
You can have a look at the existing Nix expressions under
pkgs/ to see how it’s done. Here are some
good ones:
GNU Hello: pkgs/applications/misc/hello/default.nix.
A trivial package. It specifies some meta
attributes, which is good practice.
GNU cpio: pkgs/tools/archivers/cpio/default.nix.
Also a simple package. The generic builder in
stdenv does everything for you. It has
no dependencies beyond stdenv.
GNU Multiple Precision arithmetic library (GMP): pkgs/development/libraries/gmp/5.1.x.nix.
Also done by the generic builder, but has a dependency on
m4.
Pan, a GTK-based newsreader: pkgs/applications/networking/newsreaders/pan/default.nix.
Has an optional dependency on gtkspell,
which is only built if spellCheck is
true.
Apache HTTPD: pkgs/servers/http/apache-httpd/2.4.nix.
A bunch of optional features, variable substitutions in the
configure flags, a post-install hook, and miscellaneous
hackery.
Thunderbird: pkgs/applications/networking/mailreaders/thunderbird/default.nix.
Lots of dependencies.
JDiskReport, a Java utility: pkgs/tools/misc/jdiskreport/default.nix
(and the builder).
Nixpkgs doesn’t have a decent stdenv for
Java yet so this is pretty ad-hoc.
XML::Simple, a Perl module: pkgs/top-level/perl-packages.nix
(search for the XMLSimple attribute).
Most Perl modules are so simple to build that they are
defined directly in perl-packages.nix;
no need to make a separate file for them.
Adobe Reader: pkgs/applications/misc/adobe-reader/default.nix.
Shows how binary-only packages can be supported. In
particular the builder
uses patchelf to set the RUNPATH and ELF
interpreter of the executables so that the right libraries
are found at runtime.
Some notes:
All meta
attributes are optional, but it’s still a good idea to
provide at least the description,
homepage and license.
You can use nix-prefetch-url url to get the
SHA-256 hash of source distributions. Similar commands such as
nix-prefetch-git and nix-prefetch-hg are available in the
nix-prefetch-scripts package.
A list of schemes for mirror://
URLs can be found in pkgs/build-support/fetchurl/mirrors.nix.
The exact syntax and semantics of the Nix expression language, including the built-in functions, are described in the Nix manual in the chapter on writing Nix expressions.
Add a call to the function defined in the previous step to
pkgs/top-level/all-packages.nix
with some descriptive name for the variable,
e.g. libfoo.
$ emacs pkgs/top-level/all-packages.nix
The attributes in that file are sorted by category (like “Development / Libraries”) that more-or-less correspond to the directory structure of Nixpkgs, and then by attribute name.
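The call usually goes through callPackage, which supplies the function’s arguments from the package set itself. A sketch for the libfoo example:

```nix
# In pkgs/top-level/all-packages.nix, in the appropriate category:
libfoo = callPackage ../development/libraries/libfoo { };
```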
To test whether the package builds, run the following command from the root of the nixpkgs source tree:
$ nix-build -A libfoo
where libfoo should be the variable name
defined in the previous step. You may want to add the flag
-K to keep the temporary build directory in case
something fails. If the build succeeds, a symlink
./result to the package in the Nix store is
created.
If you want to install the package into your profile (optional), do
$ nix-env -f . -iA libfoo
Optionally commit the new package and open a pull request, or send a patch to
nix-dev@cs.uu.nl.
The standard build environment in the Nix Packages collection
provides an environment for building Unix packages that does a lot of
common build tasks automatically. In fact, for Unix packages that use
the standard ./configure; make; make install build
interface, you don’t need to write a build script at all; the standard
environment does everything automatically. If
stdenv doesn’t do what you need automatically, you
can easily customise or override the various build phases.
To build a package with the standard environment, you use the
function stdenv.mkDerivation, instead of the
primitive built-in function derivation, e.g.
stdenv.mkDerivation {
name = "libfoo-1.2.3";
src = fetchurl {
url = http://example.org/libfoo-1.2.3.tar.bz2;
sha256 = "0x2g1jqygyr5wiwg4ma1nd7w4ydpy82z9gkcv8vh2v8dn3y58v5m";
};
}
(stdenv needs to be in scope, so if you write this
in a separate Nix expression from
pkgs/top-level/all-packages.nix, you need to pass it as a
function argument.) Specifying a name and a
src is the absolute minimum you need to do. Many
packages have dependencies that are not provided in the standard
environment. It’s usually sufficient to specify those dependencies in
the buildInputs attribute:
stdenv.mkDerivation {
name = "libfoo-1.2.3";
...
buildInputs = [libbar perl ncurses];
}
This attribute ensures that the bin
subdirectories of these packages appear in the PATH
environment variable during the build, that their
include subdirectories are searched by the C
compiler, and so on. (See Section 3.6, “Package setup hooks” for
details.)
Often it is necessary to override or modify some aspect of the build. To make this easier, the standard environment breaks the package build into a number of phases, all of which can be overridden or modified individually: unpacking the sources, applying patches, configuring, building, and installing. (There are some others; see Section 3.4, “Phases”.) For instance, a package that doesn’t supply a makefile but instead has to be compiled “manually” could be handled like this:
stdenv.mkDerivation {
name = "fnord-4.5";
...
buildPhase = ''
gcc foo.c -o foo
'';
installPhase = ''
mkdir -p $out/bin
cp foo $out/bin
'';
}
(Note the use of ''-style string literals, which
are very convenient for large multi-line script fragments because they
don’t need escaping of " and \,
and because indentation is intelligently removed.)
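As a sketch of the escaping rules inside ''-strings (postInstall is just an example attribute here):

```nix
postInstall = ''
  # " and \ need no escaping inside ''-strings:
  echo "a \"quoted\" word"
  # ''$ yields a literal $, so the shell (not Nix) expands PATH:
  echo ''${PATH}
'';
```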
There are many other attributes to customise the build. These are listed in Section 3.3, “Attributes”.
While the standard environment provides a generic builder, you can still supply your own build script:
stdenv.mkDerivation {
name = "libfoo-1.2.3";
...
builder = ./builder.sh;
}

where the builder can do anything it wants, but typically starts with
source $stdenv/setup
to let stdenv set up the environment (e.g., process
the buildInputs). If you want, you can still use
stdenv’s generic builder:
source $stdenv/setup
buildPhase() {
echo "... this is my custom build phase ..."
gcc foo.c -o foo
}
installPhase() {
mkdir -p $out/bin
cp foo $out/bin
}
genericBuild
The standard environment provides the following packages:
The GNU C Compiler, configured with C and C++ support.
GNU coreutils (contains a few dozen standard Unix commands).
GNU findutils (contains find).
GNU diffutils (contains diff, cmp).
GNU sed.
GNU grep.
GNU awk.
GNU tar.
gzip, bzip2 and xz.
GNU Make. It has been patched to provide “nested” output that can be fed into the nix-log2xml command and log2html stylesheet to create a structured, readable output of the build steps performed by Make.
Bash. This is the shell used for all builders in the Nix Packages collection. Not using /bin/sh removes a large source of portability problems.
The patch command.
On Linux, stdenv also includes the
patchelf utility.
Variables affecting stdenv
initialisation
NIX_DEBUG
If set, stdenv will print some
debug information during the build. In particular, the
gcc and ld wrapper scripts
will print out the complete command line passed to the wrapped
tools.
Variables specifying dependencies
nativeBuildInputs
A list of dependencies used by the new derivation at build-time.
I.e. these dependencies should not make it into the package's runtime-closure, though this is currently not checked.
For each dependency dir, the directory dir/bin, if it exists, is added to the PATH environment variable.
Other environment variables are also set up via a pluggable mechanism.
For instance, if buildInputs contains Perl, then the lib/site_perl subdirectory of each input is added to the PERL5LIB environment variable.
See Section 3.6, “Package setup hooks” for details.
buildInputs
A list of dependencies used by the new derivation at run-time.
Currently, the build-time environment is modified in the exact same way as with nativeBuildInputs.
This is problematic in that when cross-compiling, foreign executables can clobber native ones on the PATH.
Even more confusing is static-linking.
A statically-linked library should be listed here because ultimately that generated machine code will be used at run-time, even though a derivation containing the object files or static archives will only be used at build-time.
A less confusing solution to this would be nice.
propagatedNativeBuildInputs
Like nativeBuildInputs, but these dependencies are propagated:
that is, the dependencies listed here are added to the nativeBuildInputs of any package that uses this package as a dependency.
So if package Y has propagatedBuildInputs = [X], and package Z has buildInputs = [Y], then package X will appear in Z’s build environment automatically.
propagatedBuildInputs
Like buildInputs, but propagated just like propagatedNativeBuildInputs.
This inherits buildInputs's flaws of clobbering native executables when cross-compiling and being confusing for static linking.
Variables affecting build properties
enableParallelBuilding
If set, stdenv will pass specific
flags to make and other build tools to enable
parallel building with up to build-cores
workers.
preferLocalBuild
If set, specifies that the package is so lightweight in terms of build operations (e.g. writing a text file from a Nix string to the store) that there’s no need to look for it in binary caches: it’s faster to just build it locally. It also tells Hydra and other facilities that this package doesn’t need to be exported in binary caches (no one would use it, after all).
Special variables
passthru
This is an attribute set which can be filled with arbitrary values. For example:
passthru = {
foo = "bar";
baz = {
value1 = 4;
value2 = 5;
};
}
Values inside it are not passed to the builder, so you can change
them without triggering a rebuild. However, they can be accessed outside of a
derivation directly, as if they were set inside a derivation itself, e.g.
hello.baz.value1. We don't specify any usage or
schema of passthru - it is meant for values that would be
useful outside the derivation in other parts of a Nix expression (e.g. in other
derivations). An example would be to convey some specific dependency of your
derivation which contains a program with plugin support. Later, others who
make derivations with plugins can use the passed-through dependency to ensure
that their plugins are binary-compatible with the built program.
The generic builder has a number of phases. Package builds are split into phases to make it easier to override specific parts of the build (e.g., unpacking the sources or installing the binaries). Furthermore, it allows a nicer presentation of build logs in the Nix build farm.
Each phase can be overridden in its entirety either by setting
the environment variable namePhase to a string
containing some shell commands to be executed, or by redefining the
shell function namePhase. The former
is convenient to override a phase from the derivation, while the
latter is convenient from a build script.
There are a number of variables that control what phases are executed and in what order:
Variables affecting phase control
phases
Specifies the phases. You can change the order in which
phases are executed, or add new phases, by setting this
variable. If it’s not set, the default value is used, which is
$prePhases unpackPhase patchPhase $preConfigurePhases
configurePhase $preBuildPhases buildPhase checkPhase
$preInstallPhases installPhase fixupPhase $preDistPhases
distPhase $postPhases.
Usually, if you just want to add a few phases, it’s more
convenient to set one of the variables below (such as
preInstallPhases), as you then don’t specify
all the normal phases.
prePhases
Additional phases executed before any of the default phases.
preConfigurePhases
Additional phases executed just before the configure phase.
preBuildPhases
Additional phases executed just before the build phase.
preInstallPhases
Additional phases executed just before the install phase.
preFixupPhases
Additional phases executed just before the fixup phase.
preDistPhases
Additional phases executed just before the distribution phase.
postPhases
Additional phases executed after any of the default phases.
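As a sketch, an extra phase can be slotted in without restating the whole phases list (generateDocsPhase is a hypothetical phase name):

```nix
preInstallPhases = "generateDocsPhase";
generateDocsPhase = ''
  # runs just before installPhase
  make docs
'';
```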
The unpack phase is responsible for unpacking the source code of
the package. The default implementation of
unpackPhase unpacks the source files listed in
the src environment variable to the current directory.
It supports the following files by default:
Tar files. These can optionally be compressed using
gzip (.tar.gz,
.tgz or .tar.Z),
bzip2 (.tar.bz2 or
.tbz2) or xz
(.tar.xz or
.tar.lzma).
Zip files are unpacked using
unzip. However, unzip is
not in the standard environment, so you should add it to
buildInputs yourself.
Directories in the Nix store. These are simply copied to the current directory.
The hash part of the file name is stripped,
e.g. /nix/store/1wydxgby13cz...-my-sources
would be copied to
my-sources.
Additional file types can be supported by setting the
unpackCmd variable (see below).
Variables controlling the unpack phase
srcs / src
The list of source files or directories to be unpacked or copied. One of these must be set.
sourceRoot
After running unpackPhase,
the generic builder changes the current directory to the directory
created by unpacking the sources. If there are multiple source
directories, you should set sourceRoot to the
name of the intended directory.
setSourceRoot
As an alternative to setting
sourceRoot, you can set
setSourceRoot to a shell command to be
evaluated by the unpack phase after the sources have been
unpacked. This command must set
sourceRoot.
preUnpack
Hook executed at the start of the unpack phase.
postUnpack
Hook executed at the end of the unpack phase.
dontMakeSourcesWritable
If set to 1, the unpacked
sources are not made
writable. By default, they are made writable to prevent problems
with read-only sources. For example, copied store directories
would be read-only without this.
unpackCmd
The unpack phase evaluates the string
$unpackCmd for any unrecognised file. The path
to the current source file is contained in the
curSrc variable.
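For instance, a hypothetical .foo archive format could be handled with a sketch like:

```nix
unpackCmd = ''
  # fooextract is a placeholder for the format's extractor tool
  fooextract "$curSrc"
'';
```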
The patch phase applies the list of patches defined in the
patches variable.
Variables controlling the patch phase
patches
The list of patches. They must be in the format
accepted by the patch command, and may
optionally be compressed using gzip
(.gz), bzip2
(.bz2) or xz
(.xz).
patchFlags
Flags to be passed to patch.
If not set, the argument -p1 is used, which
causes the leading directory component to be stripped from the
file names in each patch.
prePatch
Hook executed at the start of the patch phase.
postPatch
Hook executed at the end of the patch phase.
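A minimal sketch (the patch file name is a placeholder):

```nix
# apply a patch shipped alongside the Nix expression
patches = [ ./fix-build.patch ];
patchFlags = "-p1";   # the default, shown only for illustration
```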
The configure phase prepares the source tree for building. The
default configurePhase runs
./configure (typically an Autoconf-generated
script) if it exists.
Variables controlling the configure phase
configureScript
The name of the configure script. It defaults to
./configure if it exists; otherwise, the
configure phase is skipped. This can actually be a command (like
perl ./Configure.pl).
configureFlags
A list of strings passed as additional arguments to the configure script.
configureFlagsArray
A shell array containing additional arguments
passed to the configure script. You must use this instead of
configureFlags if the arguments contain
spaces.
dontAddPrefix
By default, the flag
--prefix=$prefix is added to the configure
flags. If this is undesirable, set this variable to
true.
prefix
The prefix under which the package must be
installed, passed via the --prefix option to the
configure script. It defaults to
$out.
dontAddDisableDepTrack
By default, the flag
--disable-dependency-tracking is added to the
configure flags to speed up Automake-based builds. If this is
undesirable, set this variable to true.
dontFixLibtool
By default, the configure phase applies some
special hackery to all files called ltmain.sh
before running the configure script in order to improve the purity
of Libtool-based packages[1]. If this is undesirable, set this
variable to true.
dontDisableStatic
By default, when the configure script has
--enable-static, the option
--disable-static is added to the configure flags.
If this is undesirable, set this variable to true.
preConfigure
Hook executed at the start of the configure phase.
postConfigure
Hook executed at the end of the configure phase.
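A sketch of typical usage (--with-bar and --disable-gui are placeholders for a package’s own options):

```nix
configureFlags = [ "--with-bar" "--disable-gui" ];
preConfigure = ''
  # anything that must happen before ./configure runs
  chmod +x ./configure
'';
```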
The build phase is responsible for actually building the package
(e.g. compiling it). The default buildPhase
simply calls make if a file named
Makefile, makefile or
GNUmakefile exists in the current directory (or
the makefile is explicitly set); otherwise it does
nothing.
Variables controlling the build phase
dontBuild
Set to true to skip the build phase.
makefile
The file name of the Makefile.
makeFlags
A list of strings passed as additional flags to
make. These flags are also used by the default
install and check phase. For setting make flags specific to the
build phase, use buildFlags (see
below).
makeFlagsArray
A shell array containing additional arguments
passed to make. You must use this instead of
makeFlags if the arguments contain
spaces, e.g.
makeFlagsArray=(CFLAGS="-O0 -g" LDFLAGS="-lfoo -lbar")
Note that shell arrays cannot be passed through environment
variables, so you cannot set makeFlagsArray in
a derivation attribute (because those are passed through
environment variables): you have to define them in shell
code.
buildFlags / buildFlagsArray
A list of strings passed as additional flags to
make. Like makeFlags and
makeFlagsArray, but only used by the build
phase.
preBuild
Hook executed at the start of the build phase.
postBuild
Hook executed at the end of the build phase.
You can set flags for make through the
makeFlags variable.
Before and after running make, the hooks
preBuild and postBuild are
called, respectively.
The check phase checks whether the package was built correctly
by running its test suite. The default
checkPhase calls make check,
but only if the doCheck variable is enabled.
Variables controlling the check phase
doCheck
If set to a non-empty string, the check phase is executed; otherwise it is skipped (the default). Thus you should set
doCheck = true;
in the derivation to enable checks.
makeFlags / makeFlagsArray / makefile
See the build phase for details.
checkTarget
The make target that runs the tests. Defaults to
check.
checkFlags / checkFlagsArray
A list of strings passed as additional flags to
make. Like makeFlags and
makeFlagsArray, but only used by the check
phase.
preCheck
Hook executed at the start of the check phase.
postCheck
Hook executed at the end of the check phase.
The install phase is responsible for installing the package in
the Nix store under out. The default
installPhase creates the directory
$out and calls make
install.
Variables controlling the install phase
makeFlags / makeFlagsArray / makefile
See the build phase for details.
installTargets
The make targets that perform the installation.
Defaults to install. Example:
installTargets = "install-bin install-doc";
installFlags / installFlagsArray
A list of strings passed as additional flags to
make. Like makeFlags and
makeFlagsArray, but only used by the install
phase.
preInstall
Hook executed at the start of the install phase.
postInstall
Hook executed at the end of the install phase.
The fixup phase performs some (Nix-specific) post-processing
actions on the files installed under $out by the
install phase. The default fixupPhase does the
following:
It moves the man/,
doc/ and info/
subdirectories of $out to
share/.
It strips libraries and executables of debug information.
On Linux, it applies the patchelf
command to ELF executables and libraries to remove unused
directories from the RPATH in order to prevent
unnecessary runtime dependencies.
It rewrites the interpreter paths of shell scripts
to paths found in PATH. E.g.,
/usr/bin/perl will be rewritten to the
perl found in PATH, e.g.
/nix/store/some-perl/bin/perl.
Variables controlling the fixup phase
dontStrip
If set, libraries and executables are not stripped. By default, they are.
dontMoveSbin
If set, files in $out/sbin are not moved
to $out/bin. By default, they are.
stripAllList
List of directories to search for libraries and executables from which all symbols should be stripped. By default, it’s empty. Stripping all symbols is risky, since it may remove not just debug symbols but also ELF information necessary for normal execution.
stripAllFlags
Flags passed to the strip
command applied to the files in the directories listed in
stripAllList. Defaults to -s
(i.e. --strip-all).
stripDebugList
List of directories to search for libraries and
executables from which only debugging-related symbols should be
stripped. It defaults to lib bin
sbin.
stripDebugFlags
Flags passed to the strip
command applied to the files in the directories listed in
stripDebugList. Defaults to
-S
(i.e. --strip-debug).
dontPatchELF
If set, the patchelf command is
not used to remove unnecessary RPATH entries.
Only applies to Linux.
dontPatchShebangs
If set, scripts starting with
#! do not have their interpreter paths
rewritten to paths in the Nix store.
forceShare
The list of directories that must be moved from
$out to $out/share.
Defaults to man doc info.
setupHook
A package can export a setup hook by setting this
variable. The setup hook, if defined, is copied to
$out/nix-support/setup-hook. Environment
variables are then substituted in it using substituteAll.
preFixup
Hook executed at the start of the fixup phase.
postFixup
Hook executed at the end of the fixup phase.
separateDebugInfo
If set to true, the standard
environment will enable debug information in C/C++ builds. After
installation, the debug information will be separated from the
executables and stored in the output named
debug. (This output is enabled automatically;
you don’t need to set the outputs attribute
explicitly.) To be precise, the debug information is stored in
debug/lib/debug/.build-id/XX/YYYY…,
where XXYYYY… is the build
ID of the binary, a SHA-1 hash of the contents of
the binary. Debuggers like GDB use the build ID to look up the
separated debug information.
For example, with GDB, you can add
set debug-file-directory ~/.nix-profile/lib/debug
to ~/.gdbinit. GDB will then be able to find
debug information installed via nix-env
-i.
The installCheck phase checks whether the package was installed
correctly by running its test suite against the installed directories.
The default installCheck calls make
installcheck.
Variables controlling the installCheck phase
doInstallCheck
If set to a non-empty string, the installCheck phase is executed; otherwise it is skipped (the default). Thus you should set
doInstallCheck = true;
in the derivation to enable install checks.
preInstallCheck
Hook executed at the start of the installCheck phase.
postInstallCheck
Hook executed at the end of the installCheck phase.
The distribution phase is intended to produce a source
distribution of the package. The default
distPhase first calls make
dist, then it copies the resulting source tarballs to
$out/tarballs/. This phase is only executed if
the attribute doDist is set.
Variables controlling the distribution phase
distTarget
The make target that produces the distribution.
Defaults to dist.
distFlags / distFlagsArray
Additional flags passed to make.
tarballs
The names of the source distribution files to be
copied to $out/tarballs/. It can contain
shell wildcards. The default is
*.tar.gz.
dontCopyDist
If set, no files are copied to
$out/tarballs/.
preDist
Hook executed at the start of the distribution phase.
postDist
Hook executed at the end of the distribution phase.
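A minimal sketch enabling this phase:

```nix
# produce a source tarball via `make dist` and keep it
doDist = true;
tarballs = "*.tar.gz";   # the default glob, shown for illustration
```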
The standard environment provides a number of useful functions.
makeWrapper
executable
wrapperfile
args
Constructs a wrapper for a program with various possible arguments. For example:
# adds `FOOBAR=baz` to `$out/bin/foo`’s environment
makeWrapper $out/bin/foo $wrapperfile --set FOOBAR baz
# prefixes the binary paths of `hello` and `git`
# Be advised that paths often should be patched in directly
# (via string replacements or in `configurePhase`).
makeWrapper $out/bin/foo $wrapperfile --prefix PATH : ${lib.makeBinPath [ hello git ]}
There are many more kinds of arguments; they are documented in
nixpkgs/pkgs/build-support/setup-hooks/make-wrapper.sh.
wrapProgram is a convenience function you probably
want to use most of the time.
substitute
infile
outfile
subs
Performs string substitution on the contents of
infile, writing the result to
outfile. The substitutions in
subs are of the following form:
--replace
s1
s2
Replace every occurrence of the string
s1 by
s2.
--subst-var
varName
Replace every occurrence of
@varName@ by
the contents of the environment variable
varName. This is useful for
generating files from templates, using
@...@ in the
template as placeholders.
--subst-var-by
varName
s
Replace every occurrence of
@varName@ by
the string s.
Example:
substitute ./foo.in ./foo.out \
--replace /usr/bin/bar $bar/bin/bar \
--replace "a string containing spaces" "some other text" \
--subst-var someVar
substitute is implemented using the
replace
command. Unlike with the sed command, you
don’t have to worry about escaping special characters. It
supports performing substitutions on binary files (such as
executables), though there you’ll probably want to make sure
that the replacement string is as long as the replaced
string.
substituteInPlace
file
subs
Like substitute, but performs
the substitutions in place on the file
file.
substituteAll
infile
outfile
Replaces every occurrence of
@varName@, where
varName is any environment variable, in
infile, writing the result to
outfile. For instance, if
infile has the contents
#! @bash@/bin/sh
PATH=@coreutils@/bin
echo @foo@
and the environment contains
bash=/nix/store/bmwp0q28cf21...-bash-3.2-p39
and
coreutils=/nix/store/68afga4khv0w...-coreutils-6.12,
but does not contain the variable foo, then the
output will be
#! /nix/store/bmwp0q28cf21...-bash-3.2-p39/bin/sh
PATH=/nix/store/68afga4khv0w...-coreutils-6.12/bin
echo @foo@
That is, no substitution is performed for undefined variables.
Environment variables that start with an uppercase letter or an
underscore are filtered out,
to prevent global variables (like HOME) or private
variables (like __ETC_PROFILE_DONE) from accidentally
getting substituted.
The variables also have to be valid bash “names”, as
defined in the bash manpage (alphanumeric or _,
must not start with a number).
substituteAllInPlace
file
Like substituteAll, but performs
the substitutions in place on the file
file.
stripHash
path
Strips the directory and hash part of a store
path, outputting the name part to stdout.
For example:
stripHash "/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
# prints coreutils-8.24
If you wish to store the result in another variable, then the following idiom may be useful:
name="/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
someVar=$(stripHash $name)
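The behaviour can be approximated in plain shell if stdenv’s helper is not at hand (a sketch, not the actual implementation):

```shell
# Approximate stripHash: drop the directory part, then the
# leading hash up to and including the first dash.
stripHash () {
  local base="${1##*/}"
  echo "${base#*-}"
}

stripHash "/nix/store/9s9r019176g7cvn2nvcw41gsp862y6b4-coreutils-8.24"
# prints: coreutils-8.24
```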
wrapProgram
executable
makeWrapperArgs
Convenience function for makeWrapper
that automatically creates a sane wrapper file.
It takes all the same arguments as makeWrapper,
except for --argv0.
It cannot be applied multiple times, since it will overwrite the wrapper file.
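For example (a sketch; foo and FOOBAR are placeholders, and the makeWrapper package is assumed to be among the build inputs, which is what brings wrapProgram into scope):

```nix
postFixup = ''
  # wrap the installed binary so it always sees FOOBAR=baz
  wrapProgram $out/bin/foo --set FOOBAR baz
'';
```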
The following packages provide a setup hook:
Adds the include subdirectory
of each build input to the NIX_CFLAGS_COMPILE
environment variable, and the lib and
lib64 subdirectories to
NIX_LDFLAGS.
Adds the lib/site_perl subdirectory
of each build input to the PERL5LIB
environment variable.
Adds the
lib/${python.libPrefix}/site-packages subdirectory of
each build input to the PYTHONPATH environment
variable.
Adds the lib/pkgconfig and
share/pkgconfig subdirectories of each
build input to the PKG_CONFIG_PATH environment
variable.
Adds the share/aclocal
subdirectory of each build input to the ACLOCAL_PATH
environment variable.
The autoreconfHook derivation adds
autoreconfPhase, which runs autoreconf, libtoolize and
automake, essentially preparing the configure script in autotools-based
builds.
Adds every file named
catalog.xml found under the
xml/dtd and xml/xsl
subdirectories of each build input to the
XML_CATALOG_FILES environment
variable.
Adds the share/texmf-nix
subdirectory of each build input to the TEXINPUTS
environment variable.
Sets the QTDIR environment variable
to Qt’s path.
Exports the GDK_PIXBUF_MODULE_FILE
environment variable to the builder. Add the librsvg package
to buildInputs to get SVG support.
Creates a temporary package database and registers every Haskell build input in it (TODO: how?).
Adds the
GStreamer plugins subdirectory of
each build input to the GST_PLUGIN_SYSTEM_PATH_1_0 or
GST_PLUGIN_SYSTEM_PATH environment variable.
Defines the paxmark helper for
setting per-executable PaX flags on Linux (where it is available by
default; on all other platforms, paxmark is a no-op).
For example, to disable secure memory protections on the executable
foo:
postFixup = ''
paxmark m $out/bin/foo
'';
The m flag is the most common flag and is typically
required for applications that employ JIT compilation or otherwise need to
execute code generated at run-time. Disabling PaX protections should be
considered a last resort: if possible, problematic features should be
disabled or patched to work with PaX.
[measures taken to prevent dependencies on packages outside the store, and what you can do to prevent them]
GCC doesn't search in locations such as
/usr/include. In fact, attempts to add such
directories through the -I flag are filtered out.
Likewise, the linker (from GNU binutils) doesn't search in standard
locations such as /usr/lib. Programs built on
Linux are linked against a GNU C Library that likewise doesn't search
in the default system locations.
There are flags available to harden packages at compile or link-time.
These can be toggled using the stdenv.mkDerivation parameters
hardeningDisable and hardeningEnable.
Both parameters take a list of flags as strings. The special
"all" flag can be passed to hardeningDisable
to turn off all hardening. These flags can also be used as environment variables
for testing or development purposes.
The following flags are enabled by default and might require disabling with
hardeningDisable if the program to package is incompatible.
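As a sketch, both parameters are plain lists of flag names on a derivation (the package name and source are hypothetical):

```nix
stdenv.mkDerivation {
  name = "bar-1.0";                 # hypothetical package
  src = ./bar-1.0.tar.gz;
  # Disable hardening flags this build is incompatible with...
  hardeningDisable = [ "format" "pic" ];
  # ...and enable an off-by-default flag for a network-facing tool.
  hardeningEnable = [ "pie" ];
}
```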
format
Adds the -Wformat -Wformat-security
-Werror=format-security compiler options. At present,
this warns about calls to printf and
scanf functions where the format string is
not a string literal and there are no format arguments, as in
printf(foo);. This may be a security hole
if the format string came from untrusted input and contains
%n.
This needs to be turned off or fixed for errors similar to:
/tmp/nix-build-zynaddsubfx-2.5.2.drv-0/zynaddsubfx-2.5.2/src/UI/guimain.cpp:571:28: error: format not a string literal and no format arguments [-Werror=format-security]
printf(help_message);
^
cc1plus: some warnings being treated as errors
stackprotector
Adds the -fstack-protector-strong
--param ssp-buffer-size=4
compiler options. This adds safety checks against stack overwrites
rendering many potential code injection attacks into aborting situations.
In the best case this turns code injection vulnerabilities into denial
of service or into non-issues (depending on the application).
This needs to be turned off or fixed for errors similar to:
bin/blib.a(bios_console.o): In function `bios_handle_cup':
/tmp/nix-build-ipxe-20141124-5cbdc41.drv-0/ipxe-5cbdc41/src/arch/i386/firmware/pcbios/bios_console.c:86: undefined reference to `__stack_chk_fail'
fortify
Adds the -O2 -D_FORTIFY_SOURCE=2 compiler
options. During code generation the compiler knows a great deal of
information about buffer sizes (where possible), and attempts to replace
insecure unlimited length buffer function calls with length-limited ones.
This is especially useful for old, crufty code. Additionally, format
strings in writable memory that contain '%n' are blocked. If an application
depends on such a format string, it will need to be worked around.
Additionally, some warnings are enabled which might trigger build
failures if compiler warnings are treated as errors in the package build.
In this case, set NIX_CFLAGS_COMPILE to
-Wno-error=warning-type.
This needs to be turned off or fixed for errors similar to:
malloc.c:404:15: error: return type is an incomplete type
malloc.c:410:19: error: storage size of 'ms' isn't known
strdup.h:22:1: error: expected identifier or '(' before '__extension__'
strsep.c:65:23: error: register name not specified for 'delim'
installwatch.c:3751:5: error: conflicting types for '__open_2'
fcntl2.h:50:4: error: call to '__open_missing_mode' declared with attribute error: open with O_CREAT or O_TMPFILE in second argument needs 3 arguments
pic
Adds the -fPIC compiler option. This option adds
support for position-independent code in shared libraries, thus making
ASLR possible.
Most notably, the Linux kernel, kernel modules and other code not running in an operating system environment, such as boot loaders, won't build with PIC enabled. The compiler will in most cases complain that PIC is not supported for a specific build.
This needs to be turned off or fixed for assembler errors similar to:
ccbLfRgg.s: Assembler messages:
ccbLfRgg.s:33: Error: missing or invalid displacement expression `private_key_len@GOTOFF'
strictoverflow
Signed integer overflow is undefined behaviour according to the C
standard. If it happens, it is an error in the program as it should check
for overflow before it can happen, not afterwards. GCC provides built-in
functions to perform arithmetic with overflow checking, which are correct
and faster than any custom implementation. As a workaround, the option
-fno-strict-overflow makes gcc behave as if signed
integer overflows were defined.
This flag should not trigger any build or runtime errors.
relro
Adds the -z relro linker option. During program
load, several ELF memory sections need to be written to by the linker,
but can be turned read-only before turning over control to the program.
This prevents some GOT (and .dtors) overwrite attacks, but at least the
part of the GOT used by the dynamic linker (.got.plt) is still vulnerable.
This flag can break dynamic shared object loading. For instance, the
module systems of Xorg and OpenCV are incompatible with this flag. In almost
all cases the bindnow flag must also be disabled and
incompatible programs typically fail with similar errors at runtime.
bindnow
Adds the -z bindnow linker option. During program
load, all dynamic symbols are resolved, allowing for the complete GOT to
be marked read-only (due to relro). This prevents GOT
overwrite attacks. For very large applications, this can incur some
performance loss during initial load while symbols are resolved, but this
shouldn't be an issue for daemons.
This flag can break dynamic shared object loading. For instance, the module systems of Xorg and PHP are incompatible with this flag. Programs incompatible with this flag often fail at runtime due to missing symbols, like:
intel_drv.so: undefined symbol: vgaHWFreeHWRec
The following flags are disabled by default and should be enabled
with hardeningEnable for packages that take untrusted
input like network services.
pie
Adds the -fPIE compiler and -pie
linker options. Position Independent Executables are needed to take
advantage of Address Space Layout Randomization, supported by modern
kernel versions. While ASLR can already be enforced for data areas in
the stack and heap (brk and mmap), the code areas must be compiled as
position-independent. Shared libraries already do this with the
pic flag, so they gain ASLR automatically, but binary
.text regions need to be built with pie to gain ASLR.
When this happens, ROP attacks are much harder since there are no static
locations to bounce off of during a memory corruption attack.
For more in-depth information on these hardening flags and hardening in general, refer to the Debian Wiki, Ubuntu Wiki, Gentoo Wiki, and the Arch Wiki.
[1] It clears the
sys_lib_*search_path
variables in the Libtool script to prevent Libtool from using
libraries in /usr/lib and
such.
Table of Contents
The Nix language allows a derivation to produce multiple outputs, which is similar to what is utilized by other Linux distribution packaging systems. The outputs reside in separate nix store paths, so they can be mostly handled independently of each other, including passing to build inputs, garbage collection or binary substitution. The exception is that building from source always produces all the outputs.
The main motivation is to save disk space by reducing runtime closure sizes; consequently also sizes of substituted binaries get reduced. Splitting can be used to have more granular runtime dependencies, for example the typical reduction is to split away development-only files, as those are typically not needed during runtime. As a result, closure sizes of many packages can get reduced to a half or even much less.
When installing a package via systemPackages or nix-env you have several options:
You can install particular outputs explicitly, as each is available in the Nix language as an attribute of the package. The outputs attribute contains a list of output names.
You can let it use the default outputs. These are handled by meta.outputsToInstall attribute that contains a list of output names.
TODO: more about tweaking the attribute, etc.
NixOS provides configuration option environment.extraOutputsToInstall that allows adding extra outputs of environment.systemPackages atop the default ones. It's mainly meant for documentation and debug symbols, and it's also modified by specific options.
You can use packageOverrides to override meta.outputsToInstall attributes, but that's a rather inconvenient way. In the Nix language the individual outputs can be reached explicitly as attributes, e.g. coreutils.info, but the typical case is just using packages as build inputs.
When a multiple-output derivation gets into a build input of another derivation, the dev output is added if it exists, otherwise the first output is added. In addition to that, the propagatedBuildOutputs of that package, which by default contain $outputBin and $outputLib, are also added. (See Section 4.4.1, “File type groups”.)
Here you find how to write a derivation that produces multiple outputs.
In nixpkgs there is a framework supporting multiple-output derivations. It tries to cover most cases by default behavior. You can find the source separated in <nixpkgs/pkgs/build-support/setup-hooks/multiple-outputs.sh>; it's relatively readable. The whole machinery is triggered by defining the outputs attribute to contain the list of desired output names (strings).
outputs = [ "bin" "dev" "out" "doc" ];
Often such a single line is enough. For each output an equally named environment variable is passed to the builder and contains the path in nix store for that output. By convention, the first output should contain the executable programs provided by the package as that output is used by Nix in string conversions, allowing references to binaries like ${pkgs.perl}/bin/perl to always work. Typically you also want to have the main out output, as it catches any files that didn't get elsewhere.
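A minimal sketch of such a derivation might look like this (the package name, source, and install commands are illustrative):

```nix
stdenv.mkDerivation {
  name = "libgreet-1.0";            # hypothetical package
  src = ./libgreet-1.0.tar.gz;
  outputs = [ "bin" "dev" "out" "doc" ];
  # Each output name is also an environment variable holding
  # that output's store path, usable from any build phase:
  postInstall = ''
    mkdir -p $doc/share/doc/libgreet
    cp README $doc/share/doc/libgreet/
  '';
}
```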
There is also a special debug output, described at separateDebugInfo. The support code currently recognizes some particular kinds of outputs and either instructs the build system of the package to put files into their desired outputs or it moves the files during the fixup phase. Each group of file types has an outputFoo variable specifying the output name where they should go. If that variable isn't defined by the derivation writer, it is guessed: a default output name is defined, falling back to other possibilities if the output isn't defined.
$outputDev
is for development-only files. These include C(++) headers, pkg-config, cmake and aclocal files. They go to dev or out by default.
$outputBin
is meant for user-facing binaries, typically residing in bin/. They go to bin or out by default.
$outputLib
is meant for libraries, typically residing in lib/ and libexec/. They go to lib or out by default.
$outputDoc
is for user documentation, typically residing in share/doc/. It goes to doc or out by default.
$outputDevdoc
is for developer documentation. Currently we count gtk-doc in there. It goes to devdoc or is removed (!) by default. This is because e.g. gtk-doc tends to be rather large and completely unused by nixpkgs users.
$outputMan
is for man pages (except for section 3). They go to man or doc or $outputBin by default.
$outputDevman
is for section 3 man pages. They go to devman or $outputMan by default.
$outputInfo
is for info pages. They go to info or doc or $outputMan by default.
Some configure scripts don't like some of the parameters passed by default by the framework, e.g. --docdir=/foo/bar. You can disable this by setting setOutputFlags = false;.
The outputs of a single derivation can retain references to each other, but note that circular references are not allowed. (And each strongly-connected component would act as a single output anyway.)
Most split packages contain their core functionality in libraries. These libraries tend to refer to various kinds of data that typically gets into out, e.g. locale strings, so there is often no advantage in separating the libraries into lib, as keeping them in out is easier.
Some packages have hidden assumptions on install paths, which complicates splitting.
Table of Contents
"Cross-compilation" means compiling a program on one machine for another type of machine. For example, a typical use of cross compilation is to compile programs for embedded devices. These devices often don't have the computing power and memory to compile their own programs. One might think that cross-compilation is a fairly niche concern, but there are advantages to being rigorous about distinguishing build-time vs run-time environments even when one is developing and deploying on the same machine. Nixpkgs is increasingly adopting this opinion in that packages should be written with cross-compilation in mind, and nixpkgs should evaluate in a similar way (by minimizing cross-compilation-specific special cases) whether or not one is cross-compiling.
This chapter will be organized in three parts. First, it will describe the basics of how to package software in a way that supports cross-compilation. Second, it will describe how to use Nixpkgs when cross-compiling. Third, it will describe the internal infrastructure supporting cross-compilation.
The three GNU Autoconf platforms, build, host, and target, are historically the result of much confusion. https://gcc.gnu.org/onlinedocs/gccint/Configure-Terms.html clears this up somewhat but there is more to be said. One important piece of advice to get out of the way is: unless you are packaging a compiler or other build tool, just worry about the build and host platforms. Dealing with just two platforms usually better matches people's preconceptions, and in this case is completely correct.
In Nixpkgs, these three platforms are defined as attribute sets under the names buildPlatform, hostPlatform, and targetPlatform.
All are guaranteed to contain at least a platform field, which contains detailed information on the platform.
All three are always defined at the top level, so one can get at them just like a dependency in a function that is imported with callPackage:
{ stdenv, buildPlatform, hostPlatform, fooDep, barDep, .. }: ...
In the native case, all three platforms contain a system field with a short 2-part, hyphen-separated string summarizing the platform name.
But when cross-compiling, hostPlatform and targetPlatform may instead contain config with a fuller 3- or 4-part string in the manner of LLVM.
We should have all 3 platforms always contain both, and maybe give config a better name while we are at it.
buildPlatform
The "build platform" is the platform on which a package is built. Once someone has a built package, or pre-built binary package, the build platform should not matter and can be safely ignored.
hostPlatform
The "host platform" is the platform on which a package is run. This is the simplest platform to understand, but also the one with the worst name.
targetPlatform
The "target platform" is the black sheep. The other two intrinsically apply to all compiled software, or any build process with a notion of "build-time" followed by "run-time". The target platform only applies to programming tools, and even then is only a good fit for some of them. Briefly, GCC, Binutils, GHC, and certain other tools are written in such a way that a single build can only compile code for a single platform. Thus, when building them, one must think ahead about which platforms they wish to use the tool to produce machine code for, and build binaries for each.
There is no fundamental need to think about the target ahead of time like this. LLVM, for example, was designed from the beginning with cross-compilation in mind, and so a normal LLVM binary will support every architecture that LLVM supports. If the tool supports modular or pluggable backends, one might imagine specifying a set of target platforms / backends one wishes to support, rather than a single one.
The biggest reason for the mess, if there is one, is that many compilers have the bad habit of a build process that builds the compiler and standard library/runtime together. Then specifying the target platform is essential, because it determines the host platform of the standard library/runtime. Nixpkgs tries to avoid this where possible too, but still, because the concept of a target platform is so ingrained now in Autoconf and other tools, it is best to support it as is. Tools like LLVM that don't need up-front target platforms can safely ignore it like normal packages, and it will do no harm.
You may also encounter stdenv.cross.
This field is defined as hostPlatform when the host and build platforms differ, but otherwise not defined at all.
This field is obsolete and will soon disappear; please do not use it.
As mentioned in the introduction to this chapter, one can think about a build time vs run time distinction whether cross-compiling or not. In the case of cross-compilation, this corresponds with whether a derivation running on the native or foreign platform is produced. An interesting thing to think about is how this corresponds with the three Autoconf platforms. In the run-time case, the depending and depended-on package simply have matching build, host, and target platforms. But in the build-time case, one can imagine "sliding" the platforms one over. The depended-on package's host and target platforms (respectively) become the depending package's build and host platforms. This is the most important guiding principle behind cross-compilation with Nixpkgs, and will be called the sliding window principle. In this manner, given the 3 platforms for one package, we can determine the three platforms for all its transitive dependencies.
Some examples will probably make this clearer.
If a package is being built with a (build, host, target) platform triple of (foo, bar, bar), then its build-time dependencies would have a triple of (foo, foo, bar), and those packages' build-time dependencies would have triple of (foo, foo, foo).
In other words, it should take two "rounds" of following build-time dependency edges before one reaches a fixed point where, by the sliding window principle, the platform triple no longer changes.
Indeed, this happens with cross compilation, where only rounds of native dependencies starting with the second necessarily coincide with native packages.
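The sliding window principle can be sketched as a small Nix function (illustrative only; no such helper exists in nixpkgs):

```nix
# Given a package's platform triple, compute the triple of its
# build-time dependencies: they are built on the same build platform,
# run on that build platform, and (if they are tools) target the
# depending package's host platform.
buildDepPlatforms = { build, host, target }: {
  build  = build;
  host   = build;
  target = host;
};
# buildDepPlatforms { build = "foo"; host = "bar"; target = "bar"; }
#   evaluates to { build = "foo"; host = "foo"; target = "bar"; };
# applying it once more reaches the fixed point (foo, foo, foo).
```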
How does this work in practice? Nixpkgs is now structured so that build-time dependencies are taken from buildPackages, whereas run-time dependencies are taken from the top level attribute set.
For example, buildPackages.gcc should be used at build time, while gcc should be used at run time.
Now, for most of Nixpkgs's history, there was no buildPackages, and most packages have not been refactored to use it explicitly.
Instead, one can use the four attributes used for specifying dependencies as documented in ???.
We "splice" together the run-time and build-time package sets with callPackage, and then mkDerivation for each of four attributes pulls the right derivation out.
This splicing can be skipped when not cross compiling as the package sets are the same, but is a bit slow for cross compiling.
Because of this, a best-of-both-worlds solution is in the works with no splicing or explicit access of buildPackages needed.
For now, feel free to use either method.
Many sources (manual, wiki, etc) probably mention passing system, platform, and, optionally, crossSystem to nixpkgs:
import <nixpkgs> { system = ..; platform = ..; crossSystem = ..; }.
system and platform together determine the system on which packages are built, and crossSystem specifies the platform on which packages are ultimately intended to run, if it is different.
This still works, but with more recent changes, one can alternatively pass localSystem, containing system and platform, for symmetry.
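For example, a cross build might be invoked roughly like this (the attribute values are illustrative; the exact fields accepted by crossSystem depend on the nixpkgs version):

```nix
import <nixpkgs> {
  # The platform the build runs on.
  localSystem = { system = "x86_64-linux"; };
  # The platform the built packages should ultimately run on
  # (illustrative config string).
  crossSystem = { config = "aarch64-unknown-linux-gnu"; };
}
```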
One would think that localSystem and crossSystem overlap horribly with the three *Platforms (buildPlatform, hostPlatform, and targetPlatform; see stage.nix or the manual).
Actually, those identifiers are purposefully not used here to draw a subtle but important distinction:
While the granularity of having 3 platforms is necessary to properly *build* packages, it is overkill for specifying the user's *intent* when making a build plan or package set.
A simple "build vs deploy" dichotomy is adequate: the sliding window principle described in the previous section shows how to interpolate between these two "end points" to get the 3 platform triple for each bootstrapping stage.
That means that for any package in a given package set, even those not bound on the top level but only reachable via dependencies or buildPackages, the three platforms will be defined as one of localSystem or crossSystem, with the former replacing the latter as one traverses build-time dependencies.
A last simple difference then is crossSystem should be null when one doesn't want to cross-compile, while the *Platforms are always non-null.
localSystem is always non-null.
To be written.
If one explores Nixpkgs, they will see derivations with names like gccCross.
Such *Cross derivations are a holdover from before we properly distinguished between the host and target platforms:
the derivation with "Cross" in the name covered the build = host != target case, while the other covered the host = target case, with the build platform the same or not based on whether one was using its .nativeDrv or .crossDrv.
This ugliness will disappear soon.
Table of Contents
Nix comes with certain defaults about what packages can and cannot be installed, based on a package's metadata. By default, Nix will prevent installation if any of the following criteria are true:
The package is thought to be broken, and has had
its meta.broken set to
true.
The package's meta.license is set
to a license which is considered to be unfree.
The package has known security vulnerabilities but
has not or can not be updated for some reason, and a list of issues
has been entered in to the package's
meta.knownVulnerabilities.
Note that all this is checked during evaluation already,
and the check includes any package that is evaluated.
In particular, all build-time dependencies are checked.
nix-env -qa will (attempt to) hide any packages
that would be refused.
Each of these criteria can be altered in the nixpkgs configuration.
The nixpkgs configuration for a NixOS system is set in the
configuration.nix, as in the following example:
{
nixpkgs.config = {
allowUnfree = true;
};
}
However, this does not allow unfree software for individual users. Their configurations are managed separately.
A user's nixpkgs configuration is stored in a user-specific
configuration file located at
~/.config/nixpkgs/config.nix. For example:
{
allowUnfree = true;
}
There are two ways to try compiling a package which has been marked as broken.
For allowing the build of a broken package once, you can use an environment variable for a single invocation of the nix tools:
$ export NIXPKGS_ALLOW_BROKEN=1
For permanently allowing broken packages to be built, you may
add allowBroken = true; to your user's
configuration file, like this:
{
allowBroken = true;
}
There are several ways to tweak how Nix handles a package which has been marked as unfree.
To temporarily allow all unfree packages, you can use an environment variable for a single invocation of the nix tools:
$ export NIXPKGS_ALLOW_UNFREE=1
It is possible to permanently allow individual unfree packages,
while still blocking unfree packages by default using the
allowUnfreePredicate configuration
option in the user configuration file.
This option is a function which accepts a package as a parameter, and returns a boolean. The following example configuration accepts a package and always returns false:
{
allowUnfreePredicate = (pkg: false);
}
A more useful example: the following configuration only allows Flash Player and Visual Studio Code:
{
allowUnfreePredicate = (pkg: elem (builtins.parseDrvName pkg.name).name [ "flashplayer" "vscode" ]);
}
It is also possible to whitelist and blacklist licenses
that are specifically acceptable or not acceptable, using
whitelistedLicenses and
blacklistedLicenses, respectively.
The following example configuration whitelists the
licenses amd and wtfpl:
{
whitelistedLicenses = with stdenv.lib.licenses; [ amd wtfpl ];
}
The following example configuration blacklists the
gpl3 and agpl3 licenses:
{
blacklistedLicenses = with stdenv.lib.licenses; [ agpl3 gpl3 ];
}
A complete list of licenses can be found in the file
lib/licenses.nix of the nixpkgs tree.
There are several ways to tweak how Nix handles a package which has been marked as insecure.
To temporarily allow all insecure packages, you can use an environment variable for a single invocation of the nix tools:
$ export NIXPKGS_ALLOW_INSECURE=1
It is possible to permanently allow individual insecure
packages, while still blocking other insecure packages by
default using the permittedInsecurePackages
configuration option in the user configuration file.
The following example configuration permits the
installation of the hypothetically insecure package
hello, version 1.2.3:
{
permittedInsecurePackages = [
"hello-1.2.3"
];
}
It is also possible to create a custom policy around which
insecure packages to allow and deny, by overriding the
allowInsecurePredicate configuration
option.
The allowInsecurePredicate option is a
function which accepts a package and returns a boolean, much
like allowUnfreePredicate.
The following configuration example only allows insecure packages with very short names:
{
allowInsecurePredicate = (pkg: (builtins.stringLength (builtins.parseDrvName pkg.name).name) <= 5);
}
Note that permittedInsecurePackages is
only checked if allowInsecurePredicate is not
specified.
packageOverrides
You can define a function called
packageOverrides in your local
~/.config/nixpkgs/config.nix to override Nix packages. It
must be a function that takes pkgs as an argument and returns a modified
set of packages.
{
packageOverrides = pkgs: rec {
foo = pkgs.foo.override { ... };
};
}
Table of Contents
The nixpkgs repository has several utility functions to manipulate Nix expressions.
Sometimes one wants to override parts of
nixpkgs, e.g. derivation attributes, the results of
derivations or even the whole package set.
The function override is usually available for all the
derivations in the nixpkgs expression (pkgs).
It is used to override the arguments passed to a function.
Example usages:
pkgs.foo.override { arg1 = val1; arg2 = val2; ... }
import pkgs.path { overlays = [ (self: super: {
foo = super.foo.override { barSupport = true ; };
})]};
mypkg = pkgs.callPackage ./mypkg.nix {
mydep = pkgs.mydep.override { ... };
}
In the first example, pkgs.foo is the result of a function call
with some default arguments, usually a derivation.
Using pkgs.foo.override will call the same function with
the given new arguments.
The function overrideAttrs allows overriding the
attribute set passed to a stdenv.mkDerivation call,
producing a new derivation based on the original one.
This function is available on all derivations produced by the
stdenv.mkDerivation function, which is most packages
in the nixpkgs expression pkgs.
Example usage:
helloWithDebug = pkgs.hello.overrideAttrs (oldAttrs: rec {
separateDebugInfo = true;
});
In the above example, the separateDebugInfo attribute is
overriden to be true, thus building debug info for
helloWithDebug, while all other attributes will be
retained from the original hello package.
The argument oldAttrs is conventionally used to refer to
the attr set originally passed to stdenv.mkDerivation.
separateDebugInfo is processed only by the
stdenv.mkDerivation function, not the generated, raw
Nix derivation. Thus, using overrideDerivation will
not work in this case, as it overrides only the attributes of the final
derivation. It is for this reason that overrideAttrs
should be preferred in (almost) all cases to
overrideDerivation, i.e. to allow using
stdenv.mkDerivation to process input arguments, as well
as the fact that it is easier to use (you can use the same attribute
names you see in your Nix code, instead of the generated ones, e.g.
buildInputs vs nativeBuildInputs),
and it involves less typing.
You should prefer overrideAttrs in almost all
cases; see its documentation for the reasons why.
overrideDerivation is not deprecated and will continue
to work, but is less nice to use and does not have as many abilities as
overrideAttrs.
overrideDerivation is best reserved for ad-hoc customisation, such as in ~/.config/nixpkgs/config.nix.
The function overrideDerivation creates a new derivation
based on an existing one by overriding the original's attributes with
the attribute set produced by the specified function.
This function is available on all
derivations defined using the makeOverridable function.
Most standard derivation-producing functions, such as
stdenv.mkDerivation, are defined using this
function, which means most packages in the nixpkgs expression,
pkgs, have this function.
Example usage:
mySed = pkgs.gnused.overrideDerivation (oldAttrs: {
name = "sed-4.2.2-pre";
src = fetchurl {
url = ftp://alpha.gnu.org/gnu/sed/sed-4.2.2-pre.tar.bz2;
sha256 = "11nq06d131y4wmf3drm0yk502d2xc6n5qy82cg88rb9nqd2lj41k";
};
patches = [];
});
In the above example, the name, src,
and patches of the derivation will be overridden, while
all other attributes will be retained from the original derivation.
The argument oldAttrs is used to refer to the attribute set of
the original derivation.
A package's attributes are evaluated *before* being modified by the
overrideDerivation function.
For example, the name attribute reference
in url = "mirror://gnu/hello/${name}.tar.gz";
is filled-in *before* the overrideDerivation function
modifies the attribute set. This means that overriding the
name attribute, in this example, *will not* change the
value of the url attribute. Instead, we need to override
both the name *and* url attributes.
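Continuing the example, a sketch of overriding both attributes together (the version string is illustrative):

```nix
helloNew = pkgs.hello.overrideDerivation (oldAttrs: rec {
  # url was computed from name before overrideDerivation ran,
  # so both must be overridden together; rec lets url reuse name.
  name = "hello-2.10";   # illustrative version
  url = "mirror://gnu/hello/${name}.tar.gz";
});
```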
The function lib.makeOverridable is used to make the result
of a function easily customizable. This utility only makes sense for functions
that accept an argument set and return an attribute set.
Example usage:
f = { a, b }: { result = a+b; }
c = lib.makeOverridable f { a = 1; b = 2; }
The variable c is the value of the f function
applied with some default arguments. Hence the value of c.result
is 3, in this example.
The variable c however also has some additional functions, like
c.override which can be used to
override the default arguments. In this example the value of
(c.override { a = 4; }).result is 6.
Generators are functions that create file formats from Nix
data structures, e.g. for configuration files.
There are generators available for: INI,
JSON and YAML.
All generators follow a similar call interface: generatorName
configFunctions data, where configFunctions is a
set of user-defined functions that format variable parts of the content.
They each have common defaults, so often they do not need to be set
manually. An example is mkSectionName ? (name: libStr.escape [ "[" "]"
] name) from the INI generator. It gets the name
of a section and returns a sanitized name. The default
mkSectionName escapes [ and
] with a backslash.
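As a sketch, calling the INI generator with an empty settings set and a nested attribute set (one inner set per section) looks like this:

```nix
lib.generators.toINI {} {
  # attribute name = section name; inner attributes = key/value pairs
  foo = { hi = "there"; ciao = "bye"; };
}
# yields a string roughly like:
#   [foo]
#   ciao=bye
#   hi=there
```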
Nix store paths can be converted to strings by enclosing a derivation attribute like so: "${drv}".
Detailed documentation for each generator can be found in
lib/generators.nix.
buildFHSUserEnv provides a way to build and run
FHS-compatible lightweight sandboxes. It creates an isolated root with
bound /nix/store, so its footprint in terms of disk
space needed is quite small. This allows one to run software which is hard or
unfeasible to patch for NixOS -- 3rd-party source trees with FHS assumptions,
games distributed as tarballs, software with integrity checking and/or external
self-updating binaries. It uses the Linux namespaces feature to create
temporary lightweight environments which are destroyed after all child
processes exit, without requiring root user rights. Accepted arguments are:
name
Environment name.
targetPkgs
Packages to be installed for the main host's architecture (i.e. x86_64 on x86_64 installations). Along with libraries, binaries are also installed.
multiPkgs
Packages to be installed for all architectures supported by a host (i.e. i686 and x86_64 on x86_64 installations). Only libraries are installed by default.
extraBuildCommands
Additional commands to be executed for finalizing the directory structure.
extraBuildCommandsMulti
Like extraBuildCommands, but executed only on multilib architectures.
extraOutputsToInstall
Additional derivation outputs to be linked for both target and multi-architecture packages.
extraInstallCommands
Additional commands to be executed for finalizing the derivation with the runner script.
runScript
A command that would be executed inside the sandbox and passed all the command line arguments. It defaults to bash.
One can create a simple environment using a shell.nix
like this:
{ pkgs ? import <nixpkgs> {} }:
(pkgs.buildFHSUserEnv {
name = "simple-x11-env";
targetPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]) ++ (with pkgs.xorg;
[ libX11
libXcursor
libXrandr
]);
multiPkgs = pkgs: (with pkgs;
[ udev
alsaLib
]);
runScript = "bash";
}).env
Running nix-shell would then drop you into a shell with
these libraries and binaries available. You can use this to run
closed-source applications which expect FHS structure without hassles:
simply change runScript to the application path,
e.g. ./bin/start.sh -- relative paths are supported.
pkgs.dockerTools is a set of functions for creating and
manipulating Docker images according to the
Docker Image Specification v1.0.0
. Docker itself is not used to perform any of the operations done by these
functions.
The dockerTools API is unstable and may be subject to
backwards-incompatible changes in the future.
This function is analogous to the docker build command, in that it can be used to build a Docker-compatible repository tarball containing a single image with one or multiple layers. As such, the result is suitable for being loaded in Docker with docker load.
The parameters of buildImage, with example values, are
described below:
The above example will build a Docker image redis/latest
from the given base image. Loading and running this image in Docker results in
redis-server being started automatically.
Note: using runAsRoot requires the kvm
device to be available, because the layer is built inside a VM.
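For instance, a buildImage call producing the redis/latest image described below might be sketched as follows; the attribute names match the buildImage interface, while the base image path and configuration values are purely illustrative:

```nix
buildImage {
  name = "redis";                # image name
  tag = "latest";                # image tag
  fromImage = ./debian.tar.gz;   # optional base image tarball (illustrative path)
  contents = pkgs.redis;         # paths copied into the new layer
  runAsRoot = ''                 # runs as root inside a VM; requires the kvm device
    #!${stdenv.shell}
    mkdir -p /data
  '';
  config = {                     # image configuration, per the Docker spec
    Cmd = [ "/bin/redis-server" ];
    WorkingDir = "/data";
    Volumes = { "/data" = {}; };
  };
}
```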
After the new layer has been created, its closure
(to which contents, config and
runAsRoot contribute) will be copied in the layer itself.
Only new dependencies that are not already in the existing layers will be copied.
At the end of the process, only one new single layer will be produced and added to the resulting image.
The resulting repository will only list the single image
image/tag. In the case of Example 7.1, “Docker build”
it would be redis/latest.
It is possible to inspect the arguments with which an image was built
using its buildArgs attribute.
If the image fails with getProtocolByName: does not exist (no such protocol name: tcp),
you may need to add pkgs.iana_etc to contents.
If it fails with Error_Protocol ("certificate has unknown CA",True,UnknownCa),
you may need to add pkgs.cacert to contents.
This function is analogous to the docker pull command,
in that it can be used to fetch a Docker image from a Docker registry.
Currently only registry v1 is supported.
By default Docker Hub
is used to pull images.
Its parameters are described in the example below:
Note: the checksum is computed on the unpacked directory, not on the final tarball.
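A sketch of a pullImage call; the attribute names (imageName, imageTag, sha256) reflect the function's interface, and the sha256 shown is a placeholder, not a real checksum:

```nix
pullImage {
  imageName = "debian";   # image to fetch from the registry (Docker Hub by default)
  imageTag = "jessie";    # tag to fetch
  # checksum of the unpacked directory, not the tarball -- placeholder value:
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```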
This function is analogous to the docker export command, in that it can be used to flatten a Docker image that contains multiple layers. It is in fact the result of the merge of all the layers of the image. As such, the result is suitable for being imported in Docker with docker import.
Note: like buildImage with runAsRoot, this function requires the kvm
device to be available.
The parameters of exportImage are the following:
Example 7.3. Docker export
exportImage {
fromImage = someLayeredImage;
fromImageName = null;
fromImageTag = null;
name = someLayeredImage.name;
}
The parameters relative to the base image have the same synopsis as
described in Section 7.4.1, “buildImage”, except that
fromImage is the only required argument in this case.
The name argument is the name of the derivation output,
which defaults to fromImage.name.
This constant string is a helper for setting up the base files for managing
users and groups, only if such files don't exist already.
It is suitable for being used in a
runAsRoot script for cases like
in the example below:
Example 7.4. Shadow base files
buildImage {
name = "shadow-basic";
runAsRoot = ''
#!${stdenv.shell}
${shadowSetup}
groupadd -r redis
useradd -r -g redis redis
mkdir /data
chown redis:redis /data
'';
}
Creating base files like /etc/passwd or
/etc/login.defs is necessary for shadow-utils to
manipulate users and groups.
Nix packages can declare meta-attributes
that contain information about a package such as a description, its
homepage, its license, and so on. For instance, the GNU Hello package
has a meta declaration like this:
meta = {
description = "A program that produces a familiar, friendly greeting";
longDescription = ''
GNU Hello is a program that prints "Hello, world!" when you run it.
It is fully customizable.
'';
homepage = http://www.gnu.org/software/hello/manual/;
license = stdenv.lib.licenses.gpl3Plus;
maintainers = [ stdenv.lib.maintainers.eelco ];
platforms = stdenv.lib.platforms.all;
};
Meta-attributes are not passed to the builder of the package. Thus, a change to a meta-attribute doesn’t trigger a recompilation of the package. The value of a meta-attribute must be a string.
The meta-attributes of a package can be queried from the command-line using nix-env:
$ nix-env -qa hello --json
{
"hello": {
"meta": {
"description": "A program that produces a familiar, friendly greeting",
"homepage": "http://www.gnu.org/software/hello/manual/",
"license": {
"fullName": "GNU General Public License version 3 or later",
"shortName": "GPLv3+",
"url": "http://www.fsf.org/licensing/licenses/gpl.html"
},
"longDescription": "GNU Hello is a program that prints \"Hello, world!\" when you run it.\nIt is fully customizable.\n",
"maintainers": [
"Ludovic Court\u00e8s <ludo@gnu.org>"
],
"platforms": [
"i686-linux",
"x86_64-linux",
"armv5tel-linux",
"armv7l-linux",
"mips64el-linux",
"x86_64-darwin",
"i686-cygwin",
"i686-freebsd",
"x86_64-freebsd",
"i686-openbsd",
"x86_64-openbsd"
],
"position": "/home/user/dev/nixpkgs/pkgs/applications/misc/hello/default.nix:14"
},
"name": "hello-2.9",
"system": "x86_64-linux"
}
}
nix-env knows about the
description field specifically:
$ nix-env -qa hello --description
hello-2.3  A program that produces a familiar, friendly greeting
It is expected that each meta-attribute is one of the following:
description: A short (one-line) description of the package. This is shown by nix-env -q --description and also on the Nixpkgs release pages.
Don’t include a period at the end. Don’t include newline characters. Capitalise the first character. For brevity, don’t repeat the name of the package; just describe what it does.
Wrong: "libpng is a library that allows you to decode PNG images."
Right: "A library for decoding PNG images"
longDescription: An arbitrarily long description of the package.
branch: Release branch. Used to specify that a package is not going to receive updates that are not in this branch; for example, Linux kernel 3.0 is supposed to be updated to 3.0.X, not 3.1.
homepage: The package’s homepage. Example:
http://www.gnu.org/software/hello/manual/
downloadPage: The page where a link to the current version can be found. Example:
http://ftp.gnu.org/gnu/hello/
license
The license, or licenses, for the package. One from the attribute set
defined in
nixpkgs/lib/licenses.nix. At this moment
using both a list of licenses and a single license is valid. If the
license field is in the form of a list representation, then it means
that parts of the package are licensed differently. Each license
should preferably be referenced by their attribute. The non-list
attribute value can also be a space delimited string representation of
the contained attribute shortNames or spdxIds. The following are all valid
examples:
Single license referenced by attribute (preferred)
stdenv.lib.licenses.gpl3.
Single license referenced by its attribute shortName (frowned upon)
"gpl3".
Single license referenced by its attribute spdxId (frowned upon)
"GPL-3.0".
Multiple licenses referenced by attribute (preferred)
with stdenv.lib.licenses; [ asl20 free ofl ].
Multiple licenses referenced as a space delimited string of attribute shortNames (frowned upon)
"asl20 free ofl".
For details, see Section 8.2, “Licenses”.
maintainers: A list of names and e-mail addresses of the
maintainers of this Nix expression. If
you would like to be a maintainer of a package, you may want to add
yourself to nixpkgs/lib/maintainers.nix
and write something like [ stdenv.lib.maintainers.alice
stdenv.lib.maintainers.bob ].
priority: The priority of the package,
used by nix-env to resolve file name conflicts
between packages. See the Nix manual page for
nix-env for details. Example:
"10" (a low-priority
package).
platforms: The list of Nix platform types on which the package is supported. Hydra builds packages according to the platform specified. If no platform is specified, the package does not have prebuilt binaries. An example is:
meta.platforms = stdenv.lib.platforms.linux;
The attribute set stdenv.lib.platforms in
nixpkgs/lib/platforms.nix defines various common
lists of platform types.
hydraPlatforms: The list of Nix platform types for which the Hydra
instance at hydra.nixos.org will build the
package. (Hydra is the Nix-based continuous build system.) It
defaults to the value of meta.platforms. Thus,
the only reason to set meta.hydraPlatforms is
if you want hydra.nixos.org to build the
package on a subset of meta.platforms, or not
at all, e.g.
meta.platforms = stdenv.lib.platforms.linux; meta.hydraPlatforms = [];
broken: If set to true, the package is
marked as “broken”, meaning that it won’t show up in
nix-env -qa, and cannot be built or installed.
Such packages should be removed from Nixpkgs eventually unless
they are fixed.
updateWalker: If set to true, the package is
tested to be updated correctly by the update-walker.sh
script without additional settings. Such packages have
meta.version set and their homepage (or
the page specified by meta.downloadPage) contains
a direct link to the package tarball.
The meta.license attribute should preferably contain
a value from stdenv.lib.licenses defined in
nixpkgs/lib/licenses.nix,
or an in-place license description of the same format if the license is
unlikely to be useful in another expression.
Although it's typically better to indicate the specific license, a few generic options are available:
stdenv.lib.licenses.free,
"free": Catch-all for free software licenses not listed above.
stdenv.lib.licenses.unfreeRedistributable,
"unfree-redistributable": Unfree package that can be redistributed in binary form. That is, it’s legal to redistribute the output of the derivation. This means that the package can be included in the Nixpkgs channel.
Sometimes proprietary software can only be redistributed
unmodified. Make sure the builder doesn’t actually modify the
original binaries; otherwise we’re breaking the license. For
instance, the NVIDIA X11 drivers can be redistributed unmodified,
but our builder applies patchelf to make them
work. Thus, its license is "unfree" and it
cannot be included in the Nixpkgs channel.
stdenv.lib.licenses.unfree,
"unfree": Unfree package that cannot be redistributed. You can build it yourself, but you cannot redistribute the output of the derivation. Thus it cannot be included in the Nixpkgs channel.
stdenv.lib.licenses.unfreeRedistributableFirmware,
"unfree-redistributable-firmware": This package supplies unfree, redistributable
firmware. This is a separate value from
unfree-redistributable because not everybody
cares whether firmware is free.
The standard build
environment makes it easy to build typical Autotools-based
packages with very little code. Any other kind of package can be
accommodated by overriding the appropriate phases of
stdenv. However, there are specialised functions
in Nixpkgs to easily build packages for other programming languages,
such as Perl or Haskell. These are described in this chapter.
In this document and related Nix expressions we use the term Beam to describe the environment. Beam is the name of the Erlang Virtual Machine and, as far as we know, from a packaging perspective all languages that run on Beam are interchangeable. The things that do change, like the build system, are transparent to the users of the package. So we make no distinction.
By default Rebar3 wants to manage its own dependencies. In the
normal, non-Nix world, this is perfectly acceptable. In the Nix world it
is not. To support this we have created two versions of rebar3,
rebar3 and rebar3-open. The
rebar3 version has been patched to remove the
ability to download anything. If you are not running it in a
nix-shell or a nix-build, then it's probably not going to work for
you. rebar3-open is the normal, unmodified
rebar3. It should work exactly as would any other version of
rebar3. Any Erlang package should rely on
rebar3, and that's really what you should be
using too.
Both Mix and Erlang.mk work exactly as you would expect. There
is a bootstrap process that needs to be run for both of
them. However, that is supported by the
buildMix and buildErlangMk derivations.
Beam packages are not registered in the top level simply because
they are not relevant to the vast majority of Nix users. They are
installable using the beamPackages attribute
set.
You can list the available packages in
beamPackages with the following command:
$ nix-env -f "<nixpkgs>" -qaP -A beamPackages
beamPackages.esqlite    esqlite-0.2.1
beamPackages.goldrush   goldrush-0.1.7
beamPackages.ibrowse    ibrowse-4.2.2
beamPackages.jiffy      jiffy-0.14.5
beamPackages.lager      lager-3.0.2
beamPackages.meck       meck-0.8.3
beamPackages.rebar3-pc  pc-1.1.0
To install any of those packages into your profile, refer to them by their attribute path (first column):
$ nix-env -f "<nixpkgs>" -iA beamPackages.ibrowse
The attribute path of any Beam package corresponds to the name of that particular package in Hex or its OTP Application/Release name.
There is a Nix function called
buildRebar3. We use this function to make a
derivation that understands how to build a rebar3 project. For
example, the expression we use to build the hex2nix
project follows.
{stdenv, fetchFromGitHub, buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 rec {
name = "hex2nix";
version = "0.0.1";
src = fetchFromGitHub {
owner = "ericbmerritt";
repo = "hex2nix";
rev = "${version}";
sha256 = "1w7xjidz1l5yjmhlplfx7kphmnpvqm67w99hd2m7kdixwdxq0zqg";
};
beamDeps = [ ibrowse jsx erlware_commons ];
}
The only visible difference between this derivation and
something like stdenv.mkDerivation is that we
have added beamDeps to the derivation. If
you add your Beam dependencies here they will be correctly
handled by the system.
If your package needs to compile native code via Rebar's port
compilation mechanism, you should add compilePort =
true; to the derivation.
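A sketch of such a derivation, with an illustrative package name and a placeholder hash; only the compilePort attribute is the point here:

```nix
{ buildRebar3, fetchHex }:

buildRebar3 {
  name = "my-nif-lib";    # illustrative name
  version = "0.1.0";
  src = fetchHex {
    pkg = "my_nif_lib";   # illustrative Hex package
    version = "0.1.0";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
  compilePort = true;     # compile native code via Rebar's port mechanism
}
```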
Erlang.mk functions almost identically to Rebar. The only real
difference is that buildErlangMk is called
instead of buildRebar3:
{ buildErlangMk, fetchHex, cowlib, ranch }:
buildErlangMk {
name = "cowboy";
version = "1.0.4";
src = fetchHex {
pkg = "cowboy";
version = "1.0.4";
sha256 =
"6a0edee96885fae3a8dd0ac1f333538a42e807db638a9453064ccfdaa6b9fdac";
};
beamDeps = [ cowlib ranch ];
meta = {
description = ''Small, fast, modular HTTP server written in
Erlang.'';
license = stdenv.lib.licenses.isc;
homepage = "https://github.com/ninenines/cowboy";
};
}
Mix functions almost identically to Rebar. The only real
difference is that buildMix is called
instead of buildRebar3:
{ buildMix, fetchHex, plug, absinthe }:
buildMix {
name = "absinthe_plug";
version = "1.0.0";
src = fetchHex {
pkg = "absinthe_plug";
version = "1.0.0";
sha256 =
"08459823fe1fd4f0325a8bf0c937a4520583a5a26d73b193040ab30a1dfc0b33";
};
beamDeps = [ plug absinthe];
meta = {
description = ''A plug for Absinthe, an experimental GraphQL
toolkit'';
license = stdenv.lib.licenses.bsd3;
homepage = "https://github.com/CargoSense/absinthe_plug";
};
}
Often, all you want to do is be able to access a valid
environment that contains a specific package and its
dependencies. We can do that with the env
part of a derivation. For example, let's say we want to access an
Erlang REPL with ibrowse loaded up. We could do the following.
~/w/nixpkgs ❯❯❯ nix-shell -A beamPackages.ibrowse.env --run "erl"
Erlang/OTP 18 [erts-7.0] [source] [64-bit] [smp:4:4] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V7.0 (abort with ^G)
1> m(ibrowse).
Module: ibrowse
MD5: 3b3e0137d0cbb28070146978a3392945
Compiled: January 10 2016, 23:34
Object file: /nix/store/g1rlf65rdgjs4abbyj4grp37ry7ywivj-ibrowse-4.2.2/lib/erlang/lib/ibrowse-4.2.2/ebin/ibrowse.beam
Compiler options: [{outdir,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/ebin"},
debug_info,debug_info,nowarn_shadow_vars,
warn_unused_import,warn_unused_vars,warnings_as_errors,
{i,"/tmp/nix-build-ibrowse-4.2.2.drv-0/hex-source-ibrowse-4.2.2/_build/default/lib/ibrowse/include"}]
Exports:
add_config/1 send_req_direct/7
all_trace_off/0 set_dest/3
code_change/3 set_max_attempts/3
get_config_value/1 set_max_pipeline_size/3
get_config_value/2 set_max_sessions/3
get_metrics/0 show_dest_status/0
get_metrics/2 show_dest_status/1
handle_call/3 show_dest_status/2
handle_cast/2 spawn_link_worker_process/1
handle_info/2 spawn_link_worker_process/2
init/1 spawn_worker_process/1
module_info/0 spawn_worker_process/2
module_info/1 start/0
rescan_config/0 start_link/0
rescan_config/1 stop/0
send_req/3 stop_worker_process/1
send_req/4 stream_close/1
send_req/5 stream_next/1
send_req/6 terminate/2
send_req_direct/4 trace_off/0
send_req_direct/5 trace_off/2
send_req_direct/6 trace_on/0
trace_on/2
ok
2>
Notice the -A beamPackages.ibrowse.env. That
is the key to this functionality.
Getting access to an environment often isn't enough to do real
development. Many times we need to create a
shell.nix file and do our development inside
of the environment specified by that file. This file looks a lot
like the packaging described above. The main difference is that
src points to the project root and we call the
package directly.
{ pkgs ? import <nixpkgs> {} }:
with pkgs;
let
f = { buildRebar3, ibrowse, jsx, erlware_commons }:
buildRebar3 {
name = "hex2nix";
version = "0.1.0";
src = ./.;
beamDeps = [ ibrowse jsx erlware_commons ];
};
drv = beamPackages.callPackage f {};
in
drv
We can leverage the support of the derivation, regardless of which build derivation is called, by calling the build commands themselves.
# =============================================================================
# Variables
# =============================================================================
NIX_TEMPLATES := "$(CURDIR)/nix-templates"
TARGET := "$(PREFIX)"
PROJECT_NAME := thorndyke
NIXPKGS=../nixpkgs
NIX_PATH=nixpkgs=$(NIXPKGS)
NIX_SHELL=nix-shell -I "$(NIX_PATH)" --pure
# =============================================================================
# Rules
# =============================================================================
.PHONY: all test clean repl shell build test analyze configure install \
test-nix-install publish plt analyze
all: build
guard-%:
@ if [ "${${*}}" == "" ]; then \
echo "Environment variable $* not set"; \
exit 1; \
fi
clean:
rm -rf _build
rm -rf .cache
repl:
$(NIX_SHELL) --run "iex -pa './_build/prod/lib/*/ebin'"
shell:
$(NIX_SHELL)
configure:
$(NIX_SHELL) --command 'eval "$$configurePhase"'
build: configure
$(NIX_SHELL) --command 'eval "$$buildPhase"'
install:
$(NIX_SHELL) --command 'eval "$$installPhase"'
test:
$(NIX_SHELL) --command 'mix test --no-start --no-deps-check'
plt:
$(NIX_SHELL) --run "mix dialyzer.plt --no-deps-check"
analyze: build plt
$(NIX_SHELL) --run "mix dialyzer --no-compile"
If you add the shell.nix as described and
use rebar as follows, things should simply work. Aside from
test, plt, and
analyze, the tasks work just fine for all of
the build derivations.
Updating the Hex packages requires the use of the
hex2nix tool. Given the path to the Erlang
modules (usually
pkgs/development/erlang-modules), it will
happily dump a file called
hex-packages.nix. That file will contain all
the packages that use a recognized build system in Hex. However,
it can't know whether or not all those packages are buildable.
To make life easier for our users, it makes good sense to go
ahead and attempt to build all those packages and remove the
ones that don't build. To do that, simply run the following
command in the root of your nixpkgs repository:
$ nix-build -A beamPackages
That will build every package in
beamPackages. Then you can go through and
manually remove the ones that fail. Hopefully, someone will
improve hex2nix in the future to automate
that.
Bower is a package manager
for web site front-end components. Bower packages (comprising
build artefacts and sometimes sources) are stored in
git repositories, typically on GitHub. The
package registry is run by the Bower team with package metadata
coming from the bower.json file within each
package.
The end result of running Bower is a
bower_components directory which can be included
in the web app's build process.
Bower can be run interactively, by installing
nodePackages.bower. More interestingly, the Bower
components can be declared in a Nix derivation, with the help of
nodePackages.bower2nix.
Suppose you have a bower.json with the following contents:
Example 9.1. bower.json
{
"name": "my-web-app",
"dependencies": {
"angular": "~1.5.0",
"bootstrap": "~3.3.6"
}
}
Running bower2nix will produce something like the following output:
{ fetchbower, buildEnv }:
buildEnv { name = "bower-env"; ignoreCollisions = true; paths = [
(fetchbower "angular" "1.5.3" "~1.5.0" "1749xb0firxdra4rzadm4q9x90v6pzkbd7xmcyjk6qfza09ykk9y")
(fetchbower "bootstrap" "3.3.6" "~3.3.6" "1vvqlpbfcy0k5pncfjaiskj3y6scwifxygfqnw393sjfxiviwmbv")
(fetchbower "jquery" "2.2.2" "1.9.1 - 2" "10sp5h98sqwk90y4k6hbdviwqzvzwqf47r3r51pakch5ii2y7js1")
]; }
Using the bower2nix command line arguments, the
output can be redirected to a file. A name like
bower-packages.nix would be fine.
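For example, assuming bower2nix accepts input and output file arguments:

```
$ bower2nix bower.json bower-packages.nix
```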
The resulting derivation is a union of all the downloaded Bower
packages (and their dependencies). To use it, they still need to be
linked together by Bower, which is where
buildBowerComponents is useful.
buildBowerComponents function
The function is implemented in
pkgs/development/bower-modules/generic/default.nix.
Example usage:
In Example 9.2, “buildBowerComponents”, the following arguments are of special significance to the function:
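A minimal sketch of calling the function; the generated and src arguments are the ones discussed above, and the same shape appears in Example 9.4:

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.buildBowerComponents {
  name = "my-web-app";               # name of the resulting derivation
  generated = ./bower-packages.nix;  # the file produced by bower2nix
  src = ./.;                         # project source containing bower.json
}
```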
buildBowerComponents will run Bower to link
together the output of bower2nix, resulting in a
bower_components directory which can be used.
Here is an example of a web frontend build process using gulp. You might use grunt, or anything else.
Example 9.3. Example build script (gulpfile.js)
var gulp = require('gulp');
gulp.task('default', [], function () {
gulp.start('build');
});
gulp.task('build', [], function () {
console.log("Just a dummy gulp build");
gulp
.src(["./bower_components/**/*"])
.pipe(gulp.dest("./gulpdist/"));
});
Example 9.4. Full example — default.nix
{ myWebApp ? { outPath = ./.; name = "myWebApp"; }
, pkgs ? import <nixpkgs> {}
}:
pkgs.stdenv.mkDerivation {
name = "my-web-app-frontend";
src = myWebApp;
buildInputs = [ pkgs.nodePackages.gulp ];
bowerComponents = pkgs.buildBowerComponents {
name = "my-web-app";
generated = ./bower-packages.nix;
src = myWebApp;
};
buildPhase = ''
cp --reflink=auto --no-preserve=mode -R $bowerComponents/bower_components .
export HOME=$PWD
${pkgs.nodePackages.gulp}/bin/gulp build
'';
installPhase = "mv gulpdist $out";
}
A few notes about Example 9.4, “Full example — default.nix”:
The result of buildBowerComponents is passed to the build through the bowerComponents attribute.
Whether to symlink or copy the bower_components directory depends on the build tool in use; here it is copied into the build directory.
gulp requires HOME to be set, hence the export HOME=$PWD.
The gulp invocation is the actual build command. Other tools could be used.
ENOCACHE errors from
buildBowerComponents
This means that Bower was looking for a package version which
doesn't exist in the generated
bower-packages.nix.
If bower.json has been updated, then run
bower2nix again.
It could also be a bug in bower2nix or
fetchbower. If possible, try reformulating
the version specification in bower.json.
Coq libraries should be installed in
$(out)/lib/coq/${coq.coq-version}/user-contrib/.
Such directories are automatically added to the
$COQPATH environment variable by the hook defined
in the Coq derivation.
Some libraries require OCaml and sometimes also Camlp5. The exact
versions that were used to build Coq are saved in the
coq.ocaml and coq.camlp5
attributes.
Here is a simple package example. It is a pure Coq library, thus it
only depends on Coq. Its makefile has been
generated using coq_makefile so we only have to
set the $COQLIB variable at install time.
{stdenv, fetchurl, coq}:
stdenv.mkDerivation {
src = fetchurl {
url = http://coq.inria.fr/pylons/contribs/files/Karatsuba/v8.4/Karatsuba.tar.gz;
sha256 = "0ymfpv4v49k4fm63nq6gcl1hbnnxrvjjp7yzc4973n49b853c5b1";
};
name = "coq-karatsuba";
buildInputs = [ coq ];
installFlags = "COQLIB=$(out)/lib/coq/${coq.coq-version}/";
}
The function buildGoPackage builds
standard Go programs.
Example 9.5. buildGoPackage
deis = buildGoPackage rec {
name = "deis-${version}";
version = "1.13.0";
goPackagePath = "github.com/deis/deis";
subPackages = [ "client" ];
src = fetchFromGitHub {
owner = "deis";
repo = "deis";
rev = "v${version}";
sha256 = "1qv9lxqx7m18029lj8cw3k7jngvxs4iciwrypdy0gd2nnghc68sw";
};
goDeps = ./deps.nix;
buildFlags = "--tags release";
}
Example 9.5, “buildGoPackage” is an example expression using buildGoPackage; the following arguments are of special significance to the function:
In this example only the client subpackage is built, as specified by subPackages.
The goDeps attribute can be imported from a separate
nix file that defines which Go libraries are needed and should
be included in GOPATH for buildPhase.
Example 9.6. deps.nix
[
  {
    goPackagePath = "gopkg.in/yaml.v2";
    fetch = {
      type = "git";
      url = "https://gopkg.in/yaml.v2";
      rev = "a83829b6f1293c91addabc89d0571c246397bbf4";
      sha256 = "1m4dsmk90sbi17571h6pld44zxz7jc4lrnl4f27dpd1l8g5xvjhh";
    };
  }
  {
    goPackagePath = "github.com/docopt/docopt-go";
    fetch = {
      type = "git";
      url = "https://github.com/docopt/docopt-go";
      rev = "784ddc588536785e7299f7272f39101f7faccc3f";
      sha256 = "0wwz48jl9fvl1iknvn9dqr4gfy1qs03gxaikrxxp9gry6773v3sj";
    };
  }
]
buildGoPackage produces multiple-output packages,
where bin includes program binaries. You can test-build a Go binary as follows:
$ nix-build -A deis.bin
or build all outputs with:
$ nix-build -A deis.all
The bin output will be installed by default with nix-env -i
or systemPackages.
You may use Go packages installed into the active Nix profiles by adding the following to your ~/.bashrc:
for p in $NIX_PROFILES; do
GOPATH="$p/share/go:$GOPATH"
done
To extract dependency information from a Go package in an automated way, use go2nix.
It can produce a complete derivation and goDeps file for Go programs.
Nixpkgs distributes build instructions for all Haskell packages registered on Hackage, but strangely enough normal Nix package lookups don’t seem to discover any of them, except for the default version of ghc, cabal-install, and stack:
$ nix-env -i alex
error: selector ‘alex’ matches no derivations
$ nix-env -qa ghc
ghc-7.10.2
The Haskell package set is not registered in the top-level
namespace because it is huge. If all Haskell
packages were visible to these commands, then name-based
search/install operations would be much slower than they are now.
We avoided that by keeping all Haskell-related packages in a
separate attribute set called haskellPackages,
which the following command will list:
$ nix-env -f "<nixpkgs>" -qaP -A haskellPackages
haskellPackages.a50          a50-0.5
haskellPackages.abacate      haskell-abacate-0.0.0.0
haskellPackages.abcBridge    haskell-abcBridge-0.12
haskellPackages.afv          afv-0.1.1
haskellPackages.alex         alex-3.1.4
haskellPackages.Allure       Allure-0.4.101.1
haskellPackages.alms         alms-0.6.7
[... some 8000 entries omitted ...]
To install any of those packages into your profile, refer to them by their attribute path (first column):
$ nix-env -f "<nixpkgs>" -iA haskellPackages.Allure ...
The attribute path of any Haskell package corresponds to the name
of that particular package on Hackage: the package
cabal-install has the attribute
haskellPackages.cabal-install, and so on.
(Actually, this convention causes trouble with packages like
3dmodels and 4Blocks,
because these names are invalid identifiers in the Nix language.
The issue of how to deal with these rare corner cases is currently
unresolved.)
Haskell packages whose Nix name (second column) begins with a
haskell- prefix are packages that provide a
library, whereas packages without that prefix provide just
executables. Libraries may provide executables too, though: the
package haskell-pandoc, for example, installs
both a library and an application. You can install and use Haskell
executables just like any other program in Nixpkgs, but using
Haskell libraries for development is a bit trickier and we’ll
address that subject in great detail in section
How to
create a development environment.
Attribute paths are deterministic inside of Nixpkgs, but the path
necessary to reach Nixpkgs varies from system to system. We dodged
that problem by giving nix-env an explicit
-f "<nixpkgs>" parameter, but
if you call nix-env without that flag, then
chances are the invocation fails:
$ nix-env -iA haskellPackages.cabal-install
error: attribute ‘haskellPackages’ in selection path
‘haskellPackages.cabal-install’ not found
On NixOS, for example, Nixpkgs does not exist in the top-level namespace by default. To figure out the proper attribute path, it’s easiest to query for the path of a well-known Nixpkgs package, i.e.:
$ nix-env -qaP coreutils
nixos.coreutils  coreutils-8.23
If your system responds like that (most NixOS installations will),
then the attribute path to haskellPackages is
nixos.haskellPackages. Thus, if you want to use
nix-env without giving an explicit
-f flag, then that’s the way to do it:
$ nix-env -qaP -A nixos.haskellPackages
$ nix-env -iA nixos.haskellPackages.cabal-install
Our current default compiler is GHC 7.10.x and the
haskellPackages set contains packages built
with that particular version. Nixpkgs contains the latest major
release of every GHC since 6.10.4, however, and there is a whole
family of package sets available that defines Hackage packages
built with each of those compilers, too:
$ nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc6123
$ nix-env -f "<nixpkgs>" -qaP -A haskell.packages.ghc763
The name haskellPackages is really just a
synonym for haskell.packages.ghc7102, because
we prefer that package set internally and recommend it to our
users as their default choice, but ultimately you are free to
compile your Haskell packages with any GHC version you please. The
following command displays the complete list of available
compilers:
$ nix-env -f "<nixpkgs>" -qaP -A haskell.compiler
haskell.compiler.ghc6104     ghc-6.10.4
haskell.compiler.ghc6123     ghc-6.12.3
haskell.compiler.ghc704      ghc-7.0.4
haskell.compiler.ghc722      ghc-7.2.2
haskell.compiler.ghc742      ghc-7.4.2
haskell.compiler.ghc763      ghc-7.6.3
haskell.compiler.ghc784      ghc-7.8.4
haskell.compiler.ghc7102     ghc-7.10.2
haskell.compiler.ghcHEAD     ghc-7.11.20150402
haskell.compiler.ghcNokinds  ghc-nokinds-7.11.20150704
haskell.compiler.ghcjs       ghcjs-0.1.0
haskell.compiler.jhc         jhc-0.8.2
haskell.compiler.uhc         uhc-1.1.9.0
We have no package sets for jhc or
uhc yet, unfortunately, but for every version
of GHC listed above, there exists a package set based on that
compiler. Also, the attributes
haskell.compiler.ghcXYC and
haskell.packages.ghcXYC.ghc are synonymous for
the sake of convenience.
A simple development environment consists of a Haskell compiler
and one or both of the tools cabal-install
and stack. We saw in section
How to install
Haskell packages how you can install those programs into
your user profile:
$ nix-env -f "<nixpkgs>" -iA haskellPackages.ghc haskellPackages.cabal-install
Instead of the default package set
haskellPackages, you can also use the more
precise name haskell.compiler.ghc7102, which
has the advantage that it refers to the same GHC version
regardless of what Nixpkgs considers “default” at
any given time.
Once you’ve made those tools available in
$PATH, it’s possible to build Hackage
packages the same way people without access to Nix do it all the
time:
$ cabal get lens-4.11 && cd lens-4.11
$ cabal install -j --dependencies-only
$ cabal configure
$ cabal build
If you enjoy working with Cabal sandboxes, then that’s entirely possible too: just execute the command
$ cabal sandbox init
before installing the required dependencies.
The nix-shell utility makes it easy to switch
to a different compiler version; just enter the Nix shell
environment with the command
$ nix-shell -p haskell.compiler.ghc784
to bring GHC 7.8.4 into $PATH. Alternatively,
you can use Stack instead of nix-shell
directly to select compiler versions and other build tools
per-project. It uses nix-shell under the hood
when Nix support is turned on. See
How
to build a Haskell project using Stack.
If you’re using cabal-install, re-running
cabal configure inside the spawned shell
switches your build to use that compiler instead. If you’re
working on a project that doesn’t depend on any additional
system libraries outside of GHC, then it’s even sufficient to
just run the cabal configure command inside
of the shell:
$ nix-shell -p haskell.compiler.ghc784 --command "cabal configure"
Afterwards, all other commands like
cabal build work just fine in any shell
environment, because the configure phase recorded the absolute
paths to all required tools like GHC in its build configuration
inside of the dist/ directory. Please note,
however, that nix-collect-garbage can break
such an environment because the Nix store paths created by
nix-shell aren’t “alive” anymore
once nix-shell has terminated. If you find
that your Haskell builds no longer work after garbage
collection, then you’ll have to re-run
cabal configure inside of a new
nix-shell environment.
GHC expects to find all installed libraries inside of its own
lib directory. This approach works fine on
traditional Unix systems, but it doesn’t work for Nix, because
GHC’s store path is immutable once it’s built. We cannot install
additional libraries into that location. As a consequence, our
copies of GHC don’t know any packages except their own core
libraries, like base,
containers, Cabal, etc.
We can register additional libraries to GHC, however, using a
special build function called
ghcWithPackages. That function expects one
argument: a function that maps from an attribute set of Haskell
packages to a list of packages, which determines the libraries
known to that particular version of GHC. For example, the Nix
expression ghcWithPackages (pkgs: [pkgs.mtl])
generates a copy of GHC that has the mtl
library registered in addition to its normal core packages:
$ nix-shell -p "haskellPackages.ghcWithPackages (pkgs: [pkgs.mtl])"
[nix-shell:~]$ ghc-pkg list mtl
/nix/store/zy79...-ghc-7.10.2/lib/ghc-7.10.2/package.conf.d:
mtl-2.2.1
This function allows users to define their own development
environment by means of an override. After adding the following
snippet to ~/.config/nixpkgs/config.nix,
{
packageOverrides = super: let self = super.pkgs; in
{
myHaskellEnv = self.haskell.packages.ghc7102.ghcWithPackages
(haskellPackages: with haskellPackages; [
# libraries
arrows async cgi criterion
# tools
cabal-install haskintex
]);
};
}
it’s possible to install that compiler with
nix-env -f "<nixpkgs>" -iA myHaskellEnv.
If you’d like to switch that development environment to a
different version of GHC, just replace the
ghc7102 bit in the previous definition with
the appropriate name. Of course, it’s also possible to define
any number of these development environments! (You can’t install
two of them into the same profile at the same time, though,
because that would result in file conflicts.)
The generated ghc program is a wrapper script
that re-directs the real GHC executable to use a new
lib directory — one that we specifically
constructed to contain all those packages the user requested:
$ cat $(type -p ghc)
#! /nix/store/xlxj...-bash-4.3-p33/bin/bash -e
export NIX_GHC=/nix/store/19sm...-ghc-7.10.2/bin/ghc
export NIX_GHCPKG=/nix/store/19sm...-ghc-7.10.2/bin/ghc-pkg
export NIX_GHC_DOCDIR=/nix/store/19sm...-ghc-7.10.2/share/doc/ghc/html
export NIX_GHC_LIBDIR=/nix/store/19sm...-ghc-7.10.2/lib/ghc-7.10.2
exec /nix/store/j50p...-ghc-7.10.2/bin/ghc "-B$NIX_GHC_LIBDIR" "$@"
The variables $NIX_GHC,
$NIX_GHCPKG, etc. point to the
new store path that
ghcWithPackages constructed specifically for
this environment. The last line of the wrapper script then
executes the real ghc, but passes the path to
the new lib directory using GHC’s
-B flag.
The purpose of those environment variables is to work around an
impurity in the popular
ghc-paths
library. That library promises to give its users access to GHC’s
installation paths. Only, the library can’t possibly know that
path when it’s compiled, because the path GHC considers its own
is determined only much later, when the user configures it
through ghcWithPackages. So we
patched
ghc-paths to return the paths found in those environment
variables at run-time rather than trying to guess them at
compile-time.
To make sure that mechanism works properly all the time, we
recommend that you set those variables to meaningful values in
your shell environment, too, i.e. by adding the following code
to your ~/.bashrc:
if type >/dev/null 2>&1 -p ghc; then
  eval "$(egrep ^export "$(type -p ghc)")"
fi
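To see why that snippet works, here is a small self-contained demonstration that mimics the generated wrapper with a mock file; the store paths below are invented for illustration only:

```shell
# Create a mock wrapper script resembling the one generated by
# ghcWithPackages (fake store paths, for illustration only).
mock=$(mktemp)
cat > "$mock" <<'EOF'
#! /bin/bash -e
export NIX_GHC=/nix/store/fake-ghc-7.10.2/bin/ghc
export NIX_GHC_LIBDIR=/nix/store/fake-ghc-7.10.2/lib/ghc-7.10.2
exec ghc "-B$NIX_GHC_LIBDIR" "$@"
EOF
# The ~/.bashrc snippet extracts only the "export" lines from the
# wrapper and evaluates them, making the variables visible in the
# current shell:
eval "$(egrep ^export "$mock")"
echo "$NIX_GHC_LIBDIR"     # prints /nix/store/fake-ghc-7.10.2/lib/ghc-7.10.2
rm -f "$mock"
```

The same mechanism works against the real wrapper found via `type -p ghc`, which is all the ~/.bashrc snippet above does.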
If you are certain that you’ll use only one GHC environment
which is located in your user profile, then you can use the
following code, too, which has the advantage that it doesn’t
contain any paths from the Nix store, i.e. those settings always
remain valid even if a nix-env -u operation
updates the GHC environment in your profile:
if [ -e ~/.nix-profile/bin/ghc ]; then
  export NIX_GHC="$HOME/.nix-profile/bin/ghc"
  export NIX_GHCPKG="$HOME/.nix-profile/bin/ghc-pkg"
  export NIX_GHC_DOCDIR="$HOME/.nix-profile/share/doc/ghc/html"
  export NIX_GHC_LIBDIR="$HOME/.nix-profile/lib/ghc-$($NIX_GHC --numeric-version)"
fi
If you plan to use your environment for interactive programming,
not just compiling random Haskell code, you might want to
replace ghcWithPackages in all the listings
above with ghcWithHoogle.
This environment generator not only produces an environment with
GHC and all the specified libraries, but also generates Hoogle
and Haddock indexes for all of those packages, and provides a
wrapper script around the hoogle binary that
uses them. A precise name for this thing would be
“ghcWithPackagesAndHoogleAndDocumentationIndexes”,
which is, regrettably, too long and scary.
For example, installing the following environment
{
packageOverrides = super: let self = super.pkgs; in
{
myHaskellEnv = self.haskellPackages.ghcWithHoogle
(haskellPackages: with haskellPackages; [
# libraries
arrows async cgi criterion
# tools
cabal-install haskintex
]);
};
}
allows one to browse a module documentation index
not
too dissimilar to this for all the specified packages
and their dependencies by directing a browser of choice to
~/.nix-profile/share/doc/hoogle/index.html
(or
/run/current-system/sw/share/doc/hoogle/index.html
in case you put it in
environment.systemPackages in NixOS).
After you’ve marveled enough at that, try adding the following to
your ~/.ghc/ghci.conf file:
:def hoogle \s -> return $ ":! hoogle search -cl --count=15 \"" ++ s ++ "\""
:def doc \s -> return $ ":! hoogle search -cl --info \"" ++ s ++ "\""
and test it by typing into ghci:
:hoogle a -> a
:doc a -> a
Be sure to note the links to haddock files in
the output. With any modern and properly configured terminal
emulator you can just click those links to navigate there.
Finally, you can run
hoogle server -p 8080
and navigate to http://localhost:8080/ for your own local
Hoogle.
Note, however, that Firefox and possibly other browsers disallow
navigation from http: to
file: URIs for security reasons, which might
be quite an inconvenience. See
this
page for workarounds.
Stack is a popular
build tool for Haskell projects. It has first-class support for
Nix. Stack can optionally use Nix to automatically select the
right version of GHC and other build tools to build, test and
execute apps in an existing project downloaded from somewhere on
the Internet. Pass the --nix flag to any
stack command to do so, e.g.
$ git clone --recursive http://github.com/yesodweb/wai
$ cd wai
$ stack --nix build
If you want stack to use Nix by default, you
can add a nix section to the
stack.yaml file, as explained in the
Stack
documentation. For example:
nix:
  enable: true
  packages: [pkgconfig zeromq zlib]
The example configuration snippet above tells Stack to create an
ad hoc environment for nix-shell as in the
below section, in which the pkgconfig,
zeromq and zlib packages
from Nixpkgs are available. All stack
commands will implicitly be executed inside this ad hoc
environment.
Some projects have more sophisticated needs. For example, some
ad hoc environments might need to expose Nixpkgs packages
compiled in a certain way, or with extra environment variables.
In these cases, you’ll need a shell field
instead of packages:
nix:
  enable: true
  shell-file: shell.nix
For more on how to write a shell.nix file see
the below section. You’ll need to express a derivation. Note
that Nixpkgs ships with a convenience wrapper function around
mkDerivation called
haskell.lib.buildStackProject to help you
create this derivation in exactly the way Stack expects. All of
the same inputs as mkDerivation can be
provided. For example, to build a Stack project that includes
packages that link against a version of the R library compiled
with special options turned on:
with (import <nixpkgs> { });
let R = pkgs.R.override { enableStrictBarrier = true; };
in
haskell.lib.buildStackProject {
name = "HaskellR";
buildInputs = [ R zeromq zlib ];
}
You can select a particular GHC version to compile with by
setting the ghc attribute as an argument to
buildStackProject. Better yet, let Stack
choose what GHC version it wants based on the snapshot specified
in stack.yaml (only works with Stack >=
1.1.3):
{nixpkgs ? import <nixpkgs> { }, ghc ? nixpkgs.ghc}:
with nixpkgs;
let R = pkgs.R.override { enableStrictBarrier = true; };
in
haskell.lib.buildStackProject {
name = "HaskellR";
buildInputs = [ R zeromq zlib ];
inherit ghc;
}
The easiest way to create an ad hoc development environment is
to run nix-shell with the appropriate GHC
environment given on the command-line:
nix-shell -p "haskellPackages.ghcWithPackages (pkgs: with pkgs; [mtl pandoc])"
For more sophisticated use-cases, however, it’s more convenient
to save the desired configuration in a file called
shell.nix that looks like this:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
let
inherit (nixpkgs) pkgs;
ghc = pkgs.haskell.packages.${compiler}.ghcWithPackages (ps: with ps; [
monad-par mtl
]);
in
pkgs.stdenv.mkDerivation {
name = "my-haskell-env-0";
buildInputs = [ ghc ];
shellHook = "eval $(egrep ^export ${ghc}/bin/ghc)";
}
Now run nix-shell — or even
nix-shell --pure — to enter a shell
environment that has the appropriate compiler in
$PATH. If you use --pure,
then add all other packages that your development environment
needs into the buildInputs attribute. If
you’d like to switch to a different compiler version, then pass
an appropriate compiler argument to the
expression, i.e.
nix-shell --argstr compiler ghc784.
If you need such an environment because you’d like to compile a
Hackage package outside of Nix — i.e. because you’re hacking on
the latest version from Git —, then the package set provides
suitable nix-shell environments for you already! Every Haskell
package has an env attribute that provides a
shell environment suitable for compiling that particular
package. If you’d like to hack the lens
library, for example, then you just have to check out the source
code and enter the appropriate environment:
$ cabal get lens-4.11 && cd lens-4.11
Downloading lens-4.11...
Unpacking to lens-4.11/
$ nix-shell "<nixpkgs>" -A haskellPackages.lens.env
[nix-shell:/tmp/lens-4.11]$
At this point, you can run cabal configure,
cabal build, and all the other development
commands. Note that you need cabal-install
installed in your $PATH already to use it
here — the nix-shell environment does not
provide it.
If your own Haskell packages have build instructions for Cabal,
then you can convert those automatically into build instructions
for Nix using the cabal2nix utility, which you
can install into your profile by running
nix-env -i cabal2nix.
For example, let’s assume that you’re working on a private
project called foo. To generate a Nix build
expression for it, change into the project’s top-level directory
and run the command:
$ cabal2nix . >foo.nix
Then write the following snippet into a file called
default.nix:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
nixpkgs.pkgs.haskell.packages.${compiler}.callPackage ./foo.nix { }
Finally, store the following code in a file called
shell.nix:
{ nixpkgs ? import <nixpkgs> {}, compiler ? "ghc7102" }:
(import ./default.nix { inherit nixpkgs compiler; }).env
At this point, you can run nix-build to have
Nix compile your project and install it into a Nix store path.
The local directory will contain a symlink called
result after nix-build
returns that points into that location. Of course, passing the
flag --argstr compiler ghc763 allows
switching the build to any version of GHC currently supported.
Furthermore, you can call nix-shell to enter
an interactive development environment in which you can use
cabal configure and
cabal build to develop your code. That
environment will automatically contain a proper GHC derivation
with all the required libraries registered as well as all the
system-level libraries your package might need.
If your package does not depend on any system-level libraries, then it’s sufficient to run
$ nix-shell --command "cabal configure"
once to set up your build. cabal-install
determines the absolute paths to all resources required for the
build and writes them into a config file in the
dist/ directory. Once that’s done, you can
run cabal build and any other command for
that project even outside of the nix-shell
environment. This feature is particularly nice for those of us
who like to edit their code with an IDE, like Emacs’
haskell-mode, because it’s not necessary to
start Emacs inside of nix-shell just to make it find out the
necessary settings for building the project;
cabal-install has already done that for us.
If you want to do some quick-and-dirty hacking and don’t want to
bother setting up a default.nix and
shell.nix file manually, then you can use the
--shell flag offered by
cabal2nix to have it generate a stand-alone
nix-shell environment for you. With that
feature, running
$ cabal2nix --shell . >shell.nix
$ nix-shell --command "cabal configure"
is usually enough to set up a build environment for any given
Haskell package. You can even use that generated file to run
nix-build, too:
$ nix-build shell.nix
If you have multiple private Haskell packages that depend on
each other, then you’ll have to register those packages in the
Nixpkgs set to make them visible for the dependency resolution
performed by callPackage. First of all,
change into each of your projects top-level directories and
generate a default.nix file with
cabal2nix:
$ cd ~/src/foo && cabal2nix . >default.nix
$ cd ~/src/bar && cabal2nix . >default.nix
Then edit your ~/.config/nixpkgs/config.nix
file to register those builds in the default Haskell package
set:
{
packageOverrides = super: let self = super.pkgs; in
{
haskellPackages = super.haskellPackages.override {
overrides = self: super: {
foo = self.callPackage ../src/foo {};
bar = self.callPackage ../src/bar {};
};
};
};
}
Once that’s accomplished,
nix-env -f "<nixpkgs>" -qA haskellPackages
will show your packages like any other package from Hackage, and
you can build them
$ nix-build "<nixpkgs>" -A haskellPackages.foo
or enter an interactive shell environment suitable for building them:
$ nix-shell "<nixpkgs>" -A haskellPackages.bar.env
Every Haskell package set takes a function called
overrides that you can use to manipulate the
package set as much as you please. One useful application of this
feature is to replace the default
mkDerivation function with one that enables
library profiling for all packages. To accomplish that, add the
following snippet to your
~/.config/nixpkgs/config.nix file:
{
packageOverrides = super: let self = super.pkgs; in
{
profiledHaskellPackages = self.haskellPackages.override {
overrides = self: super: {
mkDerivation = args: super.mkDerivation (args // {
enableLibraryProfiling = true;
});
};
};
};
}
Then, replace instances of haskellPackages in
the cabal2nix-generated
default.nix or shell.nix
files with profiledHaskellPackages.
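If you only need profiling libraries for a handful of packages rather than the whole set, the helpers in haskell.lib can be applied per package instead. The following sketch assumes an enableLibraryProfiling helper in haskell.lib; check your Nixpkgs checkout for its availability:

```nix
{
  packageOverrides = pkgs: {
    haskellPackages = pkgs.haskellPackages.override {
      overrides = hsSelf: hsSuper: {
        # Build only lens (and its reverse dependencies, when they are
        # rebuilt) with library profiling enabled.
        lens = pkgs.haskell.lib.enableLibraryProfiling hsSuper.lens;
      };
    };
  };
}
```

This avoids recompiling the entire package set, at the cost of having to list each profiled package explicitly.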
Nixpkgs provides the latest version of
ghc-events,
which is 0.4.4.0 at the time of this writing. This is fine for
users of GHC 7.10.x, but GHC 7.8.4 cannot compile that binary.
Now, one way to solve that problem is to register an older
version of ghc-events in the 7.8.x-specific
package set. The first step is to generate Nix build
instructions with cabal2nix:
$ cabal2nix cabal://ghc-events-0.4.3.0 >~/.nixpkgs/ghc-events-0.4.3.0.nix
Then add the override in
~/.config/nixpkgs/config.nix:
{
packageOverrides = super: let self = super.pkgs; in
{
haskell = super.haskell // {
packages = super.haskell.packages // {
ghc784 = super.haskell.packages.ghc784.override {
overrides = self: super: {
ghc-events = self.callPackage ./ghc-events-0.4.3.0.nix {};
};
};
};
};
};
}
This code is a little crazy, no doubt, but it’s necessary because the intuitive version
haskell.packages.ghc784 = super.haskell.packages.ghc784.override {
overrides = self: super: {
ghc-events = self.callPackage ./ghc-events-0.4.3.0.nix {};
};
};
doesn’t do what we want it to: that code replaces the
haskell package set in Nixpkgs with one that
contains only one entry, packages, which in
turn contains only one entry, ghc784. This override
loses the haskell.compiler set, and it loses
the haskell.packages.ghcXYZ sets for all
compilers but GHC 7.8.4. To avoid that problem, we have to
perform the convoluted little dance from above, iterating over
each step in the hierarchy.
Once it’s accomplished, however, we can install a variant of
ghc-events that’s compiled with GHC 7.8.4:
nix-env -f "<nixpkgs>" -iA haskell.packages.ghc784.ghc-events
Unfortunately, it turns out that this build fails again while
executing the test suite! Apparently, the release archive on
Hackage is missing some data files that the test suite requires,
so we have to disable it. We accomplish that by re-generating the Nix
expression with the --no-check flag:
$ cabal2nix --no-check cabal://ghc-events-0.4.3.0 >~/.nixpkgs/ghc-events-0.4.3.0.nix
Now the build succeeds.
Of course, in the concrete example of
ghc-events this whole exercise is not an
ideal solution, because ghc-events can
analyze the output emitted by any version of GHC later than 6.12
regardless of the compiler version that was used to build the
ghc-events executable, so strictly speaking
there’s no reason to prefer one built with GHC 7.8.x in the
first place. However, for users who cannot use GHC 7.10.x at all
for some reason, the approach of downgrading to an older version
might be useful.
GHC and distributed build farms don’t get along well:
https://ghc.haskell.org/trac/ghc/ticket/4012
When you see an error like this one
package foo-0.7.1.0 is broken due to missing package text-1.2.0.4-98506efb1b9ada233bb5c2b2db516d91
then you have to download and re-install foo
and all its dependents from scratch:
# nix-store -q --referrers /nix/store/*-haskell-text-1.2.0.4 \
    | xargs -L 1 nix-store --repair-path
If you’re using additional Hydra servers other than
hydra.nixos.org, then it might be necessary
to purge the local caches that store data from those machines to
disable these binary channels for the duration of the previous
command, i.e. by running:
rm /nix/var/nix/binary-cache-v3.sqlite
rm /nix/var/nix/manifests/*
rm /nix/var/nix/channel-cache/*
Open a shell with haste-compiler and
haste-cabal-install (you don’t actually need
node, but it can be useful to test stuff):
$ nix-shell -p "haskellPackages.ghcWithPackages (self: with self; [haste-cabal-install haste-compiler])" -p nodejs
You may not need the following step but if
haste-boot fails to compile all the packages
it needs, this might do the trick
$ haste-cabal update
haste-boot builds a set of core libraries so
that they can be used from programs transpiled to JavaScript:
$ haste-boot
Transpile and run a “Hello world” program:
$ echo 'module Main where main = putStrLn "Hello world"' > hello-world.hs
$ hastec --onexec hello-world.hs
$ node hello-world.js
Hello world
Users of GHC on Darwin have occasionally reported that builds fail, because the compiler complains about a missing include file:
fatal error: 'math.h' file not found
The issue has been discussed at length in ticket 6390, and so far no good solution has been proposed. As a work-around, users who run into this problem can configure the environment variables
export NIX_CFLAGS_COMPILE="-idirafter /usr/include"
export NIX_CFLAGS_LINK="-L/usr/lib"
in their ~/.bashrc file to avoid the compiler
error.
--  While building package zlib-0.5.4.2 using:
      runhaskell -package=Cabal-1.22.4.0 -clear-package-db [... lots of flags ...]
    Process exited with code: ExitFailure 1
    Logs have been written to: /home/foo/src/stack-ide/.stack-work/logs/zlib-0.5.4.2.log

    Configuring zlib-0.5.4.2...
    Setup.hs: Missing dependency on a foreign library:
    * Missing (or bad) header file: zlib.h
    This problem can usually be solved by installing the system package that
    provides this library (you may need the "-dev" version). If the library is
    already installed but in a non-standard location then you can use the flags
    --extra-include-dirs= and --extra-lib-dirs= to specify where it is.
    If the header file does exist, it may contain errors that are caught by the
    C compiler at the preprocessing stage. In this case you can re-run configure
    with the verbosity flag -v3 to see the error messages.
When you run the build inside of the nix-shell environment, the system is configured to find libz.so without any special flags – the compiler and linker “just know” how to find it. Consequently, Cabal won’t record any search paths for libz.so in the package description, which means that the package works fine inside of nix-shell, but once you leave the shell the shared object can no longer be found. That issue is by no means specific to Stack: you’ll have that problem with any other Haskell package that’s built inside of nix-shell but run outside of that environment.
You can remedy this issue in several ways. The easiest is to add
a nix section to the
stack.yaml like the following:
nix:
  enable: true
  packages: [ zlib ]
Stack’s Nix support knows to add
${zlib.out}/lib and
${zlib.dev}/include as the
--extra-lib-dirs and
--extra-include-dirs options, respectively.
Alternatively, you can achieve the same effect by hand. First of
all, run
$ nix-build --no-out-link "<nixpkgs>" -A zlib
/nix/store/alsvwzkiw4b7ip38l4nlfjijdvg3fvzn-zlib-1.2.8
to find out the store path of the system’s zlib library. Now, you can
add that path (plus a “/lib” suffix) to your $LD_LIBRARY_PATH environment variable to make sure your system linker finds libz.so automatically. It’s not a pretty solution, but it will work.
As a variant of (1), you can also install any number of system libraries into your user’s profile (or some other profile) and point $LD_LIBRARY_PATH to that profile instead, so that you don’t have to list dozens of those store paths all over the place.
The solution I prefer is to call stack with an appropriate --extra-lib-dirs flag like so:
$ stack --extra-lib-dirs=/nix/store/alsvwzkiw4b7ip38l4nlfjijdvg3fvzn-zlib-1.2.8/lib build
Typically, you’ll need --extra-include-dirs as well. It’s possible to add those flags to the project’s stack.yaml or your user’s global ~/.stack/global/stack.yaml file so that you don’t have to specify them manually every time. But again, you’re likely better off using Stack’s Nix support instead.
The same thing applies to cabal configure, of
course, if you’re building with cabal-install
instead of Stack.
There are two levels of static linking. The first option is to
configure the build with the Cabal flag
--disable-executable-dynamic. In Nix
expressions, this can be achieved by setting the attribute:
enableSharedExecutables = false;
That gives you a binary with statically linked Haskell libraries and dynamically linked system libraries.
To link both Haskell libraries and system libraries statically,
the additional flags
--ghc-option=-optl=-static --ghc-option=-optl=-pthread
need to be used. In Nix, this is accomplished with:
configureFlags = [ "--ghc-option=-optl=-static" "--ghc-option=-optl=-pthread" ];
It’s important to realize, however, that most system libraries in Nix are built as shared libraries only, i.e. there is just no static library available that Cabal could link!
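Putting both settings together, a fully static build could be expressed roughly as follows. This is a sketch using the overrideCabal helper from haskell.lib, applied to a hypothetical example package; whether static versions of the required system libraries exist is a separate question, as noted above:

```nix
with import <nixpkgs> { };

# Sketch: build the (hypothetical) package "hello" with statically
# linked Haskell libraries *and* statically linked system libraries.
haskell.lib.overrideCabal haskellPackages.hello (drv: {
  enableSharedExecutables = false;
  configureFlags = (drv.configureFlags or []) ++ [
    "--ghc-option=-optl=-static"
    "--ghc-option=-optl=-pthread"
  ];
})
```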
By default GHC implements the Integer type using the GNU Multiple Precision Arithmetic (GMP) library. The implementation can be found in the integer-gmp package.
A potential problem with this is that GMP is licensed under the GNU Lesser General Public License (LGPL), a kind of “copyleft” license. According to the terms of the LGPL, paragraph 5, you may distribute a program that is designed to be compiled and dynamically linked with the library under the terms of your choice (i.e., commercially) but if your program incorporates portions of the library, if it is linked statically, then your program is a “derivative”–a “work based on the library”–and according to paragraph 2, section c, you “must cause the whole of the work to be licensed” under the terms of the LGPL (including for free).
The LGPL licensing for GMP is a problem for the overall licensing of binary programs compiled with GHC because most distributions (and builds) of GHC use static libraries. (Dynamic libraries are currently distributed only for OS X.) The LGPL licensing situation may be worse: even though The Glasgow Haskell Compiler License is essentially a “free software” license (BSD3), according to paragraph 2 of the LGPL, GHC must be distributed under the terms of the LGPL!
To work around these problems GHC can be built with a slower but LGPL-free alternative implementation of Integer called integer-simple.
To get a GHC compiler built with
integer-simple instead of
integer-gmp use the attribute:
pkgs.haskell.compiler.integer-simple."${ghcVersion}".
For example:
$ nix-build -E '(import <nixpkgs> {}).pkgs.haskell.compiler.integer-simple.ghc802'
...
$ result/bin/ghc-pkg list | grep integer
integer-simple-0.1.1.1
The following command displays the complete list of GHC
compilers built with integer-simple:
$ nix-env -f "<nixpkgs>" -qaP -A haskell.compiler.integer-simple
haskell.compiler.integer-simple.ghc7102   ghc-7.10.2
haskell.compiler.integer-simple.ghc7103   ghc-7.10.3
haskell.compiler.integer-simple.ghc722    ghc-7.2.2
haskell.compiler.integer-simple.ghc742    ghc-7.4.2
haskell.compiler.integer-simple.ghc763    ghc-7.6.3
haskell.compiler.integer-simple.ghc783    ghc-7.8.3
haskell.compiler.integer-simple.ghc784    ghc-7.8.4
haskell.compiler.integer-simple.ghc801    ghc-8.0.1
haskell.compiler.integer-simple.ghc802    ghc-8.0.2
haskell.compiler.integer-simple.ghcHEAD   ghc-8.1.20170106
To get a package set supporting
integer-simple use the attribute:
pkgs.haskell.packages.integer-simple."${ghcVersion}".
For example, use the following to get the
scientific package built with
integer-simple:
$ nix-build -A pkgs.haskell.packages.integer-simple.ghc802.scientific
The YouTube video Nix Loves Haskell provides an introduction to Haskell NG aimed at beginners. The slides are available at http://cryp.to/nixos-meetup-3-slides.pdf and also – in a form ready for cut & paste – at https://github.com/NixOS/cabal2nix/blob/master/doc/nixos-meetup-3-slides.md.
Another YouTube video is Escaping Cabal Hell with Nix, which discusses the subject of Haskell development with Nix but also provides a basic introduction to Nix as well, i.e. it’s suitable for viewers with almost no prior Nix experience.
Oliver Charles wrote a very nice Tutorial how to develop Haskell packages with Nix.
The Journey into the Haskell NG infrastructure series of postings describe the new Haskell infrastructure in great detail:
Part 1 explains the differences between the old and the new code and gives instructions how to migrate to the new setup.
Part 2 looks in-depth at how to tweak and configure your setup by means of overrides.
Part 3 describes the infrastructure that keeps the Haskell package set in Nixpkgs up-to-date.
This directory contains build rules for idris packages. In addition,
it contains several functions to build and compose those packages.
Everything is exposed to the user via the
idrisPackages attribute.
This is like the normal nixpkgs callPackage function, specialized to idris packages.
This is a list of all of the libraries that come packaged with Idris itself.
A function to build an idris package. Its sole argument is a set
like you might pass to stdenv.mkDerivation,
except build-idris-package sets several
attributes for you. See build-idris-package.nix
for details.
A version of build-idris-package specialized to
builtin libraries. Mostly for internal use.
Bundle idris together with a list of packages. Because idris
currently only supports a single directory in its library path,
you must include all desired libraries here, including
prelude and base.
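For instance, a complete Idris installation bundling a few libraries might be declared as follows. The with-packages attribute name is an assumption here; consult the idrisPackages set in your Nixpkgs checkout for the exact name:

```nix
with import <nixpkgs> { };

# Hypothetical sketch: bundle idris together with prelude, base and
# contrib in one derivation, since idris supports only a single
# directory in its library path.
idrisPackages.with-packages (with idrisPackages; [ prelude base contrib ])
```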
Ant-based Java packages are typically built from source as follows:
stdenv.mkDerivation {
name = "...";
src = fetchurl { ... };
buildInputs = [ jdk ant ];
buildPhase = "ant";
}
Note that jdk is an alias for the OpenJDK.
JAR files that are intended to be used by other packages should
be installed in $out/share/java. The OpenJDK has
a stdenv setup hook that adds any JARs in the
share/java directories of the build inputs to the
CLASSPATH environment variable. For instance, if the
package libfoo installs a JAR named
foo.jar in its share/java
directory, and another package declares the attribute
buildInputs = [ jdk libfoo ];
then CLASSPATH will be set to
/nix/store/...-libfoo/share/java/foo.jar.
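A library package following that convention might look roughly like this; the JAR name foo.jar and the build output path are illustrative assumptions:

```nix
{ stdenv, fetchurl, jdk, ant }:

stdenv.mkDerivation {
  name = "libfoo-1.0";
  src = fetchurl { /* url and sha256 elided */ };
  buildInputs = [ jdk ant ];
  buildPhase = "ant";
  # Install the JAR into $out/share/java so that the OpenJDK setup
  # hook adds it to CLASSPATH for packages that depend on libfoo.
  installPhase = ''
    mkdir -p $out/share/java
    cp build/foo.jar $out/share/java/
  '';
}
```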
Private JARs
should be installed in a location like
$out/share/.package-name
If your Java package provides a program, you need to generate a
wrapper script to run it using the OpenJRE. You can use
makeWrapper for this:
buildInputs = [ makeWrapper ];
installPhase =
''
mkdir -p $out/bin
makeWrapper ${jre}/bin/java $out/bin/foo \
--add-flags "-cp $out/share/java/foo.jar org.foo.Main"
'';
Note the use of jre, which is the part of the
OpenJDK package that contains the Java Runtime Environment. By using
${jre}/bin/java instead of
${jdk}/bin/java, you prevent your package from
depending on the JDK at runtime.
It is possible to use a different Java compiler than javac from the OpenJDK. For instance, to use the Eclipse Java Compiler:
buildInputs = [ jre ant ecj ];
(Note that here you don’t need the full JDK as an input, but just the JRE.) The ECJ has a stdenv setup hook that sets some environment variables to cause Ant to use ECJ, but this doesn’t work with all Ant files. Similarly, you can use the GNU Java Compiler:
buildInputs = [ gcj ant ];
Here, Ant will automatically use gij (the GNU Java Runtime) instead of the OpenJRE.
Lua packages are built by the buildLuaPackage function. This function is
implemented
in
pkgs/development/lua-modules/generic/default.nix
and works similarly to buildPerlPackage. (See
Section 9.10, “Perl” for details.)
Lua packages are defined
in pkgs/top-level/lua-packages.nix.
Most of them are simple. For example:
fileSystem = buildLuaPackage {
name = "filesystem-1.6.2";
src = fetchurl {
url = "https://github.com/keplerproject/luafilesystem/archive/v1_6_2.tar.gz";
sha256 = "1n8qdwa20ypbrny99vhkmx8q04zd2jjycdb5196xdhgvqzk10abz";
};
meta = {
homepage = "https://github.com/keplerproject/luafilesystem";
hydraPlatforms = stdenv.lib.platforms.linux;
maintainers = with maintainers; [ flosse ];
};
};
More complicated packages, however, should be placed in a separate file in
pkgs/development/lua-modules.
Lua packages accept an additional parameter, disabled, which defines
the condition under which the package is excluded from luaPackages. For example, if a package has
disabled set to lua.luaversion != "5.1",
it will not be included in any luaPackages set except lua51Packages, so it is
only built for Lua 5.1.
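For instance, a sketch of a package restricted to Lua 5.1 (the package itself is hypothetical):

```nix
luaExample = buildLuaPackage {
  name = "example-1.0";
  # Placeholder source; use a real fetchurl call in practice.
  src = fetchurl { ... };
  # Exclude this package from every luaPackages set except lua51Packages.
  disabled = lua.luaversion != "5.1";
};
```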
To add a package from NPM to nixpkgs:
Install node2nix:
nix-env -f '<nixpkgs>' -iA node2nix.
Modify
pkgs/development/node-packages/node-packages.json,
to add, update, or remove package entries.
Run the script:
cd pkgs/development/node-packages && sh generate.sh.
Build your new package to test your changes:
cd /path/to/nixpkgs && nix-build -A nodePackages.<new-or-updated-package>.
To build against a specific node.js version (e.g. 5.x):
nix-build -A nodePackages_5_x.<new-or-updated-package>
Add, commit, and share your changes!
Nixpkgs provides a function buildPerlPackage,
a generic package builder function for any Perl package that has a
standard Makefile.PL. It’s implemented in pkgs/development/perl-modules/generic.
Perl packages from CPAN are defined in pkgs/top-level/perl-packages.nix,
rather than pkgs/all-packages.nix. Most Perl
packages are so straightforward to build that they are defined there
directly, rather than having a separate function for each package
called from perl-packages.nix. However, more
complicated packages should be put in a separate file, typically in
pkgs/development/perl-modules. Here is an
example of the former:
ClassC3 = buildPerlPackage rec {
name = "Class-C3-0.21";
src = fetchurl {
url = "mirror://cpan/authors/id/F/FL/FLORA/${name}.tar.gz";
sha256 = "1bl8z095y4js66pwxnm7s853pi9czala4sqc743fdlnk27kq94gz";
};
};
Note the use of mirror://cpan/, and the
${name} in the URL definition to ensure that the
name attribute is consistent with the source that we’re actually
downloading. Perl packages are made available in
all-packages.nix through the variable
perlPackages. For instance, if you have a package
that needs ClassC3, you would typically write
foo = import ../path/to/foo.nix {
inherit stdenv fetchurl ...;
inherit (perlPackages) ClassC3;
};
in all-packages.nix. You can test building a
Perl package as follows:
$ nix-build -A perlPackages.ClassC3
buildPerlPackage adds perl- to
the start of the name attribute, so the package above is actually
called perl-Class-C3-0.21. So to install it, you
can say:
$ nix-env -i perl-Class-C3
(Of course you can also install using the attribute name:
nix-env -i -A perlPackages.ClassC3.)
So what does buildPerlPackage do? It does
the following:
In the configure phase, it calls perl
Makefile.PL to generate a Makefile. You can set the
variable makeMakerFlags to pass flags to
Makefile.PL.
It adds the contents of the PERL5LIB
environment variable to the #! .../bin/perl line of
Perl scripts as -I
flags. This ensures that a script can find its
dependencies.
In the fixup phase, it writes the propagated build
inputs (propagatedBuildInputs) to the file
$out/nix-support/propagated-user-env-packages.
nix-env recursively installs all packages listed
in this file when you install a package that has it. This ensures
that a Perl package can find its dependencies.
buildPerlPackage is built on top of
stdenv, so everything can be customised in the
usual way. For instance, the BerkeleyDB module has
a preConfigure hook to generate a configuration
file used by Makefile.PL:
{ buildPerlPackage, fetchurl, db }:
buildPerlPackage rec {
name = "BerkeleyDB-0.36";
src = fetchurl {
url = "mirror://cpan/authors/id/P/PM/PMQS/${name}.tar.gz";
sha256 = "07xf50riarb60l1h6m2dqmql8q5dij619712fsgw7ach04d8g3z1";
};
preConfigure = ''
echo "LIB = ${db}/lib" > config.in
echo "INCLUDE = ${db}/include" >> config.in
'';
}
Dependencies on other Perl packages can be specified in the
buildInputs and
propagatedBuildInputs attributes. If something is
exclusively a build-time dependency, use
buildInputs; if it’s (also) a runtime dependency,
use propagatedBuildInputs. For instance, this
builds a Perl module that has runtime dependencies on a bunch of other
modules:
ClassC3Componentised = buildPerlPackage rec {
name = "Class-C3-Componentised-1.0004";
src = fetchurl {
url = "mirror://cpan/authors/id/A/AS/ASH/${name}.tar.gz";
sha256 = "0xql73jkcdbq4q9m0b0rnca6nrlvf5hyzy8is0crdk65bynvs8q1";
};
propagatedBuildInputs = [
ClassC3 ClassInspector TestException MROCompat
];
};
Nix expressions for Perl packages can be generated (almost) automatically from CPAN. This is done by the program nix-generate-from-cpan, which can be installed as follows:
$ nix-env -i nix-generate-from-cpan
This program takes a Perl module name, looks it up on CPAN, fetches and unpacks the corresponding package, and prints a Nix expression on standard output. For example:
$ nix-generate-from-cpan XML::Simple
XMLSimple = buildPerlPackage rec {
name = "XML-Simple-2.22";
src = fetchurl {
url = "mirror://cpan/authors/id/G/GR/GRANTM/${name}.tar.gz";
sha256 = "b9450ef22ea9644ae5d6ada086dc4300fa105be050a2030ebd4efd28c198eb49";
};
propagatedBuildInputs = [ XMLNamespaceSupport XMLSAX XMLSAXExpat ];
meta = {
description = "An API for simple XML files";
license = with stdenv.lib.licenses; [ artistic1 gpl1Plus ];
};
};
The output can be pasted into
pkgs/top-level/perl-packages.nix or wherever else
you need it.
Several versions of Python are available on Nix, as well as a large number of packages. The default interpreter is CPython 2.7.
It is important to make a distinction between Python packages that are used as libraries, and applications that are written in Python.
Applications on Nix are installed typically into your user
profile imperatively using nix-env -i, and
on NixOS declaratively by adding the package name to
environment.systemPackages in
/etc/nixos/configuration.nix. Dependencies
such as libraries are automatically installed and should not
be installed explicitly.
The same goes for Python applications and libraries. Python applications can be installed in your profile, but Python libraries you would like to use to develop cannot. If you do install libraries in your profile, then you will end up with import errors.
The recommended method for creating Python environments for
development is with nix-shell. Executing
$ nix-shell -p python35Packages.numpy python35Packages.toolz
opens a Nix shell which has available the requested packages and dependencies. Now you can launch the Python interpreter (which is itself a dependency)
[nix-shell:~]$ python3
If the packages were not available yet in the Nix store, Nix
would download or build them automatically. A convenient
option with nix-shell is the
--run option, with which you can execute a
command in the nix-shell. Let’s say we want
the above environment and directly run the Python interpreter
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3"
This way you can use the --run option also
to directly run a script
$ nix-shell -p python35Packages.numpy python35Packages.toolz --run "python3 myscript.py"
In fact, for this specific use case there is a more convenient
method. You can add a
shebang
to your script specifying which dependencies Nix shell needs.
With the following shebang, you can use
nix-shell myscript.py and it will make
available all dependencies and run the script in the
python3 shell.
#! /usr/bin/env nix-shell
#! nix-shell -i python3 -p python3Packages.numpy
import numpy
print(numpy.__version__)
Likely you do not want to type your dependencies each and
every time. What you can do is write a simple Nix expression
which sets up an environment for you, requiring you only to
type nix-shell. Say we want to have Python
3.5, numpy and toolz,
like before, in an environment. With a
shell.nix file containing
with import <nixpkgs> {};
(pkgs.python35.withPackages (ps: [ps.numpy ps.toolz])).env
executing nix-shell gives you again a Nix
shell from which you can run Python.
What’s happening here?
We begin by importing the Nix Packages collection.
import <nixpkgs> imports the
<nixpkgs> function,
{} calls it, and the
with statement brings all attributes of
nixpkgs into the local scope. Therefore
we can now use pkgs.
Then we create a Python 3.5 environment with the
withPackages function.
The withPackages function expects us to
provide a function as an argument that takes the set of
all python packages and returns a list of packages to
include in the environment. Here, we select the packages
numpy and toolz from
the package set.
And finally, for interactive use we return the
environment by using the env attribute.
Now that you know how to get a working Python environment on Nix, it is time to go forward and start actually developing with Python. We will first have a look at how Python packages are packaged on Nix. Then, we will look how you can use development mode with your code.
On Nix all packages are built by functions. The main function
in Nix for building Python packages is
buildPythonPackage.
Let’s see how we would build the toolz
package. According to
python-packages.nix,
toolz is built using
{ # ...
toolz = buildPythonPackage rec {
name = "toolz-${version}";
version = "0.7.4";
src = pkgs.fetchurl {
url = "mirror://pypi/t/toolz/toolz-${version}.tar.gz";
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
};
meta = {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
};
};
}
What happens here? The function
buildPythonPackage is called and as
argument it accepts a set. In this case the set is a recursive
set
(rec).
One of the arguments is the name of the package, which
consists of a basename (generally following the name on PyPI)
and a version. Another argument, src,
specifies the source, which in this case is fetched from a
URL. fetchurl not only downloads the target
file, but also validates its hash. Furthermore, we specify
some (optional)
meta
information.
The output of the function is a derivation, which is an
attribute with the name toolz of the set
pythonPackages. Actually, sets are created
for all interpreter versions, so e.g.
python27Packages,
python35Packages and
pypyPackages.
The above example works when you’re directly working on
pkgs/top-level/python-packages.nix in the
Nixpkgs repository. Often though, you will want to test a Nix
expression outside of the Nixpkgs tree. If you create a
shell.nix file with the following contents
with import <nixpkgs> {};
pkgs.python35Packages.buildPythonPackage rec {
name = "toolz-${version}";
version = "0.8.0";
src = pkgs.fetchurl {
url = "mirror://pypi/t/toolz/toolz-${version}.tar.gz";
sha256 = "e8451af61face57b7c5d09e71c0d27b8005f001ead56e9fdf470417e5cc6d479";
};
doCheck = false;
meta = {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
};
}
and then execute nix-shell will result in
an environment in which you can use Python 3.5 and the
toolz package. As you can see we had to
explicitly mention for which Python version we want to build a
package.
The above example considered only a single package. Generally
you will want to use multiple packages. If we create a
shell.nix file with the following contents
with import <nixpkgs> {};
( let
toolz = pkgs.python35Packages.buildPythonPackage rec {
name = "toolz-${version}";
version = "0.8.0";
src = pkgs.fetchurl {
url = "mirror://pypi/t/toolz/toolz-${version}.tar.gz";
sha256 = "e8451af61face57b7c5d09e71c0d27b8005f001ead56e9fdf470417e5cc6d479";
};
doCheck = false;
meta = {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
};
};
in pkgs.python35.withPackages (ps: [ps.numpy toolz])
).env
and again execute nix-shell, then we get a
Python 3.5 environment with our locally defined package as
well as numpy, which is built according to
the definition in Nixpkgs. What did we do here? Well, we took
the Nix expression that we used earlier to build a Python
environment, and said that we wanted to include our own
version of toolz. To introduce our own
package in the scope of withPackages we
used a
let
expression. You can see that we used
ps.numpy to select numpy from the nixpkgs
package set (ps). But we do not take
toolz from the nixpkgs package set this
time. Instead, toolz will resolve to our
local definition that we introduced with
let.
Our example, toolz, doesn’t have any
dependencies on other Python packages or system libraries.
According to the manual, buildPythonPackage
uses the arguments buildInputs and
propagatedBuildInputs to specify
dependencies. If something is exclusively a build-time
dependency, then the dependency should be included as a
buildInput, but if it is (also) a runtime
dependency, then it should be added to
propagatedBuildInputs. Test dependencies are
considered build-time dependencies.
The following example shows which arguments are given to
buildPythonPackage in order to build
datashape.
{ # ...
datashape = buildPythonPackage rec {
name = "datashape-${version}";
version = "0.4.7";
src = pkgs.fetchurl {
url = "mirror://pypi/D/DataShape/${name}.tar.gz";
sha256 = "14b2ef766d4c9652ab813182e866f493475e65e558bed0822e38bf07bba1a278";
};
buildInputs = with self; [ pytest ];
propagatedBuildInputs = with self; [ numpy multipledispatch dateutil ];
meta = {
homepage = https://github.com/ContinuumIO/datashape;
description = "A data description language";
license = licenses.bsd2;
maintainers = with maintainers; [ fridh ];
};
};
}
We can see several runtime dependencies,
numpy, multipledispatch,
and dateutil. Furthermore, we have one
buildInput, i.e. pytest.
pytest is a test runner and is only used
during the checkPhase and is therefore not
added to propagatedBuildInputs.
In the previous case we had only dependencies on other Python
packages to consider. Occasionally you also have system
libraries to consider. E.g., lxml provides
Python bindings to libxml2 and
libxslt. These libraries are only required
when building the bindings and are therefore added as
buildInputs.
{ # ...
lxml = buildPythonPackage rec {
name = "lxml-3.4.4";
src = pkgs.fetchurl {
url = "mirror://pypi/l/lxml/${name}.tar.gz";
sha256 = "16a0fa97hym9ysdk3rmqz32xdjqmy4w34ld3rm3jf5viqjx65lxk";
};
buildInputs = with self; [ pkgs.libxml2 pkgs.libxslt ];
meta = {
description = "Pythonic binding for the libxml2 and libxslt libraries";
homepage = http://lxml.de;
license = licenses.bsd3;
maintainers = with maintainers; [ sjourdois ];
};
};
}
In this example lxml and Nix are able to work
out exactly where the relevant files of the dependencies are.
This is not always the case.
The example below shows bindings to The Fastest Fourier
Transform in the West, commonly known as FFTW. On Nix we have
separate packages of FFTW for the different types of floats
("single",
"double",
"long-double"). The bindings need
all three types, and therefore we add all three as
buildInputs. The bindings don’t expect to
find each of them in a different folder, and therefore we have
to set LDFLAGS and CFLAGS.
{ # ...
pyfftw = buildPythonPackage rec {
name = "pyfftw-${version}";
version = "0.9.2";
src = pkgs.fetchurl {
url = "mirror://pypi/p/pyFFTW/pyFFTW-${version}.tar.gz";
sha256 = "f6bbb6afa93085409ab24885a1a3cdb8909f095a142f4d49e346f2bd1b789074";
};
buildInputs = [ pkgs.fftw pkgs.fftwFloat pkgs.fftwLongDouble ];
propagatedBuildInputs = with self; [ numpy scipy ];
# Tests cannot import pyfftw. pyfftw works fine though.
doCheck = false;
preConfigure = ''
export LDFLAGS="-L${pkgs.fftw.dev}/lib -L${pkgs.fftwFloat.out}/lib -L${pkgs.fftwLongDouble.out}/lib"
export CFLAGS="-I${pkgs.fftw.dev}/include -I${pkgs.fftwFloat.dev}/include -I${pkgs.fftwLongDouble.dev}/include"
'';
meta = {
description = "A pythonic wrapper around FFTW, the FFT library, presenting a unified interface for all the supported transforms";
homepage = http://hgomersall.github.com/pyFFTW/;
license = with licenses; [ bsd2 bsd3 ];
maintainers = with maintainers; [ fridh ];
};
};
}
Note also the line doCheck = false;: here we
explicitly disabled running the test suite.
As a Python developer you’re likely aware of
development
mode (python setup.py develop);
instead of installing the package this command creates a
special link to the project code. That way, you can run
updated code without having to reinstall after each and every
change you make. Development mode is also available. Let’s see
how you can use it.
In the previous Nix expression the source was fetched from an
url. We can also refer to a local source instead using
src = ./path/to/source/tree;
If we create a shell.nix file which calls
buildPythonPackage, and if
src is a local source, and if the local
source has a setup.py, then development
mode is activated.
In the following example we create a simple environment that
has a Python 3.5 version of our package in it, as well as its
dependencies and other packages we like to have in the
environment, all specified with
propagatedBuildInputs. Indeed, we can just
add any package we like to have in our environment to
propagatedBuildInputs.
with import <nixpkgs> {};
with pkgs.python35Packages;
buildPythonPackage rec {
name = "mypackage";
src = ./path/to/package/source;
propagatedBuildInputs = [ pytest numpy pkgs.libsndfile ];
}
It is important to note that due to how development mode is implemented on Nix it is not possible to have multiple packages simultaneously in development mode.
So far we discussed how you can use Python on Nix, and how you can develop with it. We’ve looked at how you write expressions to package Python packages, and we looked at how you can create environments in which specified packages are available.
At some point you’ll likely have multiple packages which you
would like to be able to use in different projects. In order to
minimise unnecessary duplication, we now look at how you can
maintain a repository with your own packages. The
important functions here are import and
callPackage.
Earlier we created a Python environment using
withPackages, and included the
toolz package via a let
expression. Let’s split the package definition from the
environment definition.
We first create a function that builds toolz
in ~/path/to/toolz/release.nix
{ pkgs, buildPythonPackage }:
buildPythonPackage rec {
name = "toolz-${version}";
version = "0.7.4";
src = pkgs.fetchurl {
url = "mirror://pypi/t/toolz/toolz-${version}.tar.gz";
sha256 = "43c2c9e5e7a16b6c88ba3088a9bfc82f7db8e13378be7c78d6c14a5f8ed05afd";
};
meta = {
homepage = "http://github.com/pytoolz/toolz/";
description = "List processing tools and functional utilities";
license = licenses.bsd3;
maintainers = with maintainers; [ fridh ];
};
}
It takes two arguments, pkgs and
buildPythonPackage. We now call this function
using callPackage in the definition of our
environment
with import <nixpkgs> {};
( let
toolz = pkgs.callPackage /path/to/toolz/release.nix {
pkgs = pkgs;
buildPythonPackage = pkgs.python35Packages.buildPythonPackage;
};
in pkgs.python35.withPackages (ps: [ ps.numpy toolz ])
).env
Important to remember is that the Python version for which the
package is made depends on the python
derivation that is passed to
buildPythonPackage. Nix tries to
automatically pass arguments when possible, which is why
generally you don’t explicitly define which
python derivation should be used. In the
above example we use buildPythonPackage that
is part of the set python35Packages, and in
this case the python35 interpreter is
automatically used.
Versions 2.7, 3.3, 3.4, 3.5 and 3.6 of the CPython interpreter
are available as respectively python27,
python33, python34,
python35 and python36. The
PyPy interpreter is available as pypy. The
aliases python2 and
python3 correspond to respectively
python27 and python35. The
default interpreter, python, maps to
python2. The Nix expressions for the
interpreters can be found in
pkgs/development/interpreters/python.
All packages depending on any Python interpreter get
$out/${python.sitePackages} appended to
$PYTHONPATH, if such a directory exists.
To reduce closure size the
Tkinter/tkinter is
available as a separate package,
pythonPackages.tkinter.
Each interpreter has the following attributes:
libPrefix. Name of the folder in
${python}/lib/ for corresponding
interpreter.
interpreter. Alias for
${python}/bin/${executable}.
buildEnv. Function to build python
interpreter environments with extra packages bundled
together. See section python.buildEnv
function for usage and documentation.
withPackages. Simpler interface to
buildEnv. See section
python.withPackages function for
usage and documentation.
sitePackages. Alias for
lib/${libPrefix}/site-packages.
executable. Name of the interpreter
executable, e.g. python3.4.
pkgs. Set of Python packages for that
specific interpreter. The package set can be modified by
overriding the interpreter and passing
packageOverrides.
Python libraries and applications that use
setuptools or distutils
are typically built with the
buildPythonPackage and
buildPythonApplication functions, respectively. These two
functions also support installing a wheel.
All Python packages reside in
pkgs/top-level/python-packages.nix and all
applications elsewhere. In case a package is used as both a
library and an application, then the package should be in
pkgs/top-level/python-packages.nix since only
those packages are made available for all interpreter versions.
The preferred location for library expressions is in
pkgs/development/python-modules. It is
important that these packages are called from
pkgs/top-level/python-packages.nix and not
elsewhere, to guarantee the right version of the package is
built.
Based on the packages defined in
pkgs/top-level/python-packages.nix an
attribute set is created for each available Python interpreter.
The available sets are
pkgs.python26Packages
pkgs.python27Packages
pkgs.python33Packages
pkgs.python34Packages
pkgs.python35Packages
pkgs.python36Packages
pkgs.pypyPackages
and the aliases
pkgs.python2Packages pointing to
pkgs.python27Packages
pkgs.python3Packages pointing to
pkgs.python35Packages
pkgs.pythonPackages pointing to
pkgs.python2Packages
The buildPythonPackage function is
implemented in
pkgs/development/interpreters/python/build-python-package.nix
The following is an example:
{ # ...
twisted = buildPythonPackage {
name = "twisted-8.1.0";
src = pkgs.fetchurl {
url = http://tmrc.mit.edu/mirror/twisted/Twisted/8.1/Twisted-8.1.0.tar.bz2;
sha256 = "0q25zbr4xzknaghha72mq57kh53qw1bf8csgp63pm9sfi72qhirl";
};
propagatedBuildInputs = [ self.ZopeInterface ];
meta = {
homepage = http://twistedmatrix.com/;
description = "Twisted, an event-driven networking engine written in Python";
license = stdenv.lib.licenses.mit;
};
};
}
The buildPythonPackage mainly does four
things:
In the buildPhase, it calls
${python.interpreter} setup.py bdist_wheel
to build a wheel binary zipfile.
In the installPhase, it installs the
wheel file using pip install *.whl.
In the postFixup phase, the
wrapPythonPrograms bash function is
called to wrap all programs in the
$out/bin/* directory to include
$PATH environment variable and add
dependent libraries to script’s
sys.path.
In the installCheck phase,
${python.interpreter} setup.py test is
run.
As in Perl, dependencies on other Python packages can be
specified in the buildInputs and
propagatedBuildInputs attributes. If
something is exclusively a build-time dependency, use
buildInputs; if it’s (also) a runtime
dependency, use propagatedBuildInputs.
By default tests are run because
doCheck = true. Test dependencies, like
e.g. the test runner, should be added to
buildInputs.
By default meta.platforms is set to the
same value as the interpreter, unless overridden.
All parameters from mkDerivation function
are still supported.
namePrefix: Prepended text to
${name} parameter. Defaults to
"python3.3-" for Python
3.3, etc. Set it to "" if
you’re packaging an application or a command line tool.
disabled: If true, the
package is not built for the particular Python interpreter
version. Grep around
pkgs/top-level/python-packages.nix
for examples.
setupPyBuildFlags: List of flags
passed to setup.py build_ext command.
pythonPath: List of packages to be
added into $PYTHONPATH. Packages in
pythonPath are not propagated
(contrary to propagatedBuildInputs).
preShellHook: Hook to execute
commands before shellHook.
postShellHook: Hook to execute
commands after shellHook.
makeWrapperArgs: A list of strings.
Arguments to be passed to
makeWrapper, which wraps generated
binaries. By default, the arguments to
makeWrapper set
PATH and
PYTHONPATH environment variables
before calling the binary. Additional arguments here can
allow a developer to set environment variables which
will be available when the binary is run. For example,
makeWrapperArgs = ["--set FOO BAR" "--set BAZ QUX"].
installFlags: A list of strings.
Arguments to be passed to
pip install. To pass options to
python setup.py install, use
--install-option. E.g.,
installFlags = [ "--install-option='--cpp_implementation'" ];.
format: Format of the source. Valid
options are setuptools (default),
flit, wheel, and
other. setuptools
is for when the source has a setup.py
and setuptools is used to build a
wheel, flit, in case
flit should be used to build a wheel,
and wheel in case a wheel is
provided. In case you need to provide your own
buildPhase and
installPhase you can use
other.
catchConflicts If
true, abort package build if a
package name appears more than once in dependency tree.
Default is true.
checkInputs Dependencies needed for
running the checkPhase. These are
added to buildInputs when
doCheck = true.
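A sketch combining several of these parameters, in the style of pkgs/top-level/python-packages.nix; the package, its source, and the isPy3k condition used here are illustrative:

```nix
{ # ...
  frobnicate = buildPythonPackage rec {
    name = "frobnicate-${version}";
    version = "1.0";
    src = pkgs.fetchurl { ... };
    # Do not prefix the name with the interpreter version
    # (appropriate for a command line tool):
    namePrefix = "";
    # Only build for Python 3; grep python-packages.nix for the exact idiom.
    disabled = !isPy3k;
    checkInputs = with self; [ pytest ];
    makeWrapperArgs = [ "--set FOO BAR" ];
  };
}
```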
The buildPythonApplication function is
practically the same as buildPythonPackage.
The difference is that buildPythonPackage
by default prefixes the names of the packages with the version
of the interpreter. Because with an application we’re not
interested in multiple versions, the prefix is dropped.
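A minimal sketch of an application built with buildPythonApplication (the name and source are placeholders):

```nix
{ # ...
  myapp = buildPythonApplication rec {
    name = "myapp-${version}";
    version = "1.0";
    src = pkgs.fetchurl { ... };
    # Runtime dependencies, just as with buildPythonPackage:
    propagatedBuildInputs = with self; [ requests2 ];
  };
}
```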
Python environments can be created using the low-level
pkgs.buildEnv function. This example shows
how to create an environment that has the Pyramid Web
Framework. Saving the following as
default.nix
with import <nixpkgs> {};
python.buildEnv.override {
extraLibs = [ pkgs.pythonPackages.pyramid ];
ignoreCollisions = true;
}
and running nix-build will create
/nix/store/cf1xhjwzmdki7fasgr4kz6di72ykicl5-python-2.7.8-env
with wrapped binaries in bin/.
You can also use the env attribute to
create local environments with needed packages installed. This
is somewhat comparable to virtualenv. For
example, running nix-shell with the
following shell.nix
with import <nixpkgs> {};
(python3.buildEnv.override {
extraLibs = with python3Packages; [ numpy requests2 ];
}).env
will drop you into a shell where Python will have the specified packages in its path.
extraLibs: List of packages installed
inside the environment.
postBuild: Shell command executed
after the build of environment.
ignoreCollisions: Ignore file
collisions inside the environment (default is
false).
The python.withPackages function provides a
simpler interface to the python.buildEnv
functionality. It takes a function as an argument that is
passed the set of python packages and returns the list of the
packages to be included in the environment. Using the
withPackages function, the previous example
for the Pyramid Web Framework environment can be written like
this:
with import <nixpkgs> {};
python.withPackages (ps: [ps.pyramid])
withPackages passes the correct package set
for the specific interpreter version as an argument to the
function. In the above example, ps equals
pythonPackages. But you can also easily
switch to using python3:
with import <nixpkgs> {};
python3.withPackages (ps: [ps.pyramid])
Now, ps is set to
python3Packages, matching the version of
the interpreter.
As python.withPackages simply uses
python.buildEnv under the hood, it also
supports the env attribute. The
shell.nix file from the previous section
can thus be also written like this:
with import <nixpkgs> {};
(python33.withPackages (ps: [ps.numpy ps.requests2])).env
In contrast to python.buildEnv,
python.withPackages does not support the
more advanced options such as
ignoreCollisions = true or
postBuild. If you need them, you have to
use python.buildEnv.
Python 2 namespace packages may provide
__init__.py that collide. In that case
python.buildEnv should be used with
ignoreCollisions = true.
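A sketch of such an environment; zope_interface and zope_event are used here as examples of namespace packages sharing a zope/__init__.py (attribute names may differ in your Nixpkgs version):

```nix
with import <nixpkgs> {};
# The colliding __init__.py files are ignored so the environment builds.
(python.buildEnv.override {
  extraLibs = with pythonPackages; [ zope_interface zope_event ];
  ignoreCollisions = true;
}).env
```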
Development or editable mode is supported. To develop Python
packages, buildPythonPackage has additional
logic inside shellPhase to run
pip install -e . --prefix $TMPDIR/ for the
package.
Warning: shellPhase is executed only if
setup.py exists.
Given a default.nix:
with import <nixpkgs> {};
buildPythonPackage { name = "myproject";
buildInputs = with pkgs.pythonPackages; [ pyramid ];
src = ./.; }
Running nix-shell with no arguments should
give you the environment in which the package would be built
with nix-build.
Shortcut to set up environments with C headers/libraries and Python packages:
nix-shell -p pythonPackages.pyramid zlib libjpeg git
Note: There is a boolean value lib.inNixShell
set to true if nix-shell is invoked.
Packages inside nixpkgs are written by hand. However, many tools exist in the community to help save time. No tool is preferred at the moment.
python2nix by Vladimir Kirillov
pypi2nix by Rok Garbas
pypi2nix by Jaka Hudoklin
Python 2.7, 3.5 and 3.6 are now built deterministically, and 3.4
mostly. Minor modifications had to be made to the interpreters
in order to generate deterministic bytecode. This has security
implications and is relevant for those using Python in a
nix-shell.
When the environment variable
DETERMINISTIC_BUILD is set, all bytecode will
have timestamp 1. The buildPythonPackage
function sets DETERMINISTIC_BUILD=1 and
PYTHONHASHSEED=0.
Both are also exported in nix-shell.
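The effect of PYTHONHASHSEED can be demonstrated outside Nix entirely: with a fixed seed, string hashes become reproducible across interpreter runs. This is a sketch of the underlying CPython behaviour, not part of buildPythonPackage itself:

```python
import os
import subprocess
import sys

def str_hash_with_seed(seed):
    """Run a fresh interpreter with PYTHONHASHSEED set and hash a string."""
    env = dict(os.environ, PYTHONHASHSEED=seed)
    out = subprocess.check_output(
        [sys.executable, "-c", "print(hash('nixpkgs'))"], env=env)
    return out.strip()

# With a fixed seed, two separate interpreter runs agree on the hash;
# without it, hash randomization would make them differ between runs.
assert str_hash_with_seed("0") == str_hash_with_seed("0")
```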
As explained in the user’s guide installing individual Python
packages imperatively with nix-env -i or
declaratively in environment.systemPackages
is not supported. However, it is possible to install a Python
environment with packages (python.buildEnv).
In the following examples we create an environment with Python
3.5, numpy and ipython. As
you might imagine, there is one limitation here: you can
install only one environment at a time. You will notice
complaints about collisions when you try to install a second
environment.
Create a file, e.g. build.nix, with the
following expression
with import <nixpkgs> {};
pkgs.python35.withPackages (ps: with ps; [ numpy ipython ])
and install it in your profile with
nix-env -if build.nix
Now you can use the Python interpreter, as well as the extra packages that you added to the environment.
If you prefer to, you could also add the environment as a package override to the Nixpkgs set.
{ # ...
packageOverrides = pkgs: with pkgs; {
myEnv = python35.withPackages (ps: with ps; [ numpy ipython ]);
};
}
and install it in your profile with
nix-env -iA nixpkgs.myEnv
Here we install using the attribute path and assume the
channel is named nixpkgs.
For the sake of completeness, here’s another example how to install the environment system-wide.
{ # ...
environment.systemPackages = with pkgs; [
(python35.withPackages(ps: with ps; [ numpy ipython ]))
];
}
Consider two packages A and
B that depend on each other. When packaging
B, a solution is to override package
A not to depend on B as an
input. The same should also be done when packaging
A.
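A sketch of this approach, using hypothetical packages a and b from the Python package set (the names and the override arguments are illustrative):

```nix
with import <nixpkgs> {};

# Hypothetical: b depends on a, and a in turn depends on b.
# When packaging b, pass in a variant of a built without b.
python35.pkgs.b.override {
  a = python35.pkgs.a.override { b = null; };
}
```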
We can override the interpreter and pass
packageOverrides. In the following example we
rename the pandas package and build it.
with import <nixpkgs> {};
let
python = let
packageOverrides = self: super: {
pandas = super.pandas.override {name="foo";};
};
in pkgs.python35.override {inherit packageOverrides;};
in python.pkgs.pandas
Using nix-build on this expression will build
the package pandas but with the new name
foo.
All packages in the package set will use the renamed package. A
typical use case is to switch to another version of a certain
package. For example, in the Nixpkgs repository we have multiple
versions of django and
scipy. In the following example we use a
different version of scipy and create an
environment that uses it. All packages in the Python package set
will now use the updated scipy version.
with import <nixpkgs> {};
( let
packageOverrides = self: super: {
scipy = super.scipy_0_17;
};
in (pkgs.python35.override {inherit packageOverrides;}).withPackages (ps: [ps.blaze])
).env
The requested package blaze depends on
pandas which itself depends on
scipy.
If you want the whole of Nixpkgs to use your modifications, then
you can use overlays as explained in this
manual. In the following example we build
inkscape using a different version of
numpy.
let
pkgs = import <nixpkgs> {};
newpkgs = import pkgs.path { overlays = [ (pkgsself: pkgssuper: {
python27 = let
packageOverrides = self: super: {
numpy = super.numpy_1_10;
};
in pkgssuper.python27.override {inherit packageOverrides;};
} ) ]; };
in newpkgs.inkscape
Executing python setup.py bdist_wheel in a
nix-shell fails with
ValueError: ZIP does not support timestamps before 1980
This is because files are included that depend on items in the
Nix store, which have a timestamp of 1, i.e. one second after
midnight on January 1st, 1970. And as the error informs you,
ZIP does not support that. The command
bdist_wheel takes into account
SOURCE_DATE_EPOCH, and
nix-shell sets this to 1. By setting it to a
value corresponding to 1980 or later, or by unsetting it, it is
possible to build wheels.
Use 1980 as timestamp:
nix-shell --run "SOURCE_DATE_EPOCH=315532800 python3 setup.py bdist_wheel"
or the current time:
nix-shell --run "SOURCE_DATE_EPOCH=$(date +%s) python3 setup.py bdist_wheel"
or unset:
nix-shell --run "unset SOURCE_DATE_EPOCH; python3 setup.py bdist_wheel"
If you get the following error:
could not create '/nix/store/6l1bvljpy8gazlsw2aw9skwwp4pmvyxw-python-2.7.8/etc': Permission denied
This is a
known
bug in setuptools. Setuptools
install_data does not respect
--prefix. An example of such package using
the feature is
pkgs/tools/X11/xpra/default.nix. As
workaround, install it as an extra preInstall
step:
${python.interpreter} setup.py install_data --install-dir=$out --root=$out
sed -i '/ = data_files/d' setup.py
On most operating systems a global
site-packages is maintained. This however
becomes problematic if you want to run multiple Python versions
or have multiple versions of certain libraries for your
projects. Generally, you would solve such issues by creating
virtual environments using virtualenv.
On Nix each package has an isolated dependency tree which, in
the case of Python, guarantees the right versions of the
interpreter and libraries or packages are available. There is
therefore no need to maintain a global
site-packages.
If you want to create a Python environment for development, then
the recommended method is to use nix-shell,
either with or without the python.buildEnv
function.
This is an example of a default.nix for a
nix-shell, which allows you to consume a
virtualenv environment and install Python
modules through pip the traditional way.
Create this default.nix file, together with a
requirements.txt and simply execute
nix-shell.
with import <nixpkgs> {};
with pkgs.python27Packages;
stdenv.mkDerivation {
name = "impurePythonEnv";
buildInputs = [
# these packages are required for virtualenv and pip to work:
#
python27Full
python27Packages.virtualenv
python27Packages.pip
# the following packages are related to the dependencies of your python
# project.
# In this particular example the python modules listed in the
# requirements.txt require the following packages to be installed locally
# in order to compile any binary extensions they may require.
#
taglib
openssl
git
libxml2
libxslt
libzip
stdenv
zlib ];
src = null;
shellHook = ''
# set SOURCE_DATE_EPOCH so that we can use python wheels
SOURCE_DATE_EPOCH=$(date +%s)
virtualenv --no-setuptools venv
export PATH=$PWD/venv/bin:$PATH
pip install -r requirements.txt
'';
}
Note that the pip install is an imperative
action. So every time nix-shell is executed
it will attempt to download the python modules listed in
requirements.txt. However, these will be cached locally within
the virtualenv folder and not downloaded
again.
If you need to change a package’s attribute(s) from
configuration.nix you could do:
nixpkgs.config.packageOverrides = superP: {
pythonPackages = superP.pythonPackages.override {
overrides = self: super: {
bepasty-server = super.bepasty-server.overrideAttrs ( oldAttrs: {
src = pkgs.fetchgit {
url = "https://github.com/bepasty/bepasty-server";
sha256 = "9ziqshmsf0rjvdhhca55sm0x8jz76fsf2q4rwh4m6lpcf8wr0nps";
rev = "e2516e8cf4f2afb5185337073607eb9e84a61d2d";
};
});
};
};
};
If you are using the bepasty-server package
somewhere, for example in systemPackages or
indirectly from services.bepasty, then a
nixos-rebuild switch will rebuild the system
but with the bepasty-server package using a
different src attribute. This way one can
modify python based software/libraries
easily. Using self and
super one can also alter dependencies
(buildInputs) between the old state
(self) and new state
(super).
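As a sketch of the dependency-altering case mentioned above, buildInputs can be modified in the same overrideAttrs call (the added dependency, xmltodict, is purely illustrative):

```nix
nixpkgs.config.packageOverrides = superP: {
  pythonPackages = superP.pythonPackages.override {
    overrides = self: super: {
      bepasty-server = super.bepasty-server.overrideAttrs (oldAttrs: {
        # add a dependency taken from the new package set (self)
        buildInputs = oldAttrs.buildInputs ++ [ self.xmltodict ];
      });
    };
  };
};
```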
The following rules should be respected:
Python libraries are supposed to be called from
python-packages.nix and packaged with
buildPythonPackage. The expression of a
library should be in
pkgs/development/python-modules/<name>/default.nix.
Libraries in
pkgs/top-level/python-packages.nix are
sorted quasi-alphabetically to avoid merge conflicts.
Python applications live outside of
python-packages.nix and are packaged with
buildPythonApplication.
Make sure libraries build for all Python interpreters.
By default we enable tests. Make sure the tests are found and, in the case of libraries, are passing for all interpreters. If certain tests fail they can be disabled individually. Try to avoid disabling the tests altogether. In any case, when you disable tests, leave a comment explaining why.
Commit names of Python libraries should include
pythonPackages, for example
pythonPackages.numpy: 1.11 -> 1.12.
Qt is a comprehensive desktop and mobile application development toolkit for C++. Legacy support is available for Qt 3 and Qt 4, but all current development uses Qt 5. The Qt 5 packages in Nixpkgs are updated frequently to take advantage of new features, but older versions are typically retained to support packages that may not be compatible with the latest version. When packaging applications and libraries for Nixpkgs, it is important to ensure that compatible versions of Qt 5 are used throughout; this consideration motivates the tools described below.
Libraries that depend on Qt 5 should be built with each available version to avoid linking a dependent package against incompatible versions of Qt 5. (Although Qt 5 maintains backward ABI compatibility, linking against multiple versions at once is generally not possible; at best it will lead to runtime faults.) Packages that provide libraries should be added to the top-level function mkLibsForQt5, which is used to build a set of libraries for every Qt 5 version. The callPackage provided in this scope will ensure that only one Qt version will be used throughout the dependency tree. Dependencies should be imported unqualified, i.e. qtbase not qt5.qtbase, so that callPackage can do its work. Do not import a package set such as qt5 or libsForQt5 into your package; although it may work fine in the moment, it could well break at the next Qt update.
If a library does not support a particular version of Qt 5, it is best to mark it as broken by setting its meta.broken attribute. A package may be marked broken for certain versions by testing the qtbase.version attribute, which will always give the current Qt 5 version.
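A sketch of such a marking, assuming the library breaks with Qt 5.9 and later (the package name and version bound are illustrative):

```nix
{ stdenv, qtbase }:

stdenv.mkDerivation {
  name = "some-qt-library";  # hypothetical package
  # ...
  meta = {
    # mark as broken for Qt >= 5.9
    broken = stdenv.lib.versionAtLeast qtbase.version "5.9";
  };
}
```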
Applications generally do not need to be built with every Qt version because they do not provide any libraries for dependent packages to link against. The primary consideration is merely ensuring that the application itself and its dependencies are linked against only one version of Qt. To call your application expression, use libsForQt5.callPackage instead of callPackage. Dependencies should be imported unqualified, i.e. qtbase not qt5.qtbase. Do not import a package set such as qt5 or libsForQt5 into your package; although it may work fine in the moment, it could well break at the next Qt update.
It is generally best to build an application package against the libsForQt5 library set. In case a package does not build with the latest Qt version, it is possible to pick a set pinned to a particular version, e.g. libsForQt55 for Qt 5.5, if that is the latest version the package supports.
Qt-based applications require that several paths be set at runtime. This is accomplished by wrapping the provided executables in a package with wrapQtProgram or makeQtWrapper during the postFixup phase. To use the wrapper generators, add makeQtWrapper to nativeBuildInputs. The wrapper generators support the same options as wrapProgram and makeWrapper respectively. It is usually only necessary to generate wrappers for programs intended to be invoked by the user.
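A minimal sketch of the wrapping step described above (the package and program names are hypothetical):

```nix
{ stdenv, makeQtWrapper }:

stdenv.mkDerivation {
  name = "my-qt-app";  # hypothetical application
  nativeBuildInputs = [ makeQtWrapper ];
  # ...
  postFixup = ''
    # set the Qt runtime paths for the installed executable
    wrapQtProgram $out/bin/my-qt-app
  '';
}
```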
The KDE Frameworks are a set of libraries for Qt 5 which form the basis of the Plasma desktop environment and the KDE Applications suite. Packaging a Frameworks-based library does not require any steps beyond those described above for general Qt-based libraries. Frameworks-based applications should not use makeQtWrapper; instead, use kdeWrapper to create the necessary wrappers: kdeWrapper { unwrapped = expr; targets = exes; }, where expr is the un-wrapped package expression and exes is a list of strings giving the relative paths to programs in the package which should be wrapped.
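For example (the package and executable names are hypothetical):

```nix
kdeWrapper {
  unwrapped = myFrameworksApp;            # the un-wrapped package expression
  targets = [ "bin/my-frameworks-app" ];  # relative paths of programs to wrap
}
```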
Define an environment for R that contains all the libraries that you’d like to use by adding the following snippet to your $HOME/.config/nixpkgs/config.nix file:
{
packageOverrides = super: let self = super.pkgs; in
{
rEnv = super.rWrapper.override {
packages = with self.rPackages; [
devtools
ggplot2
reshape2
yaml
optparse
];
};
};
}
Then you can use
nix-env -f "<nixpkgs>" -iA rEnv
to install it into your user profile. The set of available
libraries can be discovered by running the command
nix-env -f "<nixpkgs>" -qaP -A rPackages.
The first column of that output is the name that has to be
passed to rWrapper in the code snippet above.
However, if you’d like to add a file to your project source to
make the environment available for other contributors, you can
create a default.nix file like so:
let
pkgs = import <nixpkgs> {};
stdenv = pkgs.stdenv;
in with pkgs; {
myProject = stdenv.mkDerivation {
name = "myProject";
version = "1";
src = if pkgs.lib.inNixShell then null else ./.;
buildInputs = with rPackages; [
R
ggplot2
knitr
];
};
}
and then run nix-shell . to be dropped into a
shell with those packages available.
RStudio by default will not use the libraries installed like
above. You must override its R version with your custom R
environment, and set useRPackages to
true, like below:
{
packageOverrides = super: let self = super.pkgs; in
{
rEnv = super.rWrapper.override {
packages = with self.rPackages; [
devtools
ggplot2
reshape2
yaml
optparse
];
};
rstudioEnv = super.rstudio.override {
R = rEnv;
useRPackages = true;
};
};
}
Then like above,
nix-env -f "<nixpkgs>" -iA rstudioEnv
will install this into your user profile.
nix-shell generate-shell.nix
Rscript generate-r-packages.R cran > cran-packages.nix.new
mv cran-packages.nix.new cran-packages.nix
Rscript generate-r-packages.R bioc > bioc-packages.nix.new
mv bioc-packages.nix.new bioc-packages.nix
generate-r-packages.R <repo> reads
<repo>-packages.nix, hence the
renaming.
nix-build test-evaluation.nix --dry-run
If this exits fine, the expression is ok. If not, you have to edit
default.nix
There is currently support to bundle applications that are packaged as Ruby gems. The utility "bundix" allows you to write a Gemfile, let bundler create a Gemfile.lock, and then convert
this into a Nix expression that contains all gem dependencies automatically.
For example, to package sensu, we did:
$ cd pkgs/servers/monitoring
$ mkdir sensu
$ cd sensu
$ cat > Gemfile
source 'https://rubygems.org'
gem 'sensu'
$ nix-shell -p bundler --command "bundler package --path /tmp/vendor/bundle"
$ $(nix-build '<nixpkgs>' -A bundix)/bin/bundix
$ cat > default.nix
{ lib, bundlerEnv, ruby }:
bundlerEnv rec {
name = "sensu-${version}";
version = (import gemset).sensu.version;
inherit ruby;
# expects Gemfile, Gemfile.lock and gemset.nix in the same directory
gemdir = ./.;
meta = with lib; {
description = "A monitoring framework that aims to be simple, malleable, and scalable";
homepage = http://sensuapp.org/;
license = with licenses; mit;
maintainers = with maintainers; [ theuni ];
platforms = platforms.unix;
};
}
Please check in the Gemfile, Gemfile.lock and the gemset.nix so future updates can be run easily.
Resulting derivations also have two helpful items, env and wrapper. The first one allows one to quickly drop into
nix-shell with the specified environment present. E.g. nix-shell -A sensu.env would give you an environment with Ruby preset
so it has all the libraries necessary for sensu in its paths. The second one can be used to make derivations from custom Ruby scripts which have
Gemfiles with their dependencies specified. It is a derivation with ruby wrapped so it can find all the needed dependencies.
For example, to make a derivation my-script for a my-script.rb (which should be placed in bin) you should
run bundix as specified above and then use bundlerEnv like this:
let env = bundlerEnv {
name = "my-script-env";
inherit ruby;
gemfile = ./Gemfile;
lockfile = ./Gemfile.lock;
gemset = ./gemset.nix;
};
in stdenv.mkDerivation {
name = "my-script";
buildInputs = [ env.wrapper ];
script = ./my-script.rb;
buildCommand = ''
mkdir -p $out/bin
install -D -m755 $script $out/bin/my-script
patchShebangs $out/bin/my-script
'';
}
To install the rust compiler and cargo put
rustStable.rustc rustStable.cargo
into the environment.systemPackages or bring them
into scope with
nix-shell -p rustStable.rustc -p rustStable.cargo.
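In a NixOS configuration this looks like:

```nix
{ # ...
  environment.systemPackages = with pkgs; [
    rustStable.rustc
    rustStable.cargo
  ];
}
```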
There are also rustBeta and
rustNightly package sets available. These are not
updated very regularly. For daily builds see
Using the Rust
nightlies overlay.
Rust applications are packaged by using the
buildRustPackage helper from
rustPlatform:
with rustPlatform;
buildRustPackage rec {
name = "ripgrep-${version}";
version = "0.4.0";
src = fetchFromGitHub {
owner = "BurntSushi";
repo = "ripgrep";
rev = "${version}";
sha256 = "0y5d1n6hkw85jb3rblcxqas2fp82h3nghssa4xqrhqnz25l799pj";
};
depsSha256 = "0q68qyl2h6i0qsz82z840myxlnjay8p1w5z7hfyr8fqp7wgwa9cx";
meta = with stdenv.lib; {
description = "A utility that combines the usability of The Silver Searcher with the raw speed of grep";
homepage = https://github.com/BurntSushi/ripgrep;
license = with licenses; [ unlicense ];
maintainers = [ maintainers.tailhook ];
platforms = platforms.all;
};
}
buildRustPackage requires a
depsSha256 attribute which is computed over all
crate sources of this package. Currently it is obtained by
inserting a fake checksum into the expression and building the
package once. The correct checksum can then be taken from the
failed build.
To install crates with nix there is also an experimental project called nixcrates.
Mozilla provides an overlay for nixpkgs to bring a nightly version of Rust into scope. This overlay can also be used to install recent unstable or stable versions of Rust, if desired.
To use this overlay, clone
nixpkgs-mozilla,
and create a symbolic link to the file
rust-overlay.nix
in the ~/.config/nixpkgs/overlays directory.
$ git clone https://github.com/mozilla/nixpkgs-mozilla.git
$ mkdir -p ~/.config/nixpkgs/overlays
$ ln -s $(pwd)/nixpkgs-mozilla/rust-overlay.nix ~/.config/nixpkgs/overlays/rust-overlay.nix
The latest version can be installed with the following command:
$ nix-env -Ai nixos.rustChannels.stable.rust
Or using the attribute with nix-shell:
$ nix-shell -p nixos.rustChannels.stable.rust
To install the beta or nightly channel, “stable” should be substituted by “nightly” or “beta”, or use the function provided by this overlay to pull a version based on a build date.
The overlay automatically updates itself as it uses the same source as rustup.
Since release 15.09 there is a new TeX Live packaging that lives entirely under attribute texlive.
For basic usage just pull texlive.combined.scheme-basic for an environment with basic LaTeX support.
It typically won't work to use separately installed packages together. Instead, you can build a custom set of packages like this:
texlive.combine {
inherit (texlive) scheme-small collection-langkorean algorithms cm-super;
}
There are all the schemes, collections and a few thousand packages, as defined upstream (perhaps with tiny differences).
By default you only get executables and files needed during runtime, and a little documentation for the core packages. To change that, you need to add a pkgFilter function to combine.
texlive.combine {
# inherit (texlive) whatever-you-want;
pkgFilter = pkg:
pkg.tlType == "run" || pkg.tlType == "bin" || pkg.pname == "cm-super";
# elem tlType [ "run" "bin" "doc" "source" ]
# there are also other attributes: version, name
}
You can list packages e.g. by nix-repl.
$ nix-repl
nix-repl> :l <nixpkgs>
nix-repl> texlive.collection-<TAB>
Some tools are still missing, e.g. luajittex;
some apps aren't packaged/tested yet (asymptote, biber, etc.);
feature/bug: when a package is rejected by pkgFilter, its dependencies are still propagated;
in case of any bugs or feature requests, file a github issue or better a pull request and /cc @vcunat.
You’ll get a vim(-your-suffix) in PATH also loading the plugins you want. Loading can be deferred; see examples.
VAM (=vim-addon-manager) and Pathogen plugin managers are supported. Vundle and NeoBundle could be supported as well.
VAM introduced .json files supporting dependencies without versioning assuming that “using latest version” is ok most of the time.
First create a vim-scripts file having one plugin name per line. Example:
"tlib"
{'name': 'vim-addon-sql'}
{'filetype_regex': '\%(vim)$', 'names': ['reload', 'vim-dev-plugin']}
Such vim-scripts file can be read by VAM as well like this:
call vam#Scripts(expand('~/.vim-scripts'), {})
Create a default.nix file:
{ nixpkgs ? import <nixpkgs> {} }:
nixpkgs.vim_configurable.customize { name = "vim"; vimrcConfig.vam.pluginDictionaries = [ "vim-addon-vim2nix" ]; }
Create a generate.vim file:
ActivateAddons vim-addon-vim2nix
let vim_scripts = "vim-scripts"
call nix#ExportPluginsForNix({
\ 'path_to_nixpkgs': eval('{"'.substitute(substitute(substitute($NIX_PATH, ':', ',', 'g'), '=',':', 'g'), '\([:,]\)', '"\1"',"g").'"}')["nixpkgs"],
\ 'cache_file': '/tmp/vim2nix-cache',
\ 'try_catch': 0,
\ 'plugin_dictionaries': ["vim-addon-manager"]+map(readfile(vim_scripts), 'eval(v:val)')
\ })
Then run
nix-shell -p vimUtils.vim_with_vim2nix --command "vim -c 'source generate.vim'"
You should get a Vim buffer with the nix derivations (output1) and vam.pluginDictionaries (output2). You can add your vim to your system’s configuration file like this and start it by “vim-my”:
my-vim =
let plugins = let inherit (vimUtils) buildVimPluginFrom2Nix; in {
copy paste output1 here
}; in vim_configurable.customize {
name = "vim-my";
vimrcConfig.vam.knownPlugins = plugins; # optional
vimrcConfig.vam.pluginDictionaries = [
copy paste output2 here
];
# Pathogen would be
# vimrcConfig.pathogen.knownPlugins = plugins; # plugins
# vimrcConfig.pathogen.pluginNames = ["tlib"];
};
Sample output1:
"reload" = buildVimPluginFrom2Nix { # created by nix#NixDerivation
name = "reload";
src = fetchgit {
url = "git://github.com/xolox/vim-reload";
rev = "0a601a668727f5b675cb1ddc19f6861f3f7ab9e1";
sha256 = "0vb832l9yxj919f5hfg6qj6bn9ni57gnjd3bj7zpq7d4iv2s4wdh";
};
dependencies = ["vim-misc"];
};
[...]
Sample output2:
[
''vim-addon-manager''
''tlib''
{ "name" = ''vim-addon-sql''; }
{ "filetype_regex" = ''\%(vim)$$''; "names" = [ ''reload'' ''vim-dev-plugin'' ]; }
]
Table of Contents
This chapter contains information about how to use and maintain the Nix expressions for a number of specific packages, such as the Linux kernel or X.org.
The Nix expressions to build the Linux kernel are in pkgs/os-specific/linux/kernel.
The function that builds the kernel has an argument
kernelPatches which should be a list of
{name, patch, extraConfig} attribute sets, where
name is the name of the patch (which is included in
the kernel’s meta.description attribute),
patch is the patch itself (possibly compressed),
and extraConfig (optional) is a string specifying
extra options to be concatenated to the kernel configuration file
(.config).
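A sketch of such a list (the patch file and config option are illustrative):

```nix
kernelPatches = [
  { name = "example-fix";          # shows up in meta.description
    patch = ./example-fix.patch;   # hypothetical (possibly compressed) patch
    extraConfig = ''
      EXAMPLE_OPTION y
    '';                            # appended to the kernel .config
  }
];
```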
The kernel derivation exports an attribute
features specifying whether optional functionality
is or isn’t enabled. This is used in NixOS to implement
kernel-specific behaviour. For instance, if the kernel has the
iwlwifi feature (i.e. has built-in support for
Intel wireless chipsets), then NixOS doesn’t have to build the
external iwlwifi package:
modulesTree = [kernel] ++ pkgs.lib.optional (!kernel.features ? iwlwifi) kernelPackages.iwlwifi ++ ...;
How to add a new (major) version of the Linux kernel to Nixpkgs:
Copy the old Nix expression
(e.g. linux-2.6.21.nix) to the new one
(e.g. linux-2.6.22.nix) and update it.
Add the new kernel to all-packages.nix
(e.g., create an attribute
kernel_2_6_22).
Now we’re going to update the kernel configuration. First
unpack the kernel. Then for each supported platform
(i686, x86_64,
uml) do the following:
Copy the old
config (e.g. config-2.6.21-i686-smp) to
the new one
(e.g. config-2.6.22-i686-smp).
Copy the config file for this platform
(e.g. config-2.6.22-i686-smp) to
.config in the kernel source tree.
Run make oldconfig ARCH={i386,x86_64,um}
and answer all questions. (For the uml configuration, also
add SHELL=bash.) Make sure to keep the
configuration consistent between platforms (i.e. don’t
enable some feature on i686 and disable
it on x86_64).
If needed you can also run make
menuconfig:
$ nix-env -i ncurses
$ export NIX_CFLAGS_LINK=-lncurses
$ make menuconfig ARCH=arch
Copy .config over the new config
file (e.g. config-2.6.22-i686-smp).
Test building the kernel: nix-build -A
kernel_2_6_22. If it compiles, ship it! For extra
credit, try booting NixOS with it.
It may be that the new kernel requires updating the external
kernel modules and kernel-dependent packages listed in the
linuxPackagesFor function in
all-packages.nix (such as the NVIDIA drivers,
AUFS, etc.). If the updated packages aren’t backwards compatible
with older kernels, you may need to keep the older versions
around.
The Nix expressions for the X.org packages reside in
pkgs/servers/x11/xorg/default.nix. This file is
automatically generated from lists of tarballs in an X.org release.
As such it should not be modified directly; rather, you should modify
the lists, the generator script or the file
pkgs/servers/x11/xorg/overrides.nix, in which you
can override or add to the derivations produced by the
generator.
The generator is invoked as follows:
$ cd pkgs/servers/x11/xorg
$ cat tarballs-7.5.list extra.list old.list \
  | perl ./generate-expr-from-tarballs.pl
For each of the tarballs in the .list files, the
script downloads it, unpacks it, and searches its
configure.ac and *.pc.in
files for dependencies. This information is used to generate
default.nix. The generator caches downloaded
tarballs between runs. Pay close attention to the NOT FOUND:
messages at the end of the
run, since they may indicate missing dependencies. (Some might be
optional dependencies, however.)
A file like tarballs-7.5.list contains all
tarballs in an X.org release. It can be generated like this:
$ export i="mirror://xorg/X11R7.4/src/everything/"
$ cat $(PRINT_PATH=1 nix-prefetch-url $i | tail -n 1) \
| perl -e 'while (<>) { if (/(href|HREF)="([^"]*.bz2)"/) { print "$ENV{'i'}$2\n"; }; }' \
| sort > tarballs-7.4.list
extra.list contains libraries that aren’t part of
X.org proper, but are closely related to it, such as
libxcb. old.list contains
some packages that were removed from X.org, but are still needed by
some people or by other packages (such as
imake).
If the expression for a package requires derivation attributes
that the generator cannot figure out automatically (say,
patches or a postInstall hook),
you should modify
pkgs/servers/x11/xorg/overrides.nix.
The Nix expressions related to the Eclipse platform and IDE are in
pkgs/applications/editors/eclipse.
Nixpkgs provides a number of packages that will install Eclipse in its various forms. These range from the bare-bones Eclipse Platform to the more fully featured Eclipse SDK or Scala-IDE packages, and multiple versions are often available. It is possible to list the available Eclipse packages by issuing the command:
$ nix-env -f '<nixpkgs>' -qaP -A eclipses --description
Once an Eclipse variant is installed it can be run using the eclipse command, as expected. From within Eclipse it is then possible to install plugins in the usual manner by either manually specifying an Eclipse update site or by installing the Marketplace Client plugin and using it to discover and install other plugins. This installation method provides an Eclipse installation that closely resembles a manually installed Eclipse.
If you prefer to install plugins in a more declarative manner then
Nixpkgs also offers a number of Eclipse plugins that can be
installed in an Eclipse environment. This
type of environment is created using the function
eclipseWithPlugins found inside the
nixpkgs.eclipses attribute set. This function
takes as argument { eclipse, plugins ? [], jvmArgs ? []
} where eclipse is one of the
Eclipse packages described above, plugins is a
list of plugin derivations, and jvmArgs is a
list of arguments given to the JVM running Eclipse. For
example, say you wish to install the latest Eclipse Platform with
the popular Eclipse Color Theme plugin and also allow Eclipse to
use more RAM. You could then add
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [ plugins.color-theme ];
};
}
to your Nixpkgs configuration
(~/.config/nixpkgs/config.nix) and install it by
running nix-env -f '<nixpkgs>' -iA
myEclipse and afterward run Eclipse as usual. It is
possible to find out which plugins are available for installation
using eclipseWithPlugins by running
$ nix-env -f '<nixpkgs>' -qaP -A eclipses.plugins --description
If there is a need to install plugins that are not available in
Nixpkgs then it may be possible to define these plugins outside
Nixpkgs using the buildEclipseUpdateSite and
buildEclipsePlugin functions found in the
nixpkgs.eclipses.plugins attribute set. Use the
buildEclipseUpdateSite function to install a
plugin distributed as an Eclipse update site. This function takes
{ name, src } as argument where
src indicates the Eclipse update site archive.
All Eclipse features and plugins within the downloaded update site
will be installed. When an update site archive is not available
then the buildEclipsePlugin function can be
used to install a plugin that consists of a pair of feature and
plugin JARs. This function takes an argument { name,
srcFeature, srcPlugin } where
srcFeature and srcPlugin are
the feature and plugin JARs, respectively.
Expanding the previous example with two plugins using the above functions we have
packageOverrides = pkgs: {
myEclipse = with pkgs.eclipses; eclipseWithPlugins {
eclipse = eclipse-platform;
jvmArgs = [ "-Xmx2048m" ];
plugins = [
plugins.color-theme
(plugins.buildEclipsePlugin {
name = "myplugin1-1.0";
srcFeature = fetchurl {
url = "http://…/features/myplugin1.jar";
sha256 = "123…";
};
srcPlugin = fetchurl {
url = "http://…/plugins/myplugin1.jar";
sha256 = "123…";
};
});
(plugins.buildEclipseUpdateSite {
name = "myplugin2-1.0";
src = fetchurl {
stripRoot = false;
url = "http://…/myplugin2.zip";
sha256 = "123…";
};
});
];
};
}
The Nix expressions for Elm reside in
pkgs/development/compilers/elm. They are generated
automatically by the update-elm.rb script. One should
specify versions of Elm packages inside the script, clear the
packages directory and run the script from inside it.
elm-reactor is special because it also has Elm package
dependencies. The process is not very automated for now -- you should
get the elm-reactor source tree (e.g. with
nix-shell) and run elm2nix.rb inside
it. Place the resulting package.nix file into
packages/elm-reactor-elm.nix.
autojump needs its shell integration to be useful, but unlike other systems, Nix doesn't have a standard share directory location. This is why an autojump-share script is shipped that prints the location of the shared folder. This can then be used in your .bashrc like this:
source "$(autojump-share)/autojump.bash"
Steam is distributed as a .deb file, for now only
as an i686 package (the amd64 package only has documentation).
When unpacked, it has a script called steam that
on Ubuntu (their target distro) would go to /usr/bin.
When run for the first time, this script copies some
files to the user's home, which include another script that is
ultimately responsible for launching the steam binary, which is also
in $HOME.
Nix problems and constraints:
We don't have /bin/bash and many
scripts point there. Similarly for /usr/bin/python.
We don't have the dynamic loader in /lib.
The steam.sh script in $HOME can
not be patched, as it is checked and rewritten by steam.
The steam binary cannot be patched, it's also checked.
The current approach to deploy Steam in NixOS is composing a FHS-compatible chroot environment, as documented here. This allows us to have binaries in the expected paths without disrupting the system, and to avoid patching them to work in a non FHS environment.
For 64-bit systems it's important to have
hardware.opengl.driSupport32Bit = true;
in your /etc/nixos/configuration.nix. You'll also need
hardware.pulseaudio.support32Bit = true;
if you are using PulseAudio - this will enable 32bit ALSA apps integration. To use the Steam controller, you need to add
services.udev.extraRules = ''
SUBSYSTEM=="usb", ATTRS{idVendor}=="28de", MODE="0666"
KERNEL=="uinput", MODE="0660", GROUP="users", OPTIONS+="static_node=uinput"
'';
to your configuration.
Try to run
strace steam
to see what is causing steam to fail.
The open source radeon drivers need a newer libc++ than is provided by the default runtime, which leads to a crash on launch. Use
environment.systemPackages = [ (pkgs.steam.override { newStdcpp = true; }) ];
in your config if you get an error like
libGL error: unable to load driver: radeonsi_dri.so
libGL error: driver pointer missing
libGL error: failed to load driver: radeonsi
libGL error: unable to load driver: swrast_dri.so
libGL error: failed to load driver: swrast
Steam ships statically linked with a version of libcrypto that conflicts with the one dynamically loaded by radeonsi_dri.so. If you get the error
steam.sh: line 713: 7842 Segmentation fault (core dumped)
have a look at this pull request.
There is no java in steam chrootenv by default. If you get a message like
/home/foo/.local/share/Steam/SteamApps/common/towns/towns.sh: line 1: java: command not found
You need to add
steam.override { withJava = true; };
to your configuration.
The FHS-compatible chroot used for steam can also be used to run other linux games that expect a FHS environment. To do it, add
(pkgs.steam.override {
  nativeOnly = true;
  newStdcpp = true;
}).run
to your configuration, rebuild, and run the game with
steam-run ./foo
Table of Contents
This chapter describes how to extend and change Nixpkgs packages using overlays. Overlays are used to add layers in the fix-point used by Nixpkgs to compose the set of all packages.
Nixpkgs looks for the set of overlays in the following places. Only the first one present is considered, and all the rest are ignored:
As an argument of the imported attribute set. When importing Nixpkgs,
the overlays attribute argument can be set to a list of
functions, which is described in Section 11.2, “Overlays Layout”.
In the directory pointed to by the Nix search path entry
<nixpkgs-overlays>.
In the directory ~/.config/nixpkgs/overlays/.
For the second and third options, the directory should contain Nix expressions defining the
overlays. Each overlay can be a file, a directory containing a
default.nix, or a symlink to one of those. The expressions should follow
the syntax described in Section 11.2, “Overlays Layout”.
The order of the overlay layers can influence the recipe of packages if multiple layers override the same recipe. In the case where overlays are loaded from a directory, they are loaded in alphabetical order.
To install an overlay using the last option, you can clone the overlay's repository and add
a symbolic link to it in the ~/.config/nixpkgs/overlays/ directory.
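For the first option, an overlay can be passed directly to the overlays argument when importing Nixpkgs. A minimal sketch, reusing the boost override shown later in this chapter (the override itself is only illustrative):

```nix
# Import Nixpkgs with an inline overlay (sketch).
import <nixpkgs> {
  overlays = [
    (self: super: {
      # Illustrative override: build boost against Python 3.
      boost = super.boost.override { python = self.python3; };
    })
  ];
}
```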
Overlays are expressed as Nix functions which accept two arguments and return a set of packages.
self: super:
{
boost = super.boost.override {
python = self.python3;
};
rr = super.callPackage ./pkgs/rr {
stdenv = self.stdenv_32bit;
};
}
The first argument, usually named self, corresponds to the final package
set. You should use this set for the dependencies of all packages specified in your
overlay. For example, all the dependencies of rr in the example above come
from self, as do the overridden dependencies used in the
boost override.
The second argument, usually named super,
corresponds to the result of the evaluation of the previous stages of
Nixpkgs. It does not contain any of the packages added by the current
overlay nor any of the following overlays. This set should be used either
to refer to packages you wish to override, or to access functions defined
in Nixpkgs. For example, the original recipe of boost
in the above example, comes from super, as well as the
callPackage function.
The value returned by this function should be a set similar to
pkgs/top-level/all-packages.nix, which contains
overridden and/or new packages.
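Besides overriding existing packages, an overlay can also introduce entirely new attributes. As a sketch (the package name and expression path below are hypothetical):

```nix
self: super:

{
  # Hypothetical new package built from a local expression via callPackage,
  # which comes from super as described above.
  my-tool = super.callPackage ./pkgs/my-tool { };
}
```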
Table of Contents
Use 2 spaces of indentation per indentation level in Nix expressions, 4 spaces in shell scripts.
Do not use tab characters, i.e. configure your
editor to use soft tabs. For instance, use (setq-default
indent-tabs-mode nil) in Emacs. Everybody has different
tab settings so it’s asking for trouble.
Use lowerCamelCase for variable
names, not UpperCamelCase. TODO: naming of
attributes in
all-packages.nix?
Function calls with attribute set arguments are written as
foo {
  arg = ...;
}
not
foo
{
  arg = ...;
}
Also fine is
foo { arg = ...; }
if it's a short call.
In attribute sets or lists that span multiple lines, the attribute names or list elements should be aligned:
# A long list.
list =
  [ elem1
    elem2
    elem3
  ];
# A long attribute set.
attrs =
  { attr1 = short_expr;
    attr2 =
      if true then big_expr else big_expr;
  };
# Alternatively:
attrs = {
  attr1 = short_expr;
  attr2 =
    if true then big_expr else big_expr;
};
Short lists or attribute sets can be written on one line:
# A short list.
list = [ elem1 elem2 elem3 ];
# A short set.
attrs = { x = 1280; y = 1024; };
Breaking in the middle of a function argument can give hard-to-read code, like
someFunction { x = 1280;
  y = 1024; } otherArg
  yetAnotherArg
(especially if the argument is very large, spanning multiple lines).
Better:
someFunction
  { x = 1280; y = 1024; }
  otherArg
  yetAnotherArg
or
let res = { x = 1280; y = 1024; };
in someFunction res otherArg yetAnotherArg
The bodies of functions, asserts, and withs are not indented to prevent a lot of superfluous indentation levels, i.e.
{ arg1, arg2 }:
assert system == "i686-linux";
stdenv.mkDerivation { ...
not
{ arg1, arg2 }:
  assert system == "i686-linux";
  stdenv.mkDerivation { ...
Function formal arguments are written as:
{ arg1, arg2, arg3 }:
but if they don't fit on one line they're written as:
{ arg1, arg2, arg3
, arg4, ...
, # Some comment...
  argN
}:
Functions should list their expected arguments as precisely as possible. That is, write
{ stdenv, fetchurl, perl }: ...
instead of
args: with args; ...
or
{ stdenv, fetchurl, perl, ... }: ...
For functions that are truly generic in the number of
arguments (such as wrappers around mkDerivation)
that have some required arguments, you should write them using an
@-pattern:
{ stdenv, doCoverageAnalysis ? false, ... } @ args:
stdenv.mkDerivation (args // {
... if doCoverageAnalysis then "bla" else "" ...
})
instead of
args:
args.stdenv.mkDerivation (args // {
... if args ? doCoverageAnalysis && args.doCoverageAnalysis then "bla" else "" ...
})
In Nixpkgs, there are generally three different names associated with a package:
The name attribute of the
derivation (excluding the version part). This is what most users
see, in particular when using
nix-env.
The variable name used for the instantiated package
in all-packages.nix, and when passing it as a
dependency to other functions. This is what Nix expression authors
see. It can also be used when installing using nix-env
-iA.
The filename for (the directory containing) the Nix expression.
Most of the time, these are the same. For instance, the package
e2fsprogs has a name attribute
"e2fsprogs-version", is
bound to the variable name e2fsprogs in
all-packages.nix, and the Nix expression is in
pkgs/os-specific/linux/e2fsprogs/default.nix.
There are a few naming guidelines:
Generally, try to stick to the upstream package name.
Don’t use uppercase letters in the
name attribute — e.g.,
"mplayer-1.0rc2" instead of
"MPlayer-1.0rc2".
The version part of the name
attribute must start with a digit (following a
dash) — e.g., "hello-0.3.1rc2".
If a package is not a release but a commit from a repository, then
the version part of the name must be the date of that
(fetched) commit. The date must be in "YYYY-MM-DD" format.
Also append "unstable" to the name - e.g.,
"pkgname-unstable-2014-09-23".
Dashes in the package name should be preserved
in new variable names, rather than converted to underscores
(which was the convention up to around 2013, and most names
still have underscores instead of dashes) — e.g.,
http-parser instead of
http_parser.
If there are multiple versions of a package, this
should be reflected in the variable names in
all-packages.nix,
e.g. json-c-0-9 and json-c-0-11.
If there is an obvious “default” version, make an attribute like
json-c = json-c-0-9;.
See also Section 12.3.2, “Versioning”
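In all-packages.nix these conventions might look like the following sketch (the expression paths are illustrative):

```nix
# Sketch of all-packages.nix entries; the .nix file paths are hypothetical.
json-c-0-9 = callPackage ../development/libraries/json-c/0.9.nix { };
json-c-0-11 = callPackage ../development/libraries/json-c/0.11.nix { };
# The obvious "default" version gets a plain alias:
json-c = json-c-0-9;
```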
Names of files and directories should be in lowercase, with
dashes between words — not in camel case. For instance, it should be
all-packages.nix, not
allPackages.nix or
AllPackages.nix.
Each package should be stored in its own directory somewhere in
the pkgs/ tree, i.e. in
pkgs/category/subcategory/.../pkgname.
Below are some rules for picking the right category for a package.
Many packages fall under several categories; what matters is the
primary purpose of a package. For example, the
libxml2 package builds both a library and some
tools; but it’s a library foremost, so it goes under
pkgs/development/libraries.
When in doubt, consider refactoring the
pkgs/ tree, e.g. creating new categories or
splitting up an existing category.
development/libraries (e.g. libxml2)
development/compilers (e.g. gcc)
development/interpreters (e.g. guile)
development/tools/parsing (e.g. bison, flex)
development/tools/build-managers (e.g. gnumake)
development/tools/misc (e.g. binutils)
development/misc
(A tool is a relatively small program, especially one intended to be used non-interactively.)
tools/networking (e.g. wget)
tools/text (e.g. diffutils)
tools/system (e.g. cron)
tools/archivers (e.g. zip, tar)
tools/compression (e.g. gzip, bzip2)
tools/security (e.g. nmap, gnupg)
tools/misc
shells (e.g. bash)
servers/http (e.g. apache-httpd)
servers/x11 (e.g. xorg — this includes the client libraries and programs)
servers/misc
desktops (e.g. kde, gnome, enlightenment)
applications/window-managers (e.g. awesome, compiz, stumpwm)
A (typically large) program with a distinct user interface, primarily used interactively.
applications/version-management (e.g. subversion)
applications/video (e.g. vlc)
applications/graphics (e.g. gimp)
applications/networking/mailreaders (e.g. thunderbird)
applications/networking/newsreaders (e.g. pan)
applications/networking/browsers (e.g. firefox)
applications/networking/misc
applications/misc
data/fonts
data/sgml+xml/schemas/xml-dtd (e.g. docbook)
data/sgml+xml/stylesheets/xslt (e.g. docbook-xsl)
(Okay, these are executable...)
games
misc
Because every version of a package in Nixpkgs creates a potential maintenance burden, old versions of a package should not be kept unless there is a good reason to do so. For instance, Nixpkgs contains several versions of GCC because other packages don’t build with the latest version of GCC. Other examples are having both the latest stable and latest pre-release version of a package, or keeping several major releases of an application that differ significantly in functionality.
If there is only one version of a package, its Nix expression
should be named e2fsprogs/default.nix. If there
are multiple versions, this should be reflected in the filename,
e.g. e2fsprogs/1.41.8.nix and
e2fsprogs/1.41.9.nix. The version in the
filename should leave out unnecessary detail. For instance, if we
keep the latest Firefox 2.0.x and 3.5.x versions in Nixpkgs, they
should be named firefox/2.0.nix and
firefox/3.5.nix, respectively (which, at a given
point, might contain versions 2.0.0.20 and
3.5.4). If a version requires many auxiliary
files, you can use a subdirectory for each version,
e.g. firefox/2.0/default.nix and
firefox/3.5/default.nix.
All versions of a package must be included
in all-packages.nix to make sure that they
evaluate correctly.
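For the Firefox example above, the corresponding all-packages.nix entries might look like this sketch (the attribute names are illustrative):

```nix
# Sketch; both versions are exposed so that both evaluate.
firefox2 = callPackage ../applications/networking/browsers/firefox/2.0.nix { };
firefox35 = callPackage ../applications/networking/browsers/firefox/3.5.nix { };
```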
There are multiple ways to fetch a package source in Nixpkgs. The
general guideline is that you should package sources with a high degree of
availability. Right now the only fetcher with mirroring
support is fetchurl. Note that you should also
prefer protocols which have a corresponding proxy environment variable.
You can find many source fetch helpers in pkgs/build-support/fetch*.
In the file pkgs/top-level/all-packages.nix you can
find fetch helpers with names of the form
fetchFrom*. These are intended to provide
snapshot fetches while using the same API as some of the version-controlled
fetchers from pkgs/build-support/. As an example, going
from bad to good:
Bad: Uses git:// which won't be proxied.
src = fetchgit {
url = "git://github.com/NixOS/nix.git";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
Better: This is ok, but an archive fetch will still be faster.
src = fetchgit {
url = "https://github.com/NixOS/nix.git";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "1cw5fszffl5pkpa6s6wjnkiv6lm5k618s32sp60kvmvpy7a2v9kg";
}
Best: Fetches a snapshot archive and you get the rev you want.
src = fetchFromGitHub {
owner = "NixOS";
repo = "nix";
rev = "1f795f9f44607cc5bec70d1300150bfefcef2aae";
sha256 = "04yri911rj9j19qqqn6m82266fl05pz98inasni0vxr1cf1gdgv9";
}
Only patches that are unique to nixpkgs should be
included in nixpkgs source.
Patches available online should be retrieved using
fetchpatch.
patches = [
(fetchpatch {
name = "fix-check-for-using-shared-freetype-lib.patch";
url = "http://git.ghostscript.com/?p=ghostpdl.git;a=patch;h=8f5d285";
sha256 = "1f0k043rng7f0rfl9hhb89qzvvksqmkrikmm38p61yfx51l325xr";
})
];
Table of Contents
Fork the repository on GitHub.
Create a branch for your future fix.
You can create your branch from a commit matching your local nixos-version. That will help you avoid additional local compilation, because you will receive packages from the binary cache.
For example: nixos-version returns 15.05.git.0998212 (Dingo). So you can do:
$ git checkout 0998212
$ git checkout -b 'fix/pkg-name-update'
Please avoid working directly on the master branch.
Make commits of logical units.
If you removed pkgs, made some major NixOS changes etc., write about them in nixos/doc/manual/release-notes/rl-unstable.xml.
Check for unnecessary whitespace with git diff --check before committing.
Format the commit message in the following way:
(pkg-name | service-name): (from -> to | init at version | refactor | etc) Additional information.
Examples:
nginx: init at 2.0.1
firefox: 3.0 -> 3.1.1
hydra service: add bazBaz option
nginx service: refactor config generation
Test your changes. If you work with
nixpkgs:
If you updated a pkg:
nix-env -i pkg-name -f <path to your local nixpkgs folder>
If you added a pkg:
Make sure it's in pkgs/top-level/all-packages.nix
nix-env -i pkg-name -f <path to your local nixpkgs folder>
If you don't want to install the pkg in your profile:
nix-build -A pkg-attribute-name <path to your local nixpkgs folder>/default.nix and check the results in the folder result. It will appear in the same directory where you ran nix-build.
If you did nix-env -i pkg-name you can do nix-env -e pkg-name to uninstall it from your system.
NixOS and its modules:
You can add a new module to your NixOS configuration file (usually /etc/nixos/configuration.nix), and run sudo nixos-rebuild test -I nixpkgs=<path to your local nixpkgs folder> --fast.
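A sketch of what enabling a module under test might look like in the configuration file (the service and option names here are hypothetical):

```nix
# /etc/nixos/configuration.nix (sketch)
{ config, pkgs, ... }:

{
  # Enable the module you are testing; this option name is illustrative only.
  services.myService.enable = true;
}
```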
If you have commits like pkg-name: oh, forgot to insert whitespace, squash them. Use git rebase -i.
Rebase your branch against the current master.
Push your changes to your fork of nixpkgs.
Create pull request:
Write the title in format (pkg-name | service): improvement.
If you update the pkg, write versions from -> to.
Write in comment if you have tested your patch. Do not rely much on TravisCI.
If you make an improvement, write about your motivation.
Notify maintainers of the package. For example add to the message: cc @jagajaga @domenkozar.
Make the appropriate changes in your branch.
Don't create additional commits; instead do
git rebase -i
and git push --force to your branch.
Commits must be sufficiently tested before being merged, both for the master and staging branches.
Hydra builds for master and staging should not be used as a testing platform; it's a build farm for changes that have already been tested.
When changing the bootloader installation process, extra care must be taken. Grub installations cannot be rolled back, hence changes may break people's installations forever. For any non-trivial change to the bootloader please file a PR asking for review, especially from @edolstra.
The master branch should only see non-breaking commits that do not cause mass rebuilds.
The staging branch is only for non-breaking mass-rebuild commits. That means it's not to be used for testing, and changes must have been well tested already. Read the policy here.
If the branch is already in a broken state, please refrain from adding extra new breakages. Stabilize it for a few days, merge into master, then resume development on staging. Keep an eye on the staging evaluations here. If any fixes for staging happen to be already in master, then master can be merged into staging.
If you're cherry-picking a commit to a stable release branch, always use git cherry-pick -xe and ensure the message contains a clear description about why this needs to be included in the stable branch.
An example of a cherry-picked commit would look like this:
nixos: Refactor the world.
The original commit message describing the reason why the world was torn apart.
(cherry picked from commit abcdef)
Reason: I just had a gut feeling that this would also be wanted by people from
the stone age.
Table of Contents
The nixpkgs project receives a fairly high number of contributions via GitHub pull requests. Reviewing and approving these is an important task and a way to contribute to the project.
The high change rate of nixpkgs makes any pull request that is open long enough subject to conflicts that will require extra work from the submitter or the merger. Reviewing pull requests in a timely manner and being responsive to comments is the key to avoiding this. GitHub provides sort filters that can be used to see the most recently and the least recently updated pull requests.
When reviewing a pull request, please always be nice and polite. Controversial changes can lead to controversial opinions, but it is important to respect every community member and their work.
GitHub provides reactions; they are a simple and quick way to provide feedback on pull requests or any comments. The thumbs-down reaction should be used with care, and if possible accompanied by some explanation so the submitter has directions to improve their contribution.
Pull request reviews should include a list of what has been reviewed in a comment, so other reviewers and mergers can know the state of the review.
All the review template samples provided in this section are generic and meant as examples. Their usage is optional, and the reviewer is free to adapt them to their liking.
A package update is the most trivial and common type of pull request. These pull requests mainly consist of updating the version part of the package name and the source hash.
It can happen that non-trivial updates include patches or more complex changes.
Reviewing process:
Add labels to the pull-request. (Requires commit rights)
8.has: package (update) and any topic
label that fit the updated package.
Ensure that the package versioning is fitting the guidelines.
Ensure that the commit text is fitting the guidelines.
Ensure that the package maintainers are notified.
mention-bot usually notifies GitHub users based on the submitted changes, but it can happen that it misses some of the package maintainers.
Ensure that the meta field contains correct information.
License can change with version updates, so it should be checked to be fitting upstream license.
If the package has no maintainer, a maintainer must be set. This can be the update submitter or a community member who accepts to take maintainership of the package.
Ensure that the code contains no typos.
Building the package locally.
Pull-requests are often targeted to the master or staging branch so building the pull-request locally as it is submitted can trigger a large amount of source builds.
It is possible to rebase the changes on nixos-unstable or nixpkgs-unstable for easier review by running the following commands from a nixpkgs clone.
$ git remote add channels https://github.com/NixOS/nixpkgs-channels.git
$ git fetch channels nixos-unstable
$ git fetch origin pull/PRNUMBER/head
$ git rebase --onto nixos-unstable BASEBRANCH FETCH_HEAD
The first command adds the nixpkgs-channels repository as a remote and only needs to be done once. The second fetches the nixos-unstable branch, the third fetches the pull request changes, and the fourth rebases the pull request changes onto the nixos-unstable branch.
The nox
tool can be used to review a pull-request content in a single command.
It doesn't rebase on a channel branch so it might trigger multiple
source builds. PRNUMBER should be replaced by the
number at the end of the pull-request title.
$ nix-shell -p nox --run "nox-review -k pr PRNUMBER"
Running every binary.
Example 14.1. Sample template for a package update review
##### Reviewed points
- [ ] package name fits guidelines
- [ ] package version fits guidelines
- [ ] package build on ARCHITECTURE
- [ ] executables tested on ARCHITECTURE
- [ ] all depending packages build
##### Possible improvements
##### Comments
New packages are a common type of pull request. These pull requests consist of adding a new Nix expression for a package.
Reviewing process:
Add labels to the pull-request. (Requires commit rights)
8.has: package (new) and any topic
label that fit the new package.
Ensure that the package versioning is fitting the guidelines.
Ensure that the commit name is fitting the guidelines.
Ensure that the meta field contains correct information.
License must be checked to be fitting upstream license.
Platforms should be set or the package will not get binary substitutes.
A maintainer must be set; this can be the package submitter or a community member who accepts to take maintainership of the package.
Ensure that the code contains no typos.
Ensure the package source.
Mirror URLs should be used when available.
The most appropriate function should be used (e.g.
packages from GitHub should use
fetchFromGitHub).
Building the package locally.
Running every binary.
Example 14.2. Sample template for a new package review
##### Reviewed points
- [ ] package path fits guidelines
- [ ] package name fits guidelines
- [ ] package version fits guidelines
- [ ] package build on ARCHITECTURE
- [ ] executables tested on ARCHITECTURE
- [ ] `meta.description` is set and fits guidelines
- [ ] `meta.license` fits upstream license
- [ ] `meta.platforms` is set
- [ ] `meta.maintainers` is set
- [ ] build time only dependencies are declared in `nativeBuildInputs`
- [ ] source is fetched using the appropriate function
- [ ] phases are respected
- [ ] patches that are remotely available are fetched with `fetchpatch`
##### Possible improvements
##### Comments
Module updates are submissions changing modules in some way. These often contain changes to the options or introduce new options.
Reviewing process
Add labels to the pull-request. (Requires commit rights)
8.has: module (update) and any topic
label that fit the module.
Ensure that the module maintainers are notified.
Mention-bot notifies GitHub users based on the submitted changes, but it can happen that it misses some of the package maintainers.
Ensure that the module tests, if any, are succeeding.
Ensure that the introduced options are correct.
Types should be appropriate (string-related types differ
in their merging capabilities; optionSet and
string types are deprecated).
Description, default and example should be provided.
Ensure that option changes are backward compatible.
The mkRenamedOptionModule and
mkAliasOptionModule functions provide a way to make
option changes backward compatible.
Ensure that removed options are declared with
mkRemovedOptionModule.
Ensure that changes that are not backward compatible are mentioned in release notes.
Ensure that documentation affected by the change is updated.
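A sketch of keeping renamed and removed options backward compatible with these helpers (the option paths and message shown are hypothetical):

```nix
{ lib, ... }:

{
  imports = [
    # Hypothetical option paths, shown only to illustrate the helpers.
    (lib.mkRenamedOptionModule
      [ "services" "foo" "user" ]
      [ "services" "foo" "runAsUser" ])
    (lib.mkRemovedOptionModule
      [ "services" "foo" "legacyMode" ]
      "This option no longer has any effect.")
  ];
}
```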
Example 14.3. Sample template for a module update review
##### Reviewed points
- [ ] changes are backward compatible
- [ ] removed options are declared with `mkRemovedOptionModule`
- [ ] changes that are not backward compatible are documented in release notes
- [ ] module tests succeed on ARCHITECTURE
- [ ] options types are appropriate
- [ ] options description is set
- [ ] options example is provided
- [ ] documentation affected by the changes is updated
##### Possible improvements
##### Comments
New modules submissions introduce a new module to NixOS.
Add labels to the pull-request. (Requires commit rights)
8.has: module (new) and any topic label
that fit the module.
Ensure that the module tests, if any, are succeeding.
Ensure that the introduced options are correct.
Types should be appropriate (string-related types differ
in their merging capabilities; optionSet and
string types are deprecated).
Description, default and example should be provided.
Ensure that the module meta field is
present.
Maintainers should be declared in
meta.maintainers.
Module documentation should be declared with
meta.doc.
Ensure that the module respects other modules' functionality.
For example, enabling a module should not open firewall ports by default.
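A sketch of a module meta field (the maintainer name and documentation file below are illustrative):

```nix
{ config, lib, ... }:

{
  meta = {
    # Hypothetical maintainer handle and documentation file.
    maintainers = with lib.maintainers; [ alice ];
    doc = ./my-module.xml;
  };
  # ... options and config would follow here ...
}
```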
Example 14.4. Sample template for a new module review
##### Reviewed points
- [ ] module path fits the guidelines
- [ ] module tests succeed on ARCHITECTURE
- [ ] options have appropriate types
- [ ] options have default
- [ ] options have example
- [ ] options have descriptions
- [ ] No unneeded package is added to system.environmentPackages
- [ ] meta.maintainers is set
- [ ] module documentation is declared in meta.doc
##### Possible improvements
##### Comments
Other types of submissions require different reviewing steps.
If you consider that you have enough knowledge and experience in a topic and would like to be a long-term reviewer for related submissions, please contact the current reviewers for that topic. They will give you information about the reviewing process. The main reviewers for a topic can be hard to find, as there is no list; however, checking past pull requests to see who reviewed, or git-blaming the code to see who committed to that topic, can give some hints.
Container system, boot system and library changes are some examples of the pull requests fitting this category.
It is possible for community members who have enough knowledge and experience on a special topic to contribute by merging pull requests.
TODO: add the procedure to request merging rights.
If a contributor leaves the Nix community for good, they should create an issue or notify the mailing list with references to the packages and modules they maintain so that maintainership can be taken over by other contributors.
The DocBook sources of the Nixpkgs manual are in the doc
subdirectory of the Nixpkgs repository. If you make modifications to
the manual, it's important to build it before committing. You can do that as follows:
$ cd /path/to/nixpkgs
$ nix-build doc
If the build succeeds, the manual will be in
./result/share/doc/nixpkgs/manual.html.