haveged - A simple entropy daemon

FAQ

What is haveged?
Haveged is an unpredictable random number generator based upon an adaptation of the HAVEGE algorithm.
How does the havege algorithm generate random numbers?
In effect, the havege algorithm gathers the random timing variations encountered in benchmarking a PRNG calculation. The calculation is constructed to be sensitive to the effects of hardware interrupts, and the output is continually reseeded by XOR-ing in the differences between successive iterations of the benchmark.
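As a toy illustration of that idea (Python for brevity; haveged itself is tuned C, and the loop body, iteration count, and pool size here are invented for the sketch), time a benchmark calculation repeatedly and fold the timing differences into a pool with XOR:

```python
import time

def collect(pool_size=32, iterations=2000):
    """XOR timing differences of a repeated benchmark calculation into a pool."""
    pool = [0] * pool_size
    prev = time.perf_counter_ns()
    for i in range(iterations):
        # The "benchmark": a small calculation whose running time is
        # perturbed by caches, branch prediction, and interrupts.
        x = i
        for _ in range(100):
            x = (x * 1103515245 + 12345) & 0xFFFFFFFF
        now = time.perf_counter_ns()
        # Continually reseed the pool with differences of successive timings.
        pool[i % pool_size] ^= (now - prev) & 0xFFFFFFFF
        prev = now
    return pool
```

On real hardware the low-order bits of each delta vary unpredictably from run to run; haveged's actual collection loop is far more elaborate, sized to the host's L1 caches so the calculation itself exercises the hardware state being measured.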
What is the relationship between haveged and havege?
The original havege project is no longer under active development. The haveged project replaces the original havege packaging (a build system for either a lkm or a user-space link library) with a more flexible package. The haveged project supplies a general-purpose executable suitable for the most common use case: a daemon used to feed entropy into the Linux random device. The same executable can also be used to generate output to the file system for testing or other special purposes. Haveged also includes many improvements to the havege concept to extend the range of its application. For very special needs, these improvements are available without the haveged command-line interface in a user link library.
How does haveged improve on havege?
The primary constraints on the havege algorithm are the resource details of the implementation: the size of the host's L1 instruction cache, the size of the host's L1 data cache, and the characteristics of the high-resolution timer source. For havege, all of these details were fixed at build time. Haveged includes all of havege's timer sources and several more, including a generic source based on the clock_gettime system call. The haveged clock source is still fixed at build time, but all other constraints are tuned dynamically at invocation.

Haveged also extends and enhances the original havege test suite, which was based on the NIST test suite. That testing is done at build time. Even if the test suite were 100% reliable (which it is not), relying on build-time testing is extremely dangerous unless the build and execution environments are identical. Run-time testing was introduced by haveged to deal with this situation.

What is the theoretical justification for haveged randomness?
Haveged runs on too wide a range of hardware to permit a detailed theoretical model of the entropy source. However, the general characteristics of the phenomenon I call processor flutter are known (for a look at some of my observations of this mechanism on Intel iron, see the flutter documents). Even havege critics concede that some entropy is likely present in this mechanism; their argument usually becomes that havege is too complex to analyze and is thus open to suspicion.

These same folks will proudly point to a HWRNG with a theoretically modeled hardware entropy source, but neglect to mention that the source probably needs a whitener to produce good random numbers and a monitor to detect whether the hardware has gone south. To construct one of these animals, the circuit is built, the raw hardware bit stream analyzed, a conditioner designed to correct the bit stream, and a monitor created to verify operation.

Compare this to haveged. The exact nature of the hardware bit stream is unknown (my Intel experiments suggest a log-normal distribution of timer variations with an entropy of about 5 bits/byte), the conditioner (the benchmarked code) corrects the stream, and run-time testing verifies operation. What exactly is the functional difference? In either case, verification is the final word on correct operation.

How does haveged run-time testing compare to rng-tools-4?
Assuming you have an entropy source for rngd, the operation of rngd and haveged is similar.

The rngd test suite, FIPS140-2, is run at start-up and continuously on 20000-bit blocks; when a block fails a test, the data is discarded and the test retried, up to a configured limit.

Haveged runs a test suite based upon AIS31. AIS31 consists of a suite of tests to detect 'statistically anomalous behavior', procedure A, and a suite of more theoretical tests, procedure B.

AIS31 procedure A consists of a disjointedness test on a block of 65k*48 bits followed by 257 repetitions of a five-test suite run on successive 20000-bit blocks; the five tests consist of the four tests in FIPS140-1 augmented with an autocorrelation test.
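For a concrete sense of what these block tests look like, here is a sketch (Python, for illustration only) of the monobit test, the simplest of the four FIPS tests: count the 1 bits in a 20000-bit block and require the count to fall inside an acceptance window. The windows shown are the published limits (9725 < ones < 10275 for FIPS140-2, the looser 9654 < ones < 10346 for FIPS140-1):

```python
def monobit(block, fips140_2=True):
    """FIPS monobit test on a 20000-bit (2500-byte) block."""
    assert len(block) == 2500
    # Count the 1 bits in the block.
    ones = sum(bin(b).count("1") for b in block)
    # FIPS140-2 limits are slightly tighter than FIPS140-1.
    lo, hi = (9725, 10275) if fips140_2 else (9654, 10346)
    return lo < ones < hi
```

A constant block fails immediately (0 or 20000 ones) while a balanced block passes; the poker, runs, and long-run tests constrain the structure of the bits further, and the autocorrelation test is the fifth member of the AIS31 suite.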

AIS31 procedure B consists of distribution tests for 10,000 runs of 1-, 2-, 4-, and 8-bit sequences, followed by a 256K-bit entropy estimate (Coron's test).
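To illustrate what an entropy estimate over byte sequences measures, here is a naive empirical Shannon estimate in Python. This is not Coron's statistic (which estimates entropy from the distances between repetitions of blocks, correcting a bias in Maurer's universal test); it is only a simplified stand-in to show the idea of scoring a sample in bits per byte:

```python
import math
from collections import Counter

def bits_per_byte(data):
    """Naive empirical Shannon entropy of a byte sample, in bits/byte."""
    n = len(data)
    # Sum p * log2(1/p) over the observed byte frequencies.
    return sum(c / n * math.log2(n / c) for c in Counter(data).values())
```

A constant sample scores 0 and a uniform sample approaches 8; a raw timer stream like the one noted above (about 5 bits/byte) falls in between, which is why conditioning is needed before output.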

The testing schedule is determined by command-line parameters. The defaults require both procedure A and procedure B to be completed at start-up and all output to pass procedure B incrementally (data is output as long as no individual test failure has occurred in the active test procedure operating on internally buffered data).

FIPS140-1 and FIPS140-2 differ only in their acceptance limits. FIPS140-2 has slightly more stringent limits, but the FIPS140-1 limits are baked into a retry strategy that guarantees a working RNG will not shut down due to a false alarm. The haveged default for internal buffering ensures that no single test failure has occurred in the last ~2MB of generated data. When an error is detected, internal data is discarded until the active test procedure is completed; if only a single error occurred in the test procedure, a retry is initiated and output resumes only after the retry completes successfully. Errors that cannot be recovered by the retry procedure are fatal.

Haveged testing is performed directly on the collection buffer contents. Note that both AIS test procedures require several MB of input to complete (the procedure B requirement depends on the input and is not fixed), and any test sequence including procedure B will not have fixed buffer alignment; such a test sequence should minimize the possibility that buffer-alignment artifacts would go undetected.

Where can I find additional information on the AIS tests?
See AIS31: Funktionalitätsklassen und Evaluationsmethodologie für physikalische Zufallszahlengeneratoren at:
https://www.bsi.bund.de/DE/Themen/ZertifizierungundAnerkennung/ZertifizierungnachCCundITSEC/AnwendungshinweiseundInterpretationen/AIS/aiscc_node.html.
Haveged tests are based upon the reference implementation linked at the above page:
https://www.bsi.bund.de/SharedDocs/Downloads/DE/BSI/Zertifizierung/Interpretationen/AIS_31_testsuit_zip.zip?__blob=publicationFile
What about the claims I have heard that haveged doesn't work as advertised?
The claim is frequently based upon an experiment in which haveged was rebuilt with the timer source replaced by a constant, tested using the check build target, and found to pass all build tests. I have repeated this experiment myself with haveged and verified that certain fixed timer values will pass the build tests, and even the run-time tests under some circumstances; an invariant timer value turns haveged into a decent PRNG (I gave up trying to calculate the PRNG period - it's huge). However, as long as run-time testing is in place, the output must still pass all the tests that a good HWRNG would pass. A HWRNG might malfunction in such a way that it too passes its validity tests. So what is the point? If the output cannot be distinguished from a TRNG at run time, it's a TRNG.

In the usual application, feeding the entropy pool, this is even less of a concern because other entropy collection does not stop while haveged is running. Even a very occasional outside input would ensure unpredictable device output, and that is what the game is really all about.

A security advisory on the polarSSL implementation of havege is also often mentioned. The only thing polarSSL and haveged have in common is the collection macro; polarSSL makes no attempt to tune the collection loop to the host hardware and only warms up the generator with one buffer fill. Haveged attempts to tune its collection loop dynamically to the host environment and also warms up the generator with 32 collection-buffer fills before even attempting the start-up self test.

Why is this 'simple' package so complex?
The two main design goals for haveged were adaptability and simplicity of use. For the vast majority of users, haveged is run-and-forget. The complexities of timer resolution, compiler optimization, and tuning the algorithm to the architecture are all bundled into a single pass-fail proposition: does the generator pass its start-up and run-time tests? For non-mainstream applications, there are build or command options to control the inputs to that proposition: algorithm tuning can be overridden, the buffer size changed, and run-time testing adjusted. Any drastic deviation from the provided defaults should be thoroughly tested.
What resource load does haveged place upon the system?
Haveged uses a 512 KB internal buffer to collect benchmark timing data and statistically test the results before output. Collection takes place only at start-up (generator warm-up and start-up test) and when the 512 KB of collected data has been consumed and a refill is required. When haveged is used as a daemon to feed the /dev/random entropy pool, haveged waits for the pool to fall below the low water mark before adding entropy from the collection buffer to fill the pool to the high water mark. Haveged then remains suspended until the pool again falls to the low water mark.
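On Linux, the watermark state can be observed from user space. The read-only sketch below (Python; the paths are the standard procfs entries) checks whether the pool has fallen below the write wakeup threshold. The refill itself, which haveged performs with the RNDADDENTROPY ioctl on /dev/random, requires root and is only described in a comment:

```python
def pool_state(procdir="/proc/sys/kernel/random"):
    """Return (entropy_avail, low_water_mark, refill_needed) from procfs."""
    def read(name):
        with open(f"{procdir}/{name}") as f:
            return int(f.read())
    avail = read("entropy_avail")
    low_water = read("write_wakeup_threshold")
    # A daemon such as haveged sleeps until the kernel signals that the pool
    # is low, then injects buffered entropy via ioctl(fd, RNDADDENTROPY, ...)
    # until the pool is filled, and goes back to sleep.
    return avail, low_water, avail < low_water
```

Because collection is pure computation, the cost is a short burst of CPU at each refill and essentially nothing in between.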
Should I build haveged or use my distro's binary?
If your distro has a binary that fits your needs, use it. I recommend using a version of haveged with run-time testing (version >= 1.5). If your distro's version is antiquated, the source build procedure is not difficult, and the source distribution offers a lot of flexibility at the cost of some complexity.
What hardware architectures does haveged run on?
Haveged host build variations are:

i*86-*, ia64-*, x86_64-*, powerpc-*, ppc-*, powerpc64-*, ppc64-*, sparc*-*, sparclite*-*, s390*-*

All other host types use the generic build. The generic timer source can also be selected for recognized hosts via a build option. The arm and mips architectures use the generic build.

 
