A good idea with bad usage: /dev/urandom
Last week, I wrote two articles pointing out issues with unofficial porting efforts of LibreSSL, highlighting problems I currently see with some of these projects.
In the second article, I called attention to the poor arc4random_buf() implementations being created, specifically saying: "Using poor sources of entropy like /dev/urandom on Linux, or worse, gettimeofday(), and using them to generate long-lived keys." This seemed to irk some people. Those who want to understand the quoted issue in its proper context should look up the current LibreSSL porting projects they can find via Google and review their source. However, most seem to be reading this as some kind of argument about /dev/random versus /dev/urandom. It isn't. So without further ado:
Poorly seeding a cryptographically secure pseudorandom number generator (CSPRNG)
Many forms of cryptography depend on secrecy, unpredictability, and uniqueness in order to work properly. Therefore, we need a good way to produce many unpredictable values.
Now, randomness doesn't really exist. When we humans see something as random, it's only because we don't know or understand all the details. Any perceived randomness on your part is simply your inability to track all the variables.
Computers are very complex machines, where many different components are all working independently, and in ways that are hard to keep track of externally. Therefore, the operating system is able to collect variables from all the various hardware involved, their current operating conditions, how they handle certain tasks, how long they take, how much electricity they use, and so on. These variables can now be combined together using very confusing and ridiculous algorithms, which essentially throw all the data through a washing machine and the world's worst roller coaster. This result is what we call entropy.
Now, the available entropy might not be that large, and may only provide a small number of unique random values to play with. However, a small number of random values can be enough to create trillions of pseudo-random values. To do so, one uses some of the aforementioned ridiculous algorithms to produce two values: one value is never seen outside the pseudo-random number generator and is used as part of the calculations the next time a random value is requested, while the other value is output as the pseudo-random value generated for that single request.
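To make that state/output split concrete, here is a tiny toy sketch of my own (the names are mine, the constants are SplitMix64's, and it is absolutely not a CSPRNG): the hidden state is advanced on every call, and only a scrambled derivative of it ever leaves the generator.
#include <stdint.h>
//Toy illustration of hidden state vs. emitted value, NOT cryptographically secure
static uint64_t toy_state; //Never exposed outside the generator
void toy_seed(uint64_t entropy)
{
    toy_state = entropy; //The secret seed
}
uint64_t toy_next(void)
{
    toy_state += 0x9E3779B97F4A7C15ULL; //Advance the hidden state
    uint64_t z = toy_state; //Derive the value that actually leaves the generator
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return(z ^ (z >> 31));
}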
A construct of this nature can allow an unlimited amount of "random" values to be generated. As long as this technique never repeats and is unpredictable, then it is cryptographically secure. Of course since the algorithm is known, the entropy seeding it must be a secret, otherwise it is completely predictable.
Different algorithms have different properties. OpenBSD's arc4random set of functions are known to be able to create a very large amount of good random values from a very little amount of entropy. Of course the better entropy it is supplied with, the better it can perform, so you'll always want the best entropy possible. As with any random number generator, supply it with predictable values, and its entire security is negated.
arc4random and /dev/(u)random
So, how does one port the arc4random family to Linux? Linux is well known for inventing and supplying two default files, /dev/random and /dev/urandom (unlimited random). The former is pretty much raw entropy, while the latter is the output of a CSPRNG function like OpenBSD's arc4random family. The former can be seen as more random, and the latter as less random, but the differences are extremely hard to measure, which is why CSPRNGs work in the first place. Since the former is only entropy, it is limited in how much it can output, and someone needing a lot of random data can be stuck waiting a while for the random buffer to fill up. Since the latter is a CSPRNG, it can keep outputting data indefinitely, without any significant waiting periods.
Now, theoretically, one can make the arc4random_buf() function a wrapper around /dev/urandom and be done with it. The only reason not to is that one may trust the arc4random set of algorithms more than /dev/urandom. In that case, would /dev/urandom be trusted enough to seed the arc4random algorithms, which are then in turn used for many other outputs, some of which end up in RSA keys, SSH keys, and so on? I'll leave that question to the cryptographic experts. But I'll show you how to use /dev/urandom poorly, how to attack the design, and how to correct those mistakes.
First, take a look at how one project decided to handle the situation. It tries to use /dev/urandom, and in the worst-case scenario falls back on gettimeofday() and other predictable data. Also, if the read() call doesn't return as much as requested for some reason, but did return a decent chunk of it, the gettimeofday() and getpid() calls will overwrite what was returned.
This very much reminds me of why the OpenBSD team removed the OpenSSL techniques in the first place. You do not want to use a time function as your primary source of entropy, nor range-limited values, and other dumb things. If you're going to use time, at the very least use clock_gettime(), which provides time resolution 1,000 times finer than gettimeofday(), and can provide both the current time and monotonic time (time which cannot go backwards). Additionally, on 64-bit systems both gettimeofday() and clock_gettime() return values where the top half of the data will be zeroed out, so make sure you throw that away and don't just use the raw data verbatim. Further, the way this one project uses /dev/urandom, like most other projects, is terrible.
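To illustrate that advice with a rough sketch of my own (the function name is hypothetical, and this is strictly a last-resort supplement, not a source of real entropy), one could mix the realtime and monotonic clocks from clock_gettime() and XOR them into the buffer:
#include <stdint.h>
#include <stddef.h>
#include <time.h>
//Last-resort fallback only: clock readings are NOT real entropy
static void fallback_clock_mix(uint8_t *buf, size_t len)
{
    struct timespec rt, mono;
    clock_gettime(CLOCK_REALTIME, &rt); //Wall clock, nanosecond resolution
    clock_gettime(CLOCK_MONOTONIC, &mono); //Monotonic clock, cannot go backwards
    //Fold the fields together; only the low bits really vary, the high bits are mostly zero
    uint64_t mix = (uint64_t)rt.tv_nsec ^ ((uint64_t)mono.tv_nsec << 20) ^ ((uint64_t)rt.tv_sec << 40) ^ (uint64_t)mono.tv_sec;
    for (size_t i = 0; i < len; ++i)
    {
        buf[i] ^= (uint8_t)(mix >> ((i % sizeof(mix)) * 8)); //XOR in, so existing data is supplemented, never overwritten
    }
}
Note the XOR: the clock data only supplements whatever was already gathered, avoiding exactly the overwriting mistake described above.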
Attacking /dev/(u)random usage
A naive approach to using /dev/urandom (and /dev/random) is as follows:
int fd = open("/dev/urandom", O_RDONLY);
if (fd != -1)
{
uint8_t buffer[40];
if (read(fd, buffer, sizeof(buffer)) == sizeof(buffer))
{
//Do what needs to be done with the random data...
}
else
{
//Read error handling
}
close(fd);
}
else
{
//Open error handling
}
This tries to open the file, ensures it was opened, tries to read 40 bytes of data, and continues with what it needs to do in each scenario. Note that 40 bytes is the amount arc4random wants, which also happens to be 8 bytes more than the Linux manual page says you should be reading at a time from its random device.
The first common mistake here is using read() like this. read() can be interrupted. Normally it can't be interrupted for regular files, but this device is not a regular file. Some random device implementations specifically document that reads from them can be interrupted.
So now our second attempt:
//Like read(), but keep reading upon interruption, until everything possible is read
ssize_t insane_read(int fd, void *buf, size_t count)
{
ssize_t amount_read = 0;
while ((size_t)amount_read < count)
{
ssize_t r = read(fd, (char *)buf+amount_read, count-amount_read);
if (r > 0) { amount_read += r; }
else if (!r) { break; }
else if (errno != EINTR)
{
amount_read = -1;
break;
}
}
return(amount_read);
}
int success = 0;
int fd = open("/dev/urandom", O_RDONLY);
if (fd != -1)
{
uint8_t buffer[40];
ssize_t amount = insane_read(fd, buffer, sizeof(buffer)); //Grab as much data as we possibly can
close(fd);
if (amount > 0)
{
if (amount < sizeof(buffer))
{
//Continue filling with other sources
}
//Do what needs to be done with random data...
success = 1; //Yay!
}
}
if (!success)
{
//Error handling
}
With this improved approach, we know we're reading as much as possible, and if we come up short, we can try using lesser techniques to fill in the missing entropy. So far so good, right?
Now, let me ask you, why would opening /dev/(u)random fail in the first place? First, it's possible the open was interrupted, as may happen on some implementations. So the open() call should probably be wrapped like read() is. In fact, you might consider switching to the C family fopen() and fread() calls, which handle these kinds of problems for you. However, opening could also be failing because the application has reached its file descriptor limit, a problem which is even more prevalent with the C family of file functions. Another possibility is that the file doesn't even exist. Go ahead, try to delete the file as the superuser; nothing stops you. You also have to consider that applications may be running inside a chroot, where /dev/ entries may not exist at all.
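Since the later examples call an insane_open(), here is a minimal sketch of what such a wrapper might look like (the name is mine, chosen to match insane_read() above, and this is a sketch rather than a definitive implementation): it simply retries the open() whenever it is interrupted by a signal.
#include <fcntl.h>
#include <errno.h>
//Like open(), but retry when interrupted by a signal
int insane_open(const char *path, int flags)
{
    int fd;
    do
    {
        fd = open(path, flags);
    } while ((fd == -1) && (errno == EINTR));
    return(fd);
}
This still does nothing about descriptor limits or a missing device node; those failures have to be handled by the caller.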
I'll cover some alternative approaches for the above problems later. But if you managed to open and read all the data needed, everything is great, right? Wrong! How do you even know /dev/(u)random is random in the first place? This may sound like a strange question, but it isn't. You can't just trust a file because of its path. Consider what happens if an attacker ran the following:
void sparse_1gb_overwrite(const char *path)
{
int fd;
char byte = 0;
//No error checking
unlink(path);
fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
lseek(fd, 1073741822, SEEK_SET);
write(fd, &byte, 1);
close(fd);
}
sparse_1gb_overwrite("/dev/random");
sparse_1gb_overwrite("/dev/urandom");
Now both random devices are actually large sparse files with known data. This is worse than not having access to these files at all; in fact, the attacker is able to provide you with a seed of his own choosing! A strong cryptographic library should not just assume everything is in a proper state. Indeed, if you're using a chroot, you're already admitting you don't trust what will happen on the file system, and you want to isolate some applications from the rest of the system.
So the next step is to ensure that the so-called device you're opening is actually a device:
int success = 0;
int fd = insane_open("/dev/urandom", O_RDONLY);
if (fd != -1)
{
struct stat stat_buffer;
if (!fstat(fd, &stat_buffer) && S_ISCHR(stat_buffer.st_mode)) //Make sure we opened a character device!
{
uint8_t buffer[40];
ssize_t amount = insane_read(fd, buffer, sizeof(buffer)); //Grab as much data as we possibly can
if (amount > 0)
{
if (amount < sizeof(buffer))
{
//Continue filling with other sources
}
//Do what needs to be done with random data...
success = 1; //Yay!
}
}
close(fd);
}
if (!success)
{
//Error handling
}
So now we're out of the woods, right? Unfortunately not yet. How do you know you opened the correct character device? Maybe /dev/urandom is symlinked to /dev/zero? You can run lstat() on /dev/urandom initially, but that has time-of-check to time-of-use (TOCTOU) issues. We can add the FreeBSD/Linux extension O_NOFOLLOW to the open() call, but then /dev/urandom can't be used when it's linked to /dev/random as it is on FreeBSD, or linked to some other location entirely as on Solaris. Furthermore, avoiding symlinks is not enough:
void dev_zero_overwrite(const char *path)
{
unlink(path);
mknod(path, S_IFCHR | 0644, makedev(1, 5));
}
dev_zero_overwrite("/dev/random");
dev_zero_overwrite("/dev/urandom");
If an attacker manages to run the above code on Linux, the random devices are now both in their very essence /dev/zero!
Here's a list of device numbers for the random devices on various Operating Systems:
(major:minor) | Linux | FreeBSD | DragonFlyBSD | NetBSD | OpenBSD | Solaris
/dev/random   | 1:8   | 0:10    | 8:3          | 46:0   | 45:0    | 0:0
/dev/urandom  | 1:9   |         | 8:4          | 46:1   | 45:2    | 0:1
/dev/srandom  |       |         |              |        | 45:1    |
/dev/arandom  |       |         |              |        | 45:3    |
If your application is running with Superuser privileges, you can actually create these random devices anywhere on the fly:
int result = mknod("/tmp/myurandom", S_IFCHR | 0400, makedev(1, 9)); //Create "/dev/urandom" on Linux
Of course, after opening, you want to ensure that you're using what you think you're using:
int success = 0;
int fd = insane_open("/dev/urandom", O_RDONLY);
if (fd != -1)
{
struct stat stat_buffer;
if (!fstat(fd, &stat_buffer) && S_ISCHR(stat_buffer.st_mode) &&
((stat_buffer.st_rdev == makedev(1, 8)) || (stat_buffer.st_rdev == makedev(1, 9)))) //Make sure we opened a random device
{
uint8_t buffer[40];
ssize_t amount = insane_read(fd, buffer, sizeof(buffer)); //Grab as much data as we possibly can
if (amount > 0)
{
if (amount < sizeof(buffer))
{
//Continue filling with other sources
}
//Do what needs to be done with random data...
success = 1; //Yay!
}
}
close(fd);
}
The Linux manual page for the random devices explicitly informs the reader of these magic numbers, so hopefully they won't change. I have no official sources for the magic numbers on the other OSs. Now, you'll notice that I checked here that the file descriptor was opened to either Linux random device. A system administrator may for some reason replace one with the other, so don't rely on a properly configured system exposing the expected device under the device name you're trying to use.
This brings us back to /dev/random versus /dev/urandom. Different OSs may implement these differently. On FreeBSD, for example, there is only the former, and it is a CSPRNG. MirBSD offers five different random devices with all kinds of semantics, and who knows how a sysadmin may shuffle them around. On Linux and possibly others, /dev/urandom has a fatal flaw: it may not have been seeded properly yet (for example, early in the boot process), so blind usage of it isn't a good idea either. Thankfully, Linux and NetBSD offer the following:
int data;
int result = ioctl(fd, RNDGETENTCNT, &data); //Upon success data now contains amount of entropy available in bits
This ioctl() call will only work on a random device, so you can use this instead of the fstat() call on these OSs. You can then check data to ensure there's enough entropy to do what you need to:
int success = 0;
int fd = insane_open("/dev/urandom", O_RDONLY);
if (fd != -1)
{
uint8_t buffer[40];
int entropy;
if (!ioctl(fd, RNDGETENTCNT, &entropy) && (entropy >= (sizeof(buffer) * 8))) //This ensures it's a random device, and there's enough entropy
{
ssize_t amount = insane_read(fd, buffer, sizeof(buffer)); //Grab as much data as we possibly can
if (amount > 0)
{
if (amount < sizeof(buffer))
{
//Continue filling with other sources
}
//Do what needs to be done with random data...
success = 1; //Yay!
}
}
close(fd);
}
However, there may be a TOCTOU race between the ioctl() and the read() call; I couldn't find any data on this, so take the above with a grain of salt. This used to work on OpenBSD too, but they removed the RNDGETENTCNT ioctl() command a couple of versions back.
Linux has one other gem which may make you want to run away screaming. Look at the following from the manual for random device ioctl()s:
RNDZAPENTCNT, RNDCLEARPOOL
Zero the entropy count of all pools and add some system data (such as wall clock) to the pools.
If that last one does what I think it does, is any usage ever remotely safe?
/dev/(u)random conclusion
After all this, we now know the following:
- This is hardly an interface which is easy to use correctly (and securely).
- On some OSs there may not be any way to use it correctly (and securely).
- Applications not running with Superuser privileges may have no way to access the random device.
- An application at its file descriptor limit cannot use it.
- There are many portability concerns.
For these reasons, OpenBSD created arc4random_buf() in the first place. It doesn't suffer from the above problems. The other BSDs have also copied it, although they may be running older, less secure implementations of it.
Alternatives
In addition to a CSPRNG in userspace, the BSDs also allow for a way to get entropy directly from the kernel:
#define NUM_ELEMENTS(x) (sizeof(x)/sizeof((x)[0]))
uint8_t buffer[40];
size_t len = sizeof(buffer);
int mib[] = { CTL_KERN, KERN_RND };
int result = sysctl(mib, NUM_ELEMENTS(mib), buffer, &len, 0, 0);
KERN_RND may also be replaced with KERN_URND, KERN_ARND, and others on the various BSDs. FreeBSD also has a similar interface for determining whether the kernel's CSPRNG is properly seeded; see the FreeBSD manual pages for more details. DragonFlyBSD also provides read_random() and read_random_unlimited() as direct, no-nonsense interfaces to the underlying devices for /dev/random and /dev/urandom.
Now Linux used to provide a similar interface as the BSDs:
#define NUM_ELEMENTS(x) (sizeof(x)/sizeof((x)[0]))
uint8_t buffer[40];
size_t len = sizeof(buffer);
int mib[] = { CTL_KERN, KERN_RANDOM, RANDOM_UUID };
int result = sysctl(mib, NUM_ELEMENTS(mib), buffer, &len, NULL, 0);
However, this interface was removed a couple of versions back. It kind of makes you wonder why there is no sane, simple-to-use, standardized API available everywhere that doesn't depend on a house of cards. This is the kind of thing you would expect to be standardized in POSIX.
Conclusion
There do not seem to be any good cross-platform techniques out there. Anything done with the current technology will probably require a ton of conditional compiles, annoying checks, and ridiculous workarounds. Still, there should be enough of a basis here to come up with something with a high confidence level. It'd be interesting to see what the LibreSSL team (or the OpenSSH team) comes up with when they get around to porting what's needed here. As I said before, until then, avoid the unofficial ports out there, which use poor sources of entropy, like an insecure use of /dev/urandom on Linux that falls back on gettimeofday() and is used to generate long-lived keys.