Tuesday, February 22, 2011

Are your files SM? M? L? XL? Mid-Size

What I actually had in mind when I came up with the challenge was something like the following ... the sort of thing you find in SysAdmin magazine or Randal Schwartz's columns.



#!/usr/local/bin/perl5.10

use 5.010;
use strict;
use warnings;

use Readonly;

Readonly my $DOT              => q{.};
Readonly my $DOTDOT           => q{..};
Readonly my $ESCAPE_BACKSLASH => q{\\};

die "USAGE: $0 rootdir.\n" unless @ARGV;

my @dirs = ( @ARGV );

my %stats;
while (@dirs) {
    my $one_dir = shift @dirs;
    $one_dir =~ s{(\s)}{$ESCAPE_BACKSLASH$1}g;    # escape spaces for glob()

    ENTRY:
    while ( my $entry = glob "$one_dir/*" ) {
        next ENTRY if $entry eq $DOT or $entry eq $DOTDOT;
        if ( -d $entry ) {
            push @dirs, $entry;
        }
        else {
            my $size = -s _;    # reuse the stat buffer from -d above
            my $len  = $size == 0 ? 0 : length $size;
            $stats{$len}++;
        }
    }
}

for my $size ( sort { $a <=> $b } keys %stats ) {
    my $maxsize = 10**$size;
    say sprintf( '<%8d %d', $maxsize, $stats{$size} );
}


Starting with one or more directories specified as command-line arguments, process all the directory contents: ignore '.' and '..'; add subdirectories to the queue of directories waiting to be processed; and for each file, take the number of digits in its size (effectively its log10) and increment the associated count.

For each digit count encountered, in increasing order, convert it to an (unreachable) maximum size, and print that limit and the number of files below it.


I can do without the File::Find module; the task at hand is pretty simple. On the other hand, my tolerance for ugly punctuation has dropped in the past few years, so I need Readonly. Without that, it becomes ...


my %stats;
while (@dirs) {
    my $one_dir = shift @dirs;
    $one_dir =~ s{(\s)}{\\$1}g;    # escape spaces for glob()

    ENTRY:
    while ( my $entry = glob "$one_dir/*" ) {
        next ENTRY if $entry eq q{.} or $entry eq q{..};


The dots would be more tolerable with an SQL 'in' operator, or a Perl6 Junction:



use Perl6::Junction qw/any/;
...
ENTRY:
while ( my $entry = glob "$one_dir/*" ) {
    next ENTRY if $entry eq any( q{.}, q{..} );


Using a subroutine to localize the ugliness would make the double escape bearable.



sub escape_space { my ($dir) = @_; $dir =~ s{(\s)}{\\$1}g; return $dir; }

my %stats;
while (@dirs) {
    my $one_dir = escape_space shift @dirs;

    ENTRY:
    while ( my $entry = glob "$one_dir/*" ) {
        next ENTRY if $entry eq any( q{.}, q{..} );



So the final result is down to 35 lines, including blanks and closing curlies.

#!/usr/local/bin/perl5.10

use 5.010;
use strict;
use warnings;

use Perl6::Junction qw/any/;

sub escape_space { my ($dir) = @_; $dir =~ s{(\s)}{\\$1}g; return $dir; }

die "USAGE: $0 rootdir.\n" unless @ARGV;

my @dirs = ( @ARGV );

my %stats;
while (@dirs) {
    my $one_dir = escape_space shift @dirs;

    ENTRY:
    while ( my $entry = glob "$one_dir/*" ) {
        next ENTRY if $entry eq any( q{.}, q{..} );
        if ( -d $entry ) {
            push @dirs, $entry;
        }
        else {
            my $size = -s _;    # reuse the stat buffer from -d above
            my $len  = $size == 0 ? 0 : length $size;
            $stats{$len}++;
        }
    }
}

for my $size ( sort { $a <=> $b } keys %stats ) {
    my $maxsize = 10**$size;
    say sprintf( '<%8d %d', $maxsize, $stats{$size} );
}

Are your files SM? M? L? XL? Kwick-N-EZ

When I first thought up the programming exercise I described last week in Are your files SM? M? L? XL?, my intention was to have a trivial exercise for applicants to carry out. HR was passing through lots of applicants who had detailed database knowledge, but were not at all programmers. They couldn't name simple Unix commands, couldn't talk about how to carry out a task in Perl or shell or as a pipeline of Unix commands. I thought this exercise would be simple for any experienced programmer to carry out, never mind style or performance points.

Shortly after I came up with the idea, I realized it could mostly be done as a Unix pipeline.



find ~/ -type f -printf "%s\n" |
perl5.10 -n -E 'say length' |
sort -n |
uniq -c |
perl5.10 -n -E '
    my ($fill, $count, $size) = split /\s+/;
    my $exp = 10**($size - 1);
    say "$exp $count";
'



Although I hadn't used the option before, man find indicated that find could indeed print the size of the file and nothing else. Trying to write this article on my home machine, I discovered that -printf is a GNU find feature, not available on the Mac. So on other machines you may need to do more: perhaps use ls -l, or have find print the number of blocks a file takes up ... less accurate, less complete, but sufficient for a quick proof of concept.
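One portable sketch, assuming only POSIX find, wc, and awk (none of this is in the original pipeline), that emits one byte count per line:

```shell
# One wc -c invocation per file (the \; form avoids the "total" line
# wc prints when handed several files at once); awk keeps just the
# byte count and drops the file name.
find . -type f -exec wc -c {} \; | awk '{print $1}'
```

It is slower than -printf, since it forks wc once per file, but it produces the same one-size-per-line stream the rest of the pipeline expects.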



So find is printing a series of file sizes, one per line. My original thought was to take the logarithm of the size and truncate it to an integer. But Perl's log will only calculate loge, so I would need to divide that by loge 10 myself. After clobbering myself over the head for ten minutes trying to achieve that, I realized that the number of digits in the size IS one more than the integer portion of log10. perl -n reads the input line by line and applies the -e expression to each line. Specifying perl5.10 (or later) and using -E instead of -e allows me to use say instead of print, saving a few characters, avoiding a \n, and sparing an explicit $_. I SHOULD chomp the newline off the input before taking its length, but I can simply subtract 1 instead. I could subtract it now; I found it easier to do it later.


The output of the Perl component is a series of lines, each with a number specifying how many digits appear in the file length. sort orders them, obviously, and uniq -c replaces multiple instances of a value with a single instance preceded by the number of times that value appears.
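A tiny demonstration of that middle stage, with made-up digit counts:

```shell
# Three files with 2-digit sizes and one with a 3-digit size collapse
# into count/value pairs: "3 2" and "1 3" (uniq -c pads the count with
# implementation-dependent leading spaces).
printf '2\n3\n2\n2\n' | sort | uniq -c
```

Those leading spaces are the reason the next stage needs a throwaway field when it splits the line.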


Little Lord Fauntleroy would chomp the newline off the end of each line and eliminate the leading spaces used by uniq -c. But I'm planning to split each line on space characters to separate the count and value fields. By splitting on one-or-more spaces, the leading spaces, however many there may be, generate a single leading field with no data, which I just ignore. In real code I would parenthesize the right-hand expression and use square brackets to slice off the values I want; in a one-liner, it's simpler to add a dummy variable. Use the digit count as an exponent to obtain an unreachable upper limit ... and don't forget to drop the value by one, to make up for counting the newline a few stages back. A test with an empty file, or at least one with fewer than ten bytes in it, will remind you to make that adjustment. All that's left is to output the results.

Wednesday, February 16, 2011

Are your files SM? M? L? XL?

Twenty years ago I was on a co-op work term where a Sun SPARCstation 10 was shared among six NCD X-terminals. The system had 32 MB of memory shared among the users, with 1 GB of hard drive storage in total. Today I have 12 GB of memory and many terabytes of hard drive ... programming experiments, video, music, and all my photography. The largest files are larger today than in the past ... I have performance-profiling data from the Sudoku experiments I wrote about that is bigger than the total file system I worked with twenty years ago.

But what's more important: small files or large?

Very large files will be rare; otherwise you would run out of space. Very small files may be very important, but even a large number of them will not take up much space. Most space will be devoted to something in between ... but where exactly is the bulk of storage devoted?

The (mental) challenge is to determine how file size is distributed, given some starting directory. My opinion is that exponential categories are appropriate, that is, 10..99 bytes, 100..999 bytes, 1000..9999 bytes, etc. Categorizing and formatting is a personal choice, determined in part by what is convenient, so long as useful information is displayed.