Searching on the internet, I came across different solutions, which I can group into three approaches:
- naive ones that use the `file()` function;
- cheating ones that run the `tail` command on the system;
- mighty ones that happily jump around an opened file using `fseek()`.
I ended up choosing (or writing) five solutions: a naive one, a cheating one, and three mighty ones.
1. The most concise naive solution, using built-in array functions.
2. The only possible solution based on the `tail` command, which has a little big problem: it does not run if `tail` is not available, i.e. on non-Unix systems (Windows) or in restricted environments that don't allow system functions.
3. A solution that reads single bytes from the end of the file, searching for (and counting) newline characters, found here.
4. A multi-byte buffered solution optimized for large files, found here.
5. A slightly modified version of solution #4 in which the buffer length is dynamic, decided according to the number of lines to retrieve.
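To make the comparison concrete, here is a minimal sketch of the buffered seek-back idea behind solutions #4 and #5 (my own simplified rewrite, not the exact code from the linked sources; the function name `tailBuffered` and the buffer-size heuristic are assumptions):

```php
<?php
// Sketch: read fixed-size chunks backwards from the end of the file,
// accumulating data until it contains at least $lines newlines.
function tailBuffered($filepath, $lines = 10, $buffer = null)
{
    // Dynamic buffer (the solution #5 twist): a smaller buffer for few lines.
    if ($buffer === null) {
        $buffer = ($lines < 10) ? 512 : 4096;
    }
    $f = fopen($filepath, "rb");
    if ($f === false) {
        return false;
    }
    fseek($f, 0, SEEK_END);
    $pos = ftell($f);
    $output = '';
    while ($pos > 0 && substr_count($output, "\n") <= $lines) {
        $seek = min($pos, $buffer);
        $pos -= $seek;
        fseek($f, $pos);
        $output = fread($f, $seek) . $output;
    }
    fclose($f);
    // Trim a trailing newline, then keep only the last $lines lines.
    $allLines = explode("\n", rtrim($output, "\n"));
    return implode("\n", array_slice($allLines, -$lines));
}
```

The key point is that only a few kilobytes are ever read, no matter how large the file is: the cost depends on the number of lines requested, not on the file size.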
All solutions work, in the sense that they return the expected result from any file and for any number of lines we ask for (except for solution #1, which can exceed PHP's memory limit on large files, returning nothing). But which one is the best?
To answer the question I ran tests. That's how these things are done, isn't it?
I prepared a sample 100 KB file by joining together different files found in the
/var/log directory. Then I wrote a PHP script that uses each of the
five solutions to retrieve 1, 2, ..., 10, 20, ..., 100, 200, ..., 1000 lines
from the end of the file. Each single test is repeated ten times (that's
something like 5 × 28 × 10 = 1,400 tests), measuring the average elapsed
time in microseconds.
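The timing harness can be sketched roughly like this (a simplified stand-alone illustration, not my actual test script; the naive solution is inlined as `tailNaive` so the sketch runs on its own):

```php
<?php
// Naive solution (#1), inlined for illustration: load the whole file
// into an array and slice off the last $lines entries.
function tailNaive($filepath, $lines)
{
    return implode("", array_slice(file($filepath), -$lines));
}

// Create a small sample file so the sketch is runnable stand-alone.
file_put_contents('sample.log', str_repeat("sample log line\n", 2000));

// 1..10, 20..100, 200..1000: the 28 line counts used in the tests.
$lineCounts = array_merge(range(1, 10), range(20, 100, 10), range(200, 1000, 100));

foreach ($lineCounts as $n) {
    $repetitions = 10;
    $start = microtime(true);
    for ($i = 0; $i < $repetitions; $i++) {
        tailNaive('sample.log', $n);
    }
    // Average elapsed time per call, in microseconds.
    $avg = (microtime(true) - $start) / $repetitions * 1e6;
    printf("%4d lines: %8.1f us\n", $n, $avg);
}
```

Swapping `tailNaive` for any of the other four solutions gives the corresponding timing series.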
I ran the script on my local development machine (Xubuntu 12.04, PHP 5.3.10, 2.70 GHz dual-core CPU, 2 GB RAM) using the PHP command-line interpreter. Here are the results:
Solutions #1 and #2 seem to be the worst ones. Solution #3 is good only when we need to read a few lines. Solutions #4 and #5 seem to be the best ones. Note how a dynamic buffer size can optimize the algorithm: execution time is a little smaller when reading a few lines, thanks to the reduced buffer.
Let's try with a bigger file. What if we have to read a 10 MB log file?
Now solution #1 is by far the worst one: loading a whole 10 MB file into memory is not a great idea. I ran the tests also on 1 MB and 100 MB files, and the situation is practically the same.
And for tiny log files? That's the graph for a 10 KB file:
Solution #1 is the best one now! Loading a 10 KB file into memory isn't a big deal for PHP. Solutions #4 and #5 perform well too. However, this is an edge case: a 10 KB log means something like 150-200 lines...
You can download all my test files, sources and results here.
Solution #5 is heavily recommended for the general use case: it works great with every file size and performs particularly well when reading a few lines.
Avoid solution #1 if you need to read files bigger than 10 KB.
Solutions #2 and #3 weren't the best in any test I ran: #2 never completes in less than 2 ms, and #3 is heavily influenced by the number of lines requested (it works quite well only with 1 or 2 lines).