Thursday, December 6, 2007

STL strings vs C strings for parsing

I'm working on a project where I need to build a custom high-performance HTTP server. One piece of this server is a parser for URLs in incoming requests. It is very simple, and at first glance it shouldn't be that slow compared with other parts of the server. Yet according to the profiler it was taking quite a lot of CPU. The parser uses the STL and basically does several string::find() calls to locate the parts of the URL. So I thought maybe string::find() is too slow and decided to benchmark it against strchr().
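
Just to give an idea of the shape of that code, here is a minimal sketch of the kind of find()-based splitting I mean; the ParsedUrl struct and its fields are made up for illustration and are not the actual parser.

#include <algorithm>
#include <string>

// Hypothetical example of splitting a URL with string::find();
// a real parser has to handle many more cases.
struct ParsedUrl {
    std::string path;
    std::string query;    // text after '?', if any
    std::string fragment; // text after '#', if any
};

ParsedUrl parseUrl(const std::string& url) {
    ParsedUrl out;

    std::string::size_type hash = url.find('#');
    if (hash != std::string::npos)
        out.fragment = url.substr(hash + 1);

    std::string::size_type question = url.find('?');
    if (question != std::string::npos && question < hash)
        out.query = url.substr(question + 1,
                               hash == std::string::npos ? std::string::npos
                                                         : hash - question - 1);

    out.path = url.substr(0, std::min(question, hash));
    return out;
}

This is my benchmark code: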


#include <string.h>
#include <string>
#include <time.h>
#include <iostream>

using std::string;
using std::cout;

int main() {
    const char* str1 = " a ";
    const string& str2 = str1;

    const unsigned long iterations = 500000000l;

    {
        clock_t start = clock();

        for (unsigned long i = 0; i < iterations; ++i) {
            char* pos = strchr(str1, 'a');
        }

        clock_t end = clock();
        double totalTime = ((double) (end - start)) / CLOCKS_PER_SEC;
        double iterTime = totalTime / iterations;
        double rate = 1 / iterTime;

        cout << "Total time: " << totalTime << " sec\n";
        cout << "Iterations: " << iterations << " it\n";
        cout << "Time per iteration: " << iterTime * 1000 << " msec\n";
        cout << "Rate: " << rate << " it/sec\n";
    }

    {
        clock_t start = clock();

        for (unsigned long i = 0; i < iterations; ++i) {
            string::size_type pos = str2.find('a');
        }

        clock_t end = clock();
        double totalTime = ((double) (end - start)) / CLOCKS_PER_SEC;
        double iterTime = totalTime / iterations;
        double rate = 1 / iterTime;

        cout << "Total time: " << totalTime << " sec\n";
        cout << "Iterations: " << iterations << " it\n";
        cout << "Time per iteration: " << iterTime * 1000 << " msec\n";
        cout << "Rate: " << rate << " it/sec\n";
    }
}

It turns out strchr() is much faster, as long as the benchmark code is compiled with optimizations on:

ilya@denmark:~$ g++ -O3 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 15.5 sec
Iterations: 500000000 it
Time per iteration: 3.1e-05 msec
Rate: 3.22581e+07 it/sec

ilya@denmark:~$ g++ -O2 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 15.76 sec
Iterations: 500000000 it
Time per iteration: 3.152e-05 msec
Rate: 3.17259e+07 it/sec

ilya@denmark:~$ g++ -O1 test.cc && ./a.out
Total time: 0 sec
Iterations: 500000000 it
Time per iteration: 0 msec
Rate: inf it/sec
Total time: 19.23 sec
Iterations: 500000000 it
Time per iteration: 3.846e-05 msec
Rate: 2.6001e+07 it/sec

ilya@denmark:~$ g++ -O0 test.cc && ./a.out
Total time: 18.64 sec
Iterations: 500000000 it
Time per iteration: 3.728e-05 msec
Rate: 2.6824e+07 it/sec
Total time: 16.89 sec
Iterations: 500000000 it
Time per iteration: 3.378e-05 msec
Rate: 2.96033e+07 it/sec

I checked the same code with callgrind, and from the call graph it looks like the strchr() call was inlined while string::find() wasn't. That could be the reason for the difference in performance. Maybe the compiler is even smarter and optimized the whole strchr() loop away. I'm not sure the benchmark is completely fair. Anyway, one thing is certain: I should try to rewrite my URL parser using strchr() and see if the real code is faster.
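
One way to double-check the dead-code suspicion (a sketch I haven't run, not part of the numbers above) would be to make the loop's result observable, e.g. by replacing the body of the first timing block with something like:

unsigned long hits = 0;

for (unsigned long i = 0; i < iterations; ++i) {
    const char* pos = strchr(str1, 'a');
    hits += (pos != 0); // use the result so the loop cannot be eliminated
}

cout << "Hits: " << hits << "\n"; // printing hits keeps the work observable

Even then, if str1 is a literal GCC may still fold the strchr() call itself at compile time, so this only rules out the loop being thrown away entirely.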

Update: As an anonymous commenter pointed out, it looks as though GCC replaces the call to strchr() with a compiler builtin, notices that str1 points to a literal, and does the search at compile time. To make the benchmark fair, str1 should be supplied at runtime to prevent that optimization. I tried that (passed the string via argv) and it does change the result. It seems that for short strings C strings are faster and for long strings STL strings are faster. I'm not quite sure why yet.
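
Another option, which I haven't tried, would be to keep the literal but tell GCC not to treat strchr() as a builtin, which should prevent the compile-time search:

g++ -O3 -fno-builtin-strchr test.cc && ./a.out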

For reference, here is the updated benchmark code:

#include <string.h>
#include <string>
#include <time.h>
#include <iostream>

using std::string;
using std::cout;

int main(int argc, char** argv) {
    const char* str1 = argv[1];
    const string& str2 = argv[1];

    const unsigned long iterations = 500000000l;

    {
        clock_t start = clock();

        for (unsigned long i = 0; i < iterations; ++i) {
            char* pos = strchr(str1, 'a');
        }

        clock_t end = clock();
        double totalTime = ((double) (end - start)) / CLOCKS_PER_SEC;
        double iterTime = totalTime / iterations;
        double rate = 1 / iterTime;

        cout << "Total time: " << totalTime << " sec\n";
        cout << "Iterations: " << iterations << " it\n";
        cout << "Time per iteration: " << iterTime * 1000 << " msec\n";
        cout << "Rate: " << rate << " it/sec\n";
    }

    {
        clock_t start = clock();

        for (unsigned long i = 0; i < iterations; ++i) {
            string::size_type pos = str2.find('a');
        }

        clock_t end = clock();
        double totalTime = ((double) (end - start)) / CLOCKS_PER_SEC;
        double iterTime = totalTime / iterations;
        double rate = 1 / iterTime;

        cout << "Total time: " << totalTime << " sec\n";
        cout << "Iterations: " << iterations << " it\n";
        cout << "Time per iteration: " << iterTime * 1000 << " msec\n";
        cout << "Rate: " << rate << " it/sec\n";
    }
}

Meanwhile I rewrote my URL parser using C strings and it is now about 2x faster. I guess the speedup comes from the fact that with C strings I can minimize the number of memory allocations for temporary strings. With a C string, if you want to take a substring of another string, you can just take a pointer into the middle of the original and place a '\0' where the substring should end, provided you don't mind destroying the original string. With STL (ok, standard) strings you cannot do this and have to make copies.
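
To illustrate that trick, here is a minimal sketch (assuming the buffer is writable and you don't need the original string afterwards; the example URL is made up):

#include <stdio.h>
#include <string.h>

int main() {
    // Must be a writable buffer: the split below clobbers it in place.
    char url[] = "/search?q=test";

    char* query = strchr(url, '?');
    if (query != NULL) {
        *query = '\0'; // terminate the path part in place
        ++query;       // query now points at "q=test"
    }

    // No allocations and no copies: both pieces live in the original buffer.
    printf("path:  %s\n", url);                        // prints "/search"
    printf("query: %s\n", query != NULL ? query : ""); // prints "q=test"
    return 0;
}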