Is cin much slower than scanf in C++?
I frequently hear that cin is significantly slower than scanf in C++. Is this true? And how can I improve the efficiency of cin? It is really nice to use most of the time.
One discussion claiming that cin is very slow is here: http://apps.topcoder.com/forums/?module=Thread&threadID=508058&start=0&mc=7
In short: cin is not always slower than scanf (it can actually be faster; see below). Add this piece of code and avoid using scanf/printf entirely:

std::ios::sync_with_stdio(false);
The special steps taken by libstdc++, at least for version 3.0, involve doing very little buffering for the standard streams, leaving most of the buffering to the underlying C library. (This kind of thing is tricky to get right.) The upside is that correctness is ensured. The downside is that writing through cout can quite easily lead to awful performance when the C++ I/O library is layered on top of the C I/O library (as it is for 3.0 by default).
However, the C and C++ standard streams only need to be kept in sync when both libraries’ facilities are in use. If your program only uses C++ I/O, then there’s no need to sync with the C streams. The right thing to do in this case is to #include any of the I/O headers (such as ios or iostream) and then call

std::ios::sync_with_stdio(false);

You must do this before performing any I/O via the C++ stream objects.
More at: http://gcc.gnu.org/onlinedocs/libstdc++/manual/io_and_c.html
Now, let's have some fun:
[zma@office io]$ cat ../../python/1ton.py
i = 0
while i < 10000000:
    print i
    i = i + 1
[zma@office io]$ python ../../python/1ton.py > /tmp/in.txt
[zma@office io]$ cat scanf.c
#include <stdlib.h>
#include <stdio.h>
int main()
{
int i;
while ( scanf("%d", &i) != EOF);
return 0;
}
[zma@office io]$ gcc scanf.c
[zma@office io]$ time ./a.out < /tmp/in.txt
real 0m 1.645s
user 0m 1.621s
sys 0m 0.015s
[zma@office io]$ cat cin.cc
#include <iostream>
int main()
{
int i;
// std::ios_base::sync_with_stdio(false);
while (std::cin >> i);
return 0;
}
[zma@office io]$ g++ cin.cc
[zma@office io]$ time ./a.out < /tmp/in.txt
real 0m 3.864s
user 0m 3.838s
sys 0m 0.007s
[zma@office io]$ cat cin-no-sync-with-stdio.cc
#include <iostream>
int main()
{
int i;
std::ios_base::sync_with_stdio(false);
while (std::cin >> i);
return 0;
}
[zma@office io]$ g++ cin-no-sync-with-stdio.cc
[zma@office io]$ time ./a.out < /tmp/in.txt
real 0m 0.984s
user 0m 0.970s
sys 0m 0.008s
The results above should be clear enough: the default cin (synced with stdio) is more than twice as slow as scanf (3.864s vs. 1.645s), but with sync_with_stdio(false), cin actually beats scanf (0.984s).
A try with scala:
$ cat ReadInts.scala
object ReadInts extends App {
  val start = System.nanoTime
  var d = 0
  try {
    while (true) {
      d = Console.readInt
    }
  } catch {
    case _: Throwable => 0
  }
  println("Elapsed " + (System.nanoTime - start) / 1000000000.0 + "s")
}
$ sbt compile
...
$ time scala -cp target/scala-2.10/classes/ ReadInts < ./test.txt
Elapsed 1.050056043s
real 0m1.335s
user 0m1.460s
sys 0m0.079s
For comparison:
$ cat scanf.c
#include <stdlib.h>
#include <stdio.h>
int main()
{
int i;
while ( scanf("%d", &i) != EOF);
return 0;
}
$ gcc scanf.c
$ time ./a.out <../../scala/test.txt
real 0m1.083s
user 0m1.057s
sys 0m0.020s
The Scala timing is impressive: it achieves a similar time to the scanf version in C. Of course, starting the JVM/Scala takes an additional 0.2+ seconds.