Note: the following answer (and the question it answers) pertains to the old C++0x version of concepts and has little relation to the version of the feature added to C++20.
First of all, Herb didn't say that concepts themselves made compiling slower. He said that concept-izing the C++ standard library made any code using the C++ standard library compile slower.
The reason for that comes down to several things.
1: Constraining templates takes compile time.
When you declare a class like this:
template<typename T> class Foo {...};
The compiler simply parses Foo and does very little beyond that. Even with two-phase lookup, the compiler doesn't do a whole lot in the compilation of class Foo. It stores it for later, of course, but the initial pass is relatively fast.
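To make "stores it for later" concrete, here is a minimal sketch (the type and member names are invented for illustration): with an unconstrained template, the compiler does not verify the body against T until the code is actually instantiated.

    // Nothing about Foo's body is verified against T here; the compiler
    // just records the template for later.
    template<typename T>
    class Foo {
    public:
        void call_it(T& t) { t.frobnicate(); }   // not checked yet
    };

    struct HasIt   { void frobnicate() {} };
    struct LacksIt {};

    int main() {
        Foo<HasIt>   a;      // fine
        Foo<LacksIt> b;      // also fine: call_it is never instantiated
        HasIt h;
        a.call_it(h);        // instantiating call_it<HasIt> succeeds
        // LacksIt l;
        // b.call_it(l);     // only this line would finally produce an error
    }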
When you constrain the template with a concept:
template<ConceptName C> class Foo {...};
The compiler must do some actual work. It must check up front that every use of the type C conforms to the concept ConceptName. That's extra work that the compiler would otherwise have deferred until instantiation time.
The more concept checking you have, the more compile time you spend to verify that the types match the concepts.
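For contrast, here is a rough sketch of what a constrained declaration buys and costs. C++0x concept syntax was never standardized, so this uses C++20 syntax as an approximation (C++0x concepts went further and also checked the template body against the concept, which C++20 does not). The concept itself is hypothetical.

    #include <concepts>
    #include <cstddef>

    // A made-up concept; every time Foo is used, the compiler must
    // verify that the supplied type satisfies it.
    template<typename T>
    concept ConceptName = std::copyable<T> && requires(T t) {
        { t.size() } -> std::convertible_to<std::size_t>;
    };

    template<ConceptName C>
    class Foo { /* ... */ };

    struct Good { std::size_t size() const { return 0; } };
    struct Bad  {};

    Foo<Good> ok;      // Good is checked against ConceptName right here
    // Foo<Bad> nope;  // rejected immediately, naming the failed requirement

The check performed at Foo<Good> is work the unconstrained version would never do unless Good were actually misused somewhere in Foo's body.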
2: The standard C++ library uses a lot of concepts.
Look at the number of iterator concepts: input, output, forward, bidirectional, random access, contiguous. And the committee was considering breaking them down into many more than that. Many algorithms would have multiple versions for different iterator concepts.
And this doesn't include range concepts (of which there is one for every kind of iterator concept except output), character concepts for std::string, and various other kinds of things. All of these have to be compiled and checked.
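To see why this multiplies, here is a hedged sketch (again in C++20 syntax, with a hypothetical my_distance) of the "multiple versions per iterator concept" pattern: overload resolution has to test the iterator type against these concepts, and everything they subsume, before it can pick an overload.

    #include <iterator>

    // Hypothetical algorithm with one overload per iterator concept.
    template<std::input_iterator I>
    std::iter_difference_t<I> my_distance(I first, I last) {
        std::iter_difference_t<I> n = 0;
        for (; first != last; ++first) ++n;   // linear walk
        return n;
    }

    template<std::random_access_iterator I>
    std::iter_difference_t<I> my_distance(I first, I last) {
        return last - first;                  // constant time
    }

Calling my_distance on a std::vector<int>::iterator means the compiler checks that type against random_access_iterator (and every weaker concept it subsumes) before choosing the second overload; repeat that across every algorithm and container in a header, and the cost adds up.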
What concepts really needed in order to be fast was modules: the ability for the compiler to generate a module file containing a sequence of pre-checked symbols, and then to load that file directly without going through the standard compilation process. Straight from parsing to symbol creation.
Remember: for every .cpp file that #includes a header, the compiler must read that header and compile it. Even though the header is the same thing every time, the compiler still must dutifully read it and process it. If we're talking about a concept-ized std::vector, it has to do all of the concept checking of the template. It still has to do all of the standard symbol lookup you do when compiling. And so forth.
Imagine if the compiler didn't have to do this. Imagine if it could just load a bunch of symbols and definitions directly from the disk. No compiling at all; just bringing in symbols and definitions for other code to use.
It would be like precompiled headers, only better. Precompiled headers are restricted to one per .cpp file, whereas you can use as many modules as you like.
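For what it's worth, the modules that eventually shipped in C++20 (a different design from the C++0x-era proposal this answer is about) illustrate the idea: the module is compiled once into a binary interface, and importers just load the pre-checked symbols instead of re-parsing the source. A minimal sketch (file names and extensions vary by compiler):

    // math.ixx -- compiled once into a binary module interface
    export module math;

    export int square(int x) { return x * x; }

    // main.cpp -- 'import' loads the pre-checked symbols from that
    // interface; the text of math.ixx is not re-parsed here
    import math;

    int main() { return square(3); }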
Sadly, modules were yanked from C++0x pretty early in the process. And without modules, a standard library constrained with concepts will always compile more slowly than the unconstrained version.
Note that Herb misunderstands the purpose of modules (understandably, since most of the early proposals for the feature focused on the things he talks about: cross-platform DLLs and such). Their core, fundamental purpose is to improve compile times, not to make cross-platform DLLs work. Nor is it intended that modules themselves be cross-platform.