Definitely, although with std::array in C++11, practically only for static data. C-style arrays have three important advantages over std::vector:

They don't require dynamic allocation. For this reason, C-style arrays are to be preferred where you're likely to have a lot of very small arrays. Say something like an n-dimensional point:
template <typename T, int dims>
class Point
{
    T myData[dims];
    // ...
};
Typically, one might imagine that dims will be very small (2 or 3), T a built-in type (double), and that you might end up with a std::vector<Point<double, 3>> with millions of elements. You definitely don't want millions of dynamic allocations of 3 doubles.
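To make the cost concrete, here is a minimal sketch building on the Point template above (PointV and the variable names are illustrative, not from the original): storing the coordinates inline means one contiguous allocation for the whole container, while a vector-backed point would pay one extra heap allocation per element.

#include <vector>

int main()
{
    // Inline storage: one contiguous block holds all the coordinates.
    std::vector<Point<double, 3>> cloudA(1000000);

    // Hypothetical counter-example: each point owns a heap-allocated
    // buffer, so this costs a million extra allocations plus pointer
    // chasing on every access.
    struct PointV { std::vector<double> coords = std::vector<double>(3); };
    std::vector<PointV> cloudB(1000000);
}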
They support static initialization. This is only an issue for static data, where something like:
struct Data { int i; char const* s; };
Data const ourData[] =
{
{ 1, "one" },
{ 2, "two" },
// ...
};
This is often preferable to using a vector (and std::string), since it avoids all order-of-initialization issues; the data is pre-loaded, before any actual code can be executed.
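For contrast, a rough sketch of the alternative that runs into those issues (Data2 and ourData2 are illustrative names): a vector of std::string-holding structs has to be built at run time, during dynamic initialization, so static objects in other translation units that read it before main() are exposed to the static initialization order problem.

#include <string>
#include <vector>

struct Data2 { int i; std::string s; };

// Dynamically initialized: the vector and its strings are constructed
// by code that runs at some unspecified point before main(). Another
// translation unit's static initializer might observe it still empty.
std::vector<Data2> const ourData2 = {
    { 1, "one" },
    { 2, "two" },
};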
Finally, related to the above, the compiler can calculate the actual size of the array from the initializers. You don't have to count them.
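A brief sketch of how that deduced count is then recovered in code (arraySize is a helper written here for illustration, not a standard function):

#include <cstddef>

// N is deduced from the array's type, so the count can never get out
// of sync with the initializer list of ourData above.
template <typename T, std::size_t N>
constexpr std::size_t arraySize(T const (&)[N]) { return N; }

std::size_t const ourDataCount = arraySize(ourData);
// Traditional equivalent: sizeof(ourData) / sizeof(ourData[0])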
If you have access to C++11, std::array solves the first two issues, and should definitely be used in preference to C-style arrays in the first case. It doesn't address the third, however, and having the compiler dimension the array according to the number of initializers is still a valid reason to prefer C-style arrays.
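A sketch of what that trade-off looks like (the names with an A suffix are illustrative): std::array gives the Point case the same inline storage with no dynamic allocation, but for the static table you have to write and maintain the element count yourself.

#include <array>

template <typename T, int dims>
class PointA
{
    std::array<T, dims> myData;   // still no dynamic allocation
    // ...
};

// The count (2 here) must be spelled out and kept in sync by hand;
// the C-style array above deduces it from the initializers.
std::array<Data, 2> const ourDataA = {{
    { 1, "one" },
    { 2, "two" },
}};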