Disclaimer: Grok generated document.
std::valarray is a C++ standard library container introduced in C++98 specifically for high-performance numerical computations on arrays of numeric types (like float, double, int, etc.). It was designed with the intention of allowing aggressive optimizations for mathematical operations, especially on entire arrays at once.
- Element-wise mathematical operations with a clean syntax:

```cpp
std::valarray<double> a = {1, 2, 3, 4};
std::valarray<double> b = {5, 6, 7, 8};
auto c = a + b;           // {6, 8, 10, 12}
auto d = a * b;           // {5, 12, 21, 32}
auto e = sin(a) + cos(b); // applies sin/cos element-wise
```
- Built-in support for mathematical functions (applied element-wise): abs(), sqrt(), exp(), log(), sin(), cos(), tan(), acos(), asin(), atan(), sinh(), cosh(), tanh(), pow(), etc.
- Slicing and indirect array access (advanced indexing):

```cpp
std::valarray<double> v(12);
std::slice s(0, 4, 3);             // start=0, length=4, stride=3
v[s] = 99.0;                       // v[0], v[3], v[6], v[9] = 99.0
auto subset = v[std::gslice(...)]; // multi-dimensional-like slicing
```
- Operations with scalars:

```cpp
auto x = a * 2.0 + 5.0; // every element multiplied by 2, then 5 added
```
In the 1990s, people wanted something like MATLAB/NumPy in C++:
- Vectorized operations without explicit loops
- Potential for compiler optimizations (expression templates, loop fusion, SIMD)
- Clean mathematical syntax
Despite good intentions, std::valarray has major drawbacks and is widely considered a failed experiment:
| Issue | Explanation |
|---|---|
| Poor performance in practice | Most compilers do not apply the expected aggressive optimizations. Often slower than hand-written loops or std::vector with SIMD. |
| Expression templates missing | The original design assumed expression templates (like in Blitz++ or Eigen), but the standard version uses temporary objects → many copies. |
| Awkward, non-intuitive API | Slicing (std::slice, std::gslice, std::mask_array, std::indirect_array) is complex and hard to use correctly. |
| No dynamic allocation control | Always copies on operations unless compiler optimizes heavily (which it usually doesn’t). |
| No iterator support pre-C++11 | Made it incompatible with STL algorithms. |
| Very little real-world adoption | Almost no major numerical library (Eigen, Armadillo, Blaze, etc.) uses or recommends it. |
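To make the "temporaries" row concrete, here is a rough sketch (an illustration of the idea, not taken from any particular standard library) of why a naive valarray expression can cost more than a hand-written loop:

```cpp
#include <cstddef>
#include <valarray>

// In a plain (non-expression-template) implementation, each operator+ below
// may allocate and fill an intermediate valarray before the next one runs.
std::valarray<double> sum3(const std::valarray<double>& a,
                           const std::valarray<double>& b,
                           const std::valarray<double>& c) {
    return a + b + c; // possibly: temporary for (a + b), then a second pass for "+ c"
}

// The hand-written loop does the same work in a single pass with no temporaries.
std::valarray<double> sum3_loop(const std::valarray<double>& a,
                                const std::valarray<double>& b,
                                const std::valarray<double>& c) {
    std::valarray<double> r(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        r[i] = a[i] + b[i] + c[i];
    return r;
}
```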
| Library | Advantages over std::valarray |
|---|---|
| Eigen | Extremely fast, expression templates, lazy evaluation, excellent SIMD, MATLAB-like syntax |
| blaze-lib | Modern C++ design, hybrid CPU/GPU, great performance |
| Armadillo | MATLAB-like syntax, integrates with LAPACK/BLAS |
| xtensor | NumPy-like API, multi-dimensional, lazy evaluation |
| std::mdspan (C++23) + std::simd (C++26) | Future standard solution for multi-dimensional arrays and explicit SIMD |
| Simple std::vector + loops + SIMD intrinsics/OpenMP | Often faster and much clearer |
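For the last row, a minimal sketch of the "plain std::vector plus a loop" approach (the OpenMP pragma is optional and assumes the code is built with OpenMP enabled, e.g. -fopenmp; without it the loop still compiles and runs):

```cpp
#include <cstddef>
#include <vector>

// Element-wise add with an ordinary vector; the pragma merely asks the
// compiler to SIMD-vectorize the loop and can be removed without changing results.
std::vector<double> add(const std::vector<double>& a, const std::vector<double>& b) {
    std::vector<double> c(a.size());
#pragma omp simd
    for (std::size_t i = 0; i < a.size(); ++i)
        c[i] = a[i] + b[i];
    return c;
}
```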
Use std::valarray only if:
- You are maintaining very old numerical code that already uses it.
- You are doing educational experiments with the original C++98 numerical array design.
Do NOT use std::valarray for new code if performance or clarity matters.
For any serious numerical work in modern C++, use Eigen, blaze, xtensor, or wait for the emerging standard solutions based on std::mdspan and std::simd.
In short: std::valarray is an interesting historical artifact with a great idea that never quite worked out in practice.
Disclaimer: ChatGPT generated document.
Short version first: std::valarray is a numeric array type from <valarray> designed for element-wise math with strong optimization guarantees, not a general container like std::vector. It comes with slices, masks, and a bunch of math functions that operate on whole arrays at once.
Because you asked for “everything,” I’ll go through:
- What std::valarray is and its design goals
- Basic usage and core operations
- Slicing, masking, and indirect access
- The ecosystem: math functions, helper types, begin/end
- Performance model, aliasing rules, expression templates
- Comparison with std::vector and when to avoid/choose it
- Modern C++ caveats and real-world advice
I’ll assume at least intermediate C++ and that you’re comfortable with templates.
Definition:

```cpp
#include <valarray>

template<class T>
class valarray;   // in namespace std
```

Conceptually:
- A numeric array intended for element-wise arithmetic and math.
- Supports vectorized operations like a + b, a * 2.0, sqrt(a), etc.
- Designed to allow the implementation to apply aggressive optimizations, including:
- No surprising aliasing between certain views
- Potential SIMD/vectorization
- Lazy expression evaluation via expression templates (implementation-dependent) (en.cppreference.com)
In contrast to std::vector:
- valarray is not a general-purpose container (no standard iterators before C++11, no allocator, etc.).
- The standard explicitly gives implementations room to optimize it differently (even to the point of using a replacement type for math functions).
Typical constructors:
```cpp
std::valarray<int> a;                // default, size 0
std::valarray<double> b(10);         // size 10, value-initialized
std::valarray<double> c(3.14, 8);    // size 8, every element initialized to 3.14
int raw[] = {1, 2, 3, 4};
std::valarray<int> d(raw, 4);        // from C array
std::valarray<int> e = {1, 2, 3, 4}; // initializer_list (C++11)
```

Copy/move semantics are straightforward (typical value semantics).

Deduction guides exist from C++17, so std::valarray v{1, 2, 3} deduces an element type of int. (en.cppreference.com)
```cpp
std::valarray<double> v(10);
v[0] = 1.0;
double x = v[3];
```

- operator[] does no bounds checking (like vector::operator[]).
- There is no at() member.
```cpp
std::size_t n = v.size(); // number of elements
v.resize(20);             // discards the old contents; also invalidates begin()/end() iterators
```

resize(count, value) changes the size and then assigns value (by default T()) to every element, so the previous contents are not preserved; it also invalidates any iterators obtained from std::begin/std::end. (en.cppreference.com)
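A minimal sketch of the practical consequence: to grow a valarray while keeping its contents you have to copy explicitly (the helper name grow is mine, not a standard facility):

```cpp
#include <algorithm>
#include <cstddef>
#include <valarray>

// resize() would overwrite everything, so build a larger valarray and copy
// the old elements into it by hand (assumes new_size >= v.size()).
std::valarray<double> grow(const std::valarray<double>& v, std::size_t new_size) {
    std::valarray<double> bigger(new_size); // value-initialized to 0.0
    std::copy(std::begin(v), std::end(v), std::begin(bigger));
    return bigger;
}
```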
```cpp
std::valarray<double> a(10), b(10);

a = 1.0;   // all elements become 1.0 (broadcast scalar)
b = a;     // element-wise copy
b += 2.0;  // add 2.0 to each element
```

Scalar assignment and scalar operations are broadcast over all elements.
valarray supports a large set of operators, both member and non-member, working element-wise:
- Arithmetic: +, -, *, /, %
- Bitwise: &, |, ^, <<, >>
- Logical: &&, ||, unary !
- Relational: ==, !=, <, >, <=, >=
These can work in combinations:
```cpp
std::valarray<double> a = {1.0, 2.0, 3.0};
std::valarray<double> b = {10.0, 20.0, 30.0};

auto c = a + b;     // {11, 22, 33}
auto d = 2.0 * a;   // {2, 4, 6}
auto e = a * a + b; // {11, 24, 39}
```

You also get compound assignments:

```cpp
a += b;
a *= 2.0;
```

Relational and logical operators return std::valarray<bool> (or the implementation's replacement type), which is crucial for masks; see the example below.
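A small illustration of that (variable and function names are mine): a comparison yields a valarray<bool>, which you can reduce directly or reuse as a mask later.

```cpp
#include <valarray>

void compare_demo() {
    std::valarray<double> a = {1.0, 5.0, 3.0};
    std::valarray<double> b = {2.0, 4.0, 3.0};

    std::valarray<bool> gt = (a > b); // {false, true, false}, element-wise
    bool any_greater = gt.max();      // true if at least one element is true
    bool all_greater = gt.min();      // true only if every element is true
    (void)any_greater; (void)all_greater;
}
```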
<valarray> provides overloads of many math functions that operate element-wise on a valarray. (en.cppreference.com)
For std::valarray<T>:
- Elementary: abs, exp, log, log10, sqrt
- Trig: sin, cos, tan, asin, acos, atan
- Hyperbolic: sinh, cosh, tanh
Example:
```cpp
std::valarray<double> a = {0.0, 1.0, 2.0};
std::valarray<double> b = std::sin(a);  // apply sin to each element
std::valarray<double> c = std::sqrt(a); // sqrt element-wise
```

pow and atan2 have overloads:
```cpp
std::valarray<double> base      = {1.0, 2.0, 3.0};
std::valarray<double> exponents = {2.0, 2.0, 2.0};

auto squares = std::pow(base, exponents); // element-wise pow
auto cube    = std::pow(base, 3.0);       // pow against a scalar
```

Key detail: these functions are allowed to return an implementation-defined replacement type with a valarray-compatible API, to enable expression templates and optimizations. (en.cppreference.com)
valarray has built-in aggregations:
```cpp
std::valarray<double> a = {1.0, 2.0, 3.0, 4.0};

double s  = a.sum();  // 10
double p  = a.prod(); // 24
double mn = a.min();  // 1
double mx = a.max();  // 4

// shift & cshift
auto shifted = a.shift(1);  // {2, 3, 4, 0}  (vacated slots become T(), i.e. 0)
auto rotated = a.cshift(1); // circular shift: {2, 3, 4, 1}

// apply an arbitrary unary function
auto squares = a.apply([](double x) { return x * x; }); // {1, 4, 9, 16}
```

apply returns a (possibly replacement) valarray-like type. (mooshak.dcc.fc.up.pt)
This is where valarray gets more interesting, and also more exotic.
Helper types (all in <valarray>): (en.cppreference.com)
- std::slice / std::slice_array<T>
- std::gslice / std::gslice_array<T>
- std::mask_array<T>
- std::indirect_array<T>
These are “views” that let you select and modify subsets of a valarray.
std::slice describes a start, size, stride:
```cpp
std::slice s(start, size, stride);
```

Given a valarray v, v[s] returns a slice_array<T> view on those elements.
Example: pick every second element, starting at 0:
```cpp
std::valarray<int> v = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
std::slice evens(0, v.size() / 2 + v.size() % 2, 2); // start=0, #elements=5, stride=2

std::valarray<int> ev = v[evens]; // {0, 2, 4, 6, 8}
v[evens] = 42;                    // write-back: v becomes {42,1,42,3,42,5,42,7,42,9}
```

slice_array<T> objects are proxy types:
- They don’t own memory.
- Assigning to them writes back to the underlying valarray.
- They're mainly used on the LHS of assignments.
std::gslice (general slice) extends this to N-dimensional indexing on a flat valarray:
- Constructed with:
  - a start index
  - an array of lengths (per dimension)
  - an array of strides (per dimension)
```cpp
std::valarray<int> v(100); // think of it as a 10x10 matrix

std::size_t start = 0;
std::valarray<std::size_t> lengths = {10, 10}; // 2D: 10x10
std::valarray<std::size_t> strides = {10, 1};  // row-major
std::gslice g(start, lengths, strides);

// Get the diagonal:
std::valarray<std::size_t> diag_lengths = {10};
std::valarray<std::size_t> diag_strides = {11}; // stride between diagonal elements
std::gslice diag(0, diag_lengths, diag_strides);

auto diag_view = v[diag]; // gslice_array<int>
diag_view = 1;            // sets all diagonal elements to 1
```

gslice_array<T> is again a proxy type like slice_array<T>.
You can filter a valarray by a std::valarray<bool> mask:
```cpp
std::valarray<double> data = {1, -2, 3, -4, 5};
std::valarray<bool> mask = (data > 0);        // {true, false, true, false, true}

std::valarray<double> positives = data[mask]; // {1, 3, 5}

// mask_array's compound assignments take a valarray, not a scalar,
// so scale the selected elements and write them back:
data[mask] = std::valarray<double>(positives * 10.0); // data becomes {10, -2, 30, -4, 50}
```

- data[mask] returns a mask_array<double> view.
- Assignment applies only to the elements where the mask is true.
You can select arbitrary elements by an index array:
```cpp
std::valarray<int> data = {10, 20, 30, 40, 50};
std::valarray<std::size_t> idx = {4, 0, 3};

std::valarray<int> picked = data[idx]; // {50, 10, 40}

// indirect_array's compound assignments also take a valarray, not a scalar:
std::valarray<int> ones(1, idx.size()); // {1, 1, 1}
data[idx] += ones;                      // data becomes {11, 20, 30, 41, 51}
```

- data[idx] is an indirect_array<int> view.
Originally, valarray did not expose iterators; this was intentional, to keep optimizations free of aliasing concerns.
Starting C++11, you have std::begin / std::end overloads for valarray: (en.cppreference.com)
```cpp
std::valarray<int> v = {1, 2, 3};

for (int& x : v) {
    x *= 2;
}

// or algorithms
std::for_each(std::begin(v), std::end(v),
              [](int& x) { x += 1; });
```

Important details:

- The iterator type is implementation-defined (not necessarily T*).
- begin(v) / end(v) iterators are invalidated by v.resize().
- They are intended primarily so that range-for loops and algorithms can work.
Also: std::begin / std::end are not guaranteed to accept replacement types returned by expression templates (v1 + v2). So you should usually do:
```cpp
auto tmp = std::valarray<int>(v1 + v2);
std::begin(tmp); // OK
```

Not:

```cpp
std::begin(v1 + v2); // not portable
```

The standard deliberately gives implementations a lot of freedom:
- The math functions (sin, cos, exp, log, etc.) and apply may return a type that is not literally std::valarray<T> but behaves like one. (en.cppreference.com)
- This allows implementations to:
  - Use expression templates to delay evaluation.
  - Fuse multiple operations into a single pass (e.g., compute a + b + c in one loop).
  - Optimize memory usage.
Constraints on such replacement types:
- Must provide all const valarray members.
- Must allow construction of valarray, slice_array, gslice_array, mask_array, and indirect_array from it.
- Any f(const valarray<T>&) (except begin/end) must also accept replacement types.
- Similarly, f(const valarray<T>&, const valarray<T>&) must accept combinations involving replacement types.
- Can't add more than two extra levels of template nesting compared to its arguments.
Practically: for you as a user, this means:
- You can usually treat them like valarray.
- But you sometimes need an explicit std::valarray<T> temporary when interfacing with things like iterators or APIs expecting a genuine valarray (see the sketch below).
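A hedged sketch of that pitfall and the usual workaround (what auto deduces, and how the lazy object behaves, depends on the implementation):

```cpp
#include <valarray>

void replacement_type_demo() {
    std::valarray<double> a = {1.0, 2.0, 3.0};
    std::valarray<double> b = {4.0, 5.0, 6.0};

    // May deduce an implementation-specific expression type that still refers
    // to a and b instead of owning its own elements.
    auto lazy = a + b;

    // Forcing a genuine std::valarray<double> is the portable choice.
    std::valarray<double> eager = a + b;
    std::valarray<double> also_eager(a + b); // explicit temporary, as above

    (void)lazy; (void)eager; (void)also_eager;
}
```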
std::valarray and its helper types are specified to be free of certain forms of aliasing, enabling optimizations similar to restrict in C. (en.cppreference.com)
In broad strokes:
- The implementation may assume a[i] and b[j] don't unexpectedly alias unless they come from overlapping views of the same underlying valarray in ways allowed by the standard.
- The slicing/masking abstractions are designed so that the implementation knows the structure of accesses and can vectorize easily.
This was one of the original motivations: provide something the compiler can reliably optimize for numeric code.
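A rough illustration of the contrast (a sketch of the idea, not a claim about any specific compiler): with raw pointers the compiler must be conservative about overlap, while distinct valarray operands carry a no-aliasing assumption much like C's restrict.

```cpp
#include <cstddef>
#include <valarray>

// The compiler cannot assume dst and src do not overlap, which can
// inhibit vectorization of this loop.
void scale_ptr(double* dst, const double* src, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = 2.0 * src[i];
}

// With a whole-array valarray operation, the library/compiler may assume the
// operands are free of problematic aliasing and vectorize more aggressively.
std::valarray<double> scale_va(const std::valarray<double>& src) {
    return 2.0 * src;
}
```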
However:
- In modern C++, compilers often optimize std::vector very well too.
- The actual performance advantage of valarray is implementation-dependent and in practice often negligible, or worse than vector, depending on the library.
- Element-wise operations built in: v1 + v2, sqrt(v), sin(v), etc., without writing loops.
- Rich slicing/masking API out of the box (slices, masks, indirect access).
- Optimization license: the standard allows the implementation to be clever.
- Less idiomatic in modern C++:
  - Most numeric & scientific C++ ecosystems either:
    - use std::vector + algorithms, or
    - use dedicated libraries (Eigen, Blaze, xtensor, Armadillo, etc.).
- Weird semantics:
  - resize discards the old elements rather than preserving them.
  - Expression-template replacement types can surprise you (the type is not actually std::valarray).
- Weaker container interface:
  - No push_back, no iterator member types, no reserve, no allocator parameter, etc.
  - Not part of the generic container/algorithms culture.
- Limited ecosystem:
  - Most third-party libraries don't take std::valarray parameters.
Because of this, many experts recommend:
- Use std::vector as your standard dynamic array (see the sketch below).
- Use specialized math/numeric libraries if you want vectorized operations and slices.
- Use valarray only in niche situations or legacy code.
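As referenced above, a minimal sketch of that baseline: std::vector plus a standard algorithm doing the element-wise work (the function name is mine):

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// Element-wise product of two equally sized vectors, valarray-style but with
// plain std::vector and std::transform.
std::vector<double> elementwise_product(const std::vector<double>& a,
                                        const std::vector<double>& b) {
    std::vector<double> c(a.size());
    std::transform(a.begin(), a.end(), b.begin(), c.begin(), std::multiplies<>{});
    return c;
}
```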
Just to show valarray's intended style:
```cpp
#include <valarray>
#include <cmath>

using std::valarray;

int main() {
    const std::size_t N = 1024;

    valarray<double> t(N);
    valarray<double> signal(N);

    const double dt   = 1.0 / 44100.0;
    const double freq = 440.0;
    const double pi   = 3.14159265358979323846; // M_PI is not standard C++

    // Fill time array
    for (std::size_t i = 0; i < N; ++i) {
        t[i] = i * dt;
    }

    // Sine wave
    signal = std::sin(2.0 * pi * freq * t);

    // Apply a simple gain, then offset, then square
    auto processed = std::pow(2.0 * signal + 0.1, 2.0);

    // Compute some stats
    double mean = processed.sum() / processed.size();
    double maxv = processed.max();
    double minv = processed.min();

    (void)mean; (void)maxv; (void)minv;
}
```

This is exactly the use case valarray was designed for: concise numeric transformations.
Given your background and interest in high-quality C++:
Reasonable niches:

- Small standalone numeric tools where you:
  - want quick element-wise math with minimal code, and
  - don't need interoperability with other libraries.
- Educational code/examples illustrating vectorized operations without bringing in Eigen/Blaze.

Situations to avoid it in:

- In public APIs: std::vector<T> or std::span<T> are better choices for interoperability.
- In performance-critical numeric code where you're already using a heavy-duty numeric library.
- Whenever you need fine-grained control over allocation/growth behavior.
What to do instead:

- Use std::vector<T> + ranges/algorithms (std::transform, etc.); see the sketch below.
- Wrap domain-specific operations in small functions or types.
- For slicing/masking: consider libraries (Eigen, Blaze, xtensor) that support this cleanly.
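As mentioned in the first bullet, a C++20 ranges sketch (assuming a C++20 standard library; the function name is mine): lazy view composition gives a flavour of what valarray's expression templates were aiming at, but with plain vectors.

```cpp
#include <cmath>
#include <ranges>
#include <vector>

// Apply a gain and then a square root, lazily composed; nothing is computed
// until the result is copied into the output vector.
std::vector<double> process(const std::vector<double>& samples) {
    auto view = samples
              | std::views::transform([](double x) { return 2.0 * x; })
              | std::views::transform([](double x) { return std::sqrt(std::abs(x)); });
    return std::vector<double>(view.begin(), view.end());
}
```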
Key points to keep in your mental model:
- std::valarray<T> = numeric array designed for element-wise operations and optimizations.
- Comes with:
  - Element-wise operators and math functions (sin, cos, sqrt, etc.).
  - Aggregations (sum, min, max, prod).
  - Slices (slice, gslice), masks (mask_array), and indirect views (indirect_array).
- Has unusual optimization latitude: math functions & apply may return valarray-like replacement types to enable expression templates.
- Not a general-purpose container; less idiomatic in modern C++ than std::vector.
- Often replaced in practice by std::vector + numeric libraries.
If you’d like next, I can:
- Show side-by-side valarray vs. vector + algorithms versions of some numeric tasks, or
- Go deep into how you'd implement a tiny valarray-like expression-template engine yourself (which dovetails nicely with your C++20/expression-template interests).
Is std::valarray making a comeback in C++26? Are there new features added to it and how is it relevant?
Short answer: no, std::valarray is not making some big heroic comeback in C++26. It’s getting a bit of cleanup and will likely remain what it already is: a quirky numeric container that’s kept alive and made consistent, but not turned into a modern flagship.
I’ll split it into:
- What's actually changing around valarray for C++26
- Other proposals that merely touch valarray
- Whether this signals a "comeback"
- How relevant valarray is in modern C++ compared to the new toys
The only substantial C++26 work that directly targets std::valarray is Arthur O’Dwyer’s paper:
P3016 – "Resolve inconsistencies in begin/end for valarray and braced initializer lists" (open-std.org)
Roughly speaking, it does for valarray and initializer lists what you’d expect from a modern container / range:
- Cleans up how the free functions std::begin, std::end, std::data, and std::empty behave when applied to std::valarray.
- Resolves a bunch of Library Working Group issues around those functions and around braced initializer lists. (open-std.org)
- Makes the wording more consistent, so generic code using ranges and these helpers behaves predictably with valarray.
The design in P3016 has been approved for C++26 by the Library Evolution Working Group (LEWG) and has been moving through Library Working Group (LWG) as part of the C++26 pipeline. (open-std.org)
What this means in practice:
- Range-based for and generic algorithms already work with valarray today via the std::begin/std::end overloads, but P3016 tightens and simplifies the rules.
- It helps avoid weird corner cases with expression-template replacement types, temporaries, and initializer lists (like some of the classic "invalid range expression" surprises; see the sketch below). (Stack Overflow)
- It does not add new algorithms, slicing features, math functions, or ranges integration; it's maintenance/consistency work.
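A hedged sketch of the kind of corner case involved (whether the commented-out line compiles varies by implementation, which is exactly the inconsistency being tidied up):

```cpp
#include <valarray>

void range_for_demo() {
    std::valarray<int> v1 = {1, 2, 3};
    std::valarray<int> v2 = {4, 5, 6};

    // for (int x : v1 + v2) {}      // not portable: v1 + v2 may be a replacement
    //                               // type that std::begin/std::end need not accept

    std::valarray<int> sum = v1 + v2; // materialize first
    for (int x : sum) { (void)x; }    // fine: range-for over a real valarray
}
```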
So: yes, valarray is being touched for C++26, but in a housekeeping sense.
There are a couple of places where valarray gets mentioned in other C++26-era papers:
Walter Brown’s proposal P2971 – “Implication for C++” introduces a new logical operator => for the language. In its library impact section, it notes that std::valarray should be updated to accommodate the new operator, similar to how it already supports && and || element-wise. (wg21.link)
If => is adopted into the language and that wording is kept, you’d see:
- A new element-wise operator=> for std::valarray<bool> (and maybe more, depending on wording).
- Symmetry with logical_and/logical_or and other logical facilities in <functional>.
Again, that’s tiny—it’s just one more operator in the list, not a redesign or expansion of valarray.
Modern proposals like reflection (P2996 “Reflection for C++26”) and other library papers sometimes list <valarray> alongside other headers simply because it is part of the standard library and must be accounted for in wording and examples. (open-std.org)
That’s about correctness and coverage, not new features.
Realistically: no.
If you look at:
- The official C++26 trip reports and blog posts (Herb Sutter's, committee trip reports, etc.): the big headline items are contracts, reflection, linear algebra (BLAS-style algorithms), std::simd, ranges work, library hardening, etc.; valarray is never a star of the show. (LinkedIn)
- The WG21 paper lists for 2024–2025: valarray only appears explicitly in P3016, plus that small note in P2971. There's no "valarray 2.0" or "ranges-aware valarray" or anything like that. (open-std.org)
Community sentiment also hasn’t changed much:
- People still describe valarray as "not really maintained and updated with more features" and point out that most numeric work is done with std::vector plus libraries like Eigen/Blaze/xtensor instead. (Computational Science Stack Exchange)
- It remains niche despite occasional YouTube episodes or articles rediscovering it.
So, C++26 is tidying it up, not rejuvenating it.
I’d frame its relevance in three layers:
- std::valarray is fully specified in the latest standard drafts and reference material; there is no deprecation or removal proposal on the table. (en.cppreference.com)
- It continues to offer:
  - element-wise operators and math functions,
  - slices, masks, and indirect access,
  - some aliasing guarantees that can, in principle, enable optimizations. (en.cppreference.com)
So if you use it, you’re not using some zombie facility that’s about to be yanked.
But compare that to where C++26 is actually investing:
- Linear algebra: the BLAS-style algorithms proposal for C++26 gives you standardized linear algebra operations that are meant to be efficient and interoperable with HPC backends. (LinkedIn)
- std::simd: vectorization support already merged into the draft gives you explicit, portable SIMD programming for numeric workloads. (Sutter's Mill)
- std::mdspan, ranges, etc. offer modern ways of handling multi-dimensional data and composing algorithms. (en.cppreference.com)
In that landscape, valarray is:
- An older, limited attempt at “vectorized numeric container” with a very specific design philosophy,
- Lacking integration with ranges, mdspan, the linear algebra algorithms, etc.,
- And generally overshadowed by vector + modern libraries and tools.
There are still some realistic niches:
- Quick-and-dirty numeric experiments where you want element-wise operations and slices without pulling in a heavy external library.
- Legacy or constrained environments where you don't want dependency bloat but have compilers that optimize valarray reasonably well (Intel's docs still talk about their optimized valarray implementation). (Intel)
But for modern, serious numerical code in the C++26 era, the direction is clearly:
- std::vector / std::span / std::mdspan for storage and views (see the sketch below),
- std::simd and standardized linear algebra / numerics for performance,
- Specialized libraries for anything beyond that.
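For comparison with the earlier gslice example, a hedged C++23 sketch of that direction (assumes a standard library that ships <mdspan>; the function name is mine): a flat std::vector with an mdspan view replaces the gslice bookkeeping.

```cpp
#include <cstddef>
#include <mdspan>   // C++23
#include <vector>

// View a flat 100-element buffer as a 10x10 row-major matrix and set its diagonal.
void set_diagonal() {
    std::vector<double> storage(100, 0.0);
    auto m = std::mdspan(storage.data(), 10, 10); // 2D extents deduced via CTAD
    for (std::size_t i = 0; i < m.extent(0); ++i)
        m[i, i] = 1.0;                            // C++23 multidimensional subscript
}
```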
- Is std::valarray making a comeback in C++26? No. It's being cleaned up (P3016) and may gain a tiny tweak for a new logical operator, but there's no large new investment.
- Are there new features? Only minor ones:
  - Clearer, more consistent behavior for std::begin/std::end/std::data/std::empty when used with valarray (P3016). (open-std.org)
  - Potential support for the proposed implication operator => in element-wise operations, if that operator is adopted (P2971). (wg21.link)
- How is it relevant? It remains a supported, standard numeric container with element-wise math and slicing, but it's mostly in maintenance mode. In C++26, the real action for numerics is around linear algebra, std::simd, ranges, and mdspan; valarray is unlikely to be your primary tool unless you deliberately choose it for its specific style.
Intel's C++ standard library implementation (part of ICC and Parallel Studio) shipped a specially optimized std::valarray.
std::valarray was designed so compilers could “peek inside” operations like:
```cpp
valarray<double> a, b, c;
c = a + b * 2.0;
```

and internally generate fused, vectorized loops without temporaries.
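For intuition, the fused single pass would look roughly like this hand-written equivalent (a sketch of the idea, not Intel's actual code generation):

```cpp
#include <cstddef>
#include <valarray>

// One loop, no intermediate valarray for the b * 2.0 subexpression.
void fused(std::valarray<double>& c,
           const std::valarray<double>& a,
           const std::valarray<double>& b) {
    for (std::size_t i = 0; i < c.size(); ++i)
        c[i] = a[i] + b[i] * 2.0;
}
```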
Intel’s implementation actually did this through:
- Loop fusion
- Auto-vectorization
- Removal of intermediate objects
- Alignment-aware loads/stores
This was not true of GCC or Clang’s libstdc++, where valarray was basically just a wrapper on a dynamic array.
So with Intel's library, valarray could outperform:
std::vector+ manual loops- naive numeric code
- some BLAS-level operations
…in simple element-wise operations.
Intel documentation used to include statements like:
"Intel's implementation of std::valarray is optimized for vectorization and can outperform conventional loop-based code."
They would often show that:
```cpp
c = a + b * d;
```

could auto-vectorize better than:

```cpp
for (size_t i = 0; i < n; ++i)
    c[i] = a[i] + b[i] * d[i];
```

This sometimes gave significant speedups.
Intel tools supported:
- #pragma ivdep
- #pragma simd
- vectorization reports
These could confirm that valarray expressions generated single, vectorized loops with no aliasing concerns.
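For reference, a hedged sketch of how such a hint was typically applied to a plain loop (#pragma ivdep is the classic Intel spelling, and GCC's equivalent is #pragma GCC ivdep; it is a programmer promise, not something the compiler verifies):

```cpp
#include <cstddef>

// The pragma asserts there are no loop-carried dependencies, inviting the
// compiler to emit a single vectorized loop.
void axpy(double* c, const double* a, const double* b, double k, std::size_t n) {
#pragma ivdep
    for (std::size_t i = 0; i < n; ++i)
        c[i] = a[i] + k * b[i];
}
```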
This worked hand-in-hand with the whole philosophy of valarray:
- no aliasing
- predictable layout
- safe for vectorization
Intel leaned directly into that design.
valarray was created in the 90s with an explicit goal:
Enable compilers to auto-vectorize numeric code efficiently without aliasing penalties.
Intel was the only major vendor who fully embraced the idea and optimized for it.
Because of this, many people who worked in HPC, finance, or scientific computing with Intel Parallel Studio still associate:
valarray = "fast, vectorized, Intel-optimized numeric container"
Mostly no, and here’s why:
The classic Intel compiler (ICC) has been replaced by the oneAPI toolchain (DPC++, ICX, etc.).
The specialized valarray optimizations are not emphasized anymore.
C++26/23 offers:
- std::simd (actually portable SIMD!)
- std::mdspan
- standard BLAS-like linear algebra
- strong optimizers in Clang/GCC/MSVC
std::simd is basically what valarray wanted to be — but realized correctly.
So the “Intel magic” wasn’t portable.
Because Intel compilers historically gave std::valarray special treatment, making it:
- faster than std::vector in some numeric loops
- implicitly vectorized
- optimized using expression fusion
- appealing for simple numerical kernels
Most of those advantages no longer matter in the modern ecosystem:
- Intel's toolchain has moved on
- C++ got std::simd
- Numerics are shifting toward mdspan, ranges, and standard linear algebra
But old HPC folks still remember that valarray was the thing Intel pushed for auto-vectorized math, so the association persists.

https://stackoverflow.com/questions/8965562/what-is-the-difference-between-stdvalarray-and-stdarray
https://stackoverflow.com/questions/30018754/stdvalarray-and-parallelization