Sequential containers in C++ store elements in a linear order, where each element is positioned relative to the one before or after it. They’re designed for efficient iteration, fast access, and are typically used when element order matters, such as in time-based data.
The most commonly used sequential containers in the STL are:
- std::vector: dynamic array with contiguous memory
- std::deque: double-ended queue with fast insertions/removals at both ends
- std::array: fixed-size array on the stack
- std::list / std::forward_list: linked lists
In quantitative finance, sequential containers are often at the heart of systems that process tick data, historical time series, price paths, and rolling windows. When processing millions of data points per day, the choice of container directly impacts latency, memory usage, and cache performance.
1. std::vector: The Workhorse
std::vector is a dynamic array container in C++. It's one of the most performance-critical tools in a quant developer's toolbox, especially for time-series data, simulation paths, and numerical computations.
Stack vs Heap: The Memory Model
In C++, memory is managed in two primary regions:
Memory Type | Description | Characteristics |
---|---|---|
Stack | Automatically managed, fast, small | Function-local variables, fixed size |
Heap | Manually or dynamically managed | Used for dynamic memory like new, malloc, or STL containers |
A std::vector behaves as follows:
- The vector object itself (pointer, size, capacity) lives on the stack.
- The elements it stores live in a dynamically allocated heap buffer.
Consequences:
- You can create a std::vector of one million elements without stack overflow.
- But heap allocation is slower than stack allocation and can cause fragmentation if misused.
- Copying a vector (e.g. passing it by value) copies the entire heap buffer, not just the stack-held metadata; pass by reference to avoid this.
Contiguous Memory Layout
A std::vector stores all its elements in a single, contiguous block of memory on the heap, just like a C-style array.
Why this matters:
- CPU cache efficiency: memory is prefetched in blocks, so sequential access is blazing fast.
- Enables pointer arithmetic, SIMD (Single Instruction Multiple Data), and direct interfacing with C APIs.
For quants: when you're processing millions of floats/doubles (e.g., price histories, simulation paths), contiguous memory layout can be a 10× performance multiplier versus pointer-chasing containers like std::list.
Time Complexity
Operation | Time Complexity | Notes |
---|---|---|
v[i] access | O(1) | True random access due to array layout |
push_back (append) | Amortized O(1) | Reallocates occasionally |
Insertion (front/mid) | O(n) | Requires shifting elements |
Resize | O(n) | Copy and reallocate |
When capacity is exhausted, std::vector reallocates, typically growing its capacity by a constant factor (often 1.5× or 2×) and copying all elements, which invalidates every pointer, reference, and iterator into the old buffer.
Example: Using std::vector for a Rolling Average on a Price Time Series
#include <iostream>
#include <vector>
#include <numeric> // for std::accumulate
int main() {
// Simulated price time series (e.g. 1 price per second)
std::vector<double> prices;
// Simulate appending tick data
for (int i = 0; i < 10'000'000; ++i) {
prices.push_back(100.0 + 0.01 * (i % 100)); // Fake tick data
}
// Compute rolling average over the last 5 prices
size_t window_size = 5;
std::vector<double> rolling_avg;
for (size_t i = window_size; i <= prices.size(); ++i) { // window is [i - window_size, i)
double sum = std::accumulate(prices.begin() + i - window_size, prices.begin() + i, 0.0);
rolling_avg.push_back(sum / window_size);
}
// Print the last rolling average
std::cout << "Last rolling avg: " << rolling_avg.back() << std::endl;
return 0;
}
Key Points:
- std::vector<double> stores prices in contiguous heap memory, enabling fast iteration.
- push_back appends elements efficiently until capacity needs to grow.
- Access via prices[i] or iterators is cache-friendly.
- This method is fast but not optimal for rolling computations; a later section shows how std::deque or a circular buffer improves performance here.
2. std::deque: Double-Ended Queue
std::deque stands for "double-ended queue": a general-purpose container that supports efficient insertion and deletion at both ends. Unlike std::vector, which shines in random access, std::deque is optimized for dynamic FIFO (first-in, first-out) or sliding-window patterns.
Stack vs Heap: Where Does Data Live?
- Like std::vector, the std::deque object (its metadata) lives on the stack.
- The underlying elements are allocated on the heap, but not contiguously.

Consequences:
- You can append or prepend large amounts of data without stack overflow.
- Compared to std::vector, memory access is less cache-efficient due to fragmentation.
Contiguous Memory? No.
Unlike std::vector, which stores all elements in one continuous memory block, std::deque allocates multiple fixed-size blocks (typically 4–8 KB) and maintains a map (an array of pointers) to those blocks.
Why this matters:
- Appending to front/back is fast, since it just adds blocks.
- But access patterns may be less cache-friendly, especially in tight loops.
Time Complexity
Operation | Time Complexity | Notes |
---|---|---|
Random access d[i] | O(1) (but slower than vector ) | Not contiguous, so slightly less efficient |
push_back , push_front | O(1) amortized | Efficient at both ends |
Insert in middle | O(n) | Requires shifting/moving like vector |
Resize | O(n) | Copying + new allocations |
Internally, deque is more complex: it uses a segmented memory model (think mini-arrays linked together via a pointer table).
What's Under the Hood?
A simplified mental model:

T** block_map; // array of pointers to blocks
T* blocks[N];  // actual blocks on the heap

- Each block holds a fixed number of elements.
- deque manages which block and offset your index points to.
Strengths
- Fast insertion/removal at both ends (O(1)).
- No expensive shifting like vector.
- References to elements remain valid after growth at either end (unlike vector), although iterators are invalidated by any insertion.
Trade-offs
- Not contiguous, so worse cache locality than vector.
- Random access is constant time, but slightly slower than vector[i].
- Slightly higher memory overhead due to internal structure (block map plus block padding).
Quant Finance Use Cases
- Rolling windows on tick or quote streams (e.g. sliding 1-minute buffer).
- Time-based buffers for short-term volatility, VWAP, or moving averages.
- Event-driven systems where data is both consumed and appended continuously.
Example:
std::deque<double> price_buffer;
// push_front for new tick, pop_back to discard old
Typical Interview Questions
- "How would you implement a rolling average over a tick stream?"
  - A good answer involves std::deque for its O(1) append/remove at both ends.
- "Why not use std::vector if you're appending at the front?"
  - Tests understanding of std::vector's O(n) front-insertion cost vs deque's O(1).
- "Explain the memory layout of deque vs vector, and how it affects performance."
  - Looks for awareness of cache implications.
Code Example: Rolling Average with std::deque

#include <iostream>
#include <deque>
#include <numeric>
int main() {
std::deque<double> price_window;
size_t window_size = 5;
for (int i = 0; i < 10; ++i) {
double new_price = 100.0 + i;
price_window.push_back(new_price);
// Keep deque size fixed
if (price_window.size() > window_size) {
price_window.pop_front(); // Efficient O(1)
}
if (price_window.size() == window_size) {
double avg = std::accumulate(price_window.begin(), price_window.end(), 0.0) / window_size;
std::cout << "Rolling average: " << avg << std::endl;
}
}
return 0;
}
Summary: deque vs vector
Feature | std::vector | std::deque |
---|---|---|
Memory Layout | Contiguous heap | Segmented blocks on heap |
Random Access | O(1), fastest | O(1), slightly slower |
Front Insert/Remove | O(n) | O(1) |
Back Insert/Remove | O(1) amortized | O(1) |
Cache Efficiency | High | Medium |
Quant Use Case | Time series, simulations | Rolling buffers, queues |
3. std::array: Fixed-Size Array
std::array is a fixed-size container introduced in C++11 whose storage lives inline in the object itself, so a local std::array sits entirely on the stack. It wraps a C-style array with a safer, STL-compatible interface while retaining zero overhead. In quant finance, it's ideal when you know the size at compile time and want maximum performance and predictability.
Stack vs Heap: Where Does Data Live?
- std::array stores all of its data inline; a local instance lives on the stack with no dynamic memory allocation.
- The container itself is the data, unlike std::vector or std::deque, which hold pointers to heap-allocated buffers.
Consequences:
- Blazing fast allocation/deallocation: no malloc, no heap overhead.
- Extremely cache-friendly: everything is packed in a single memory block.
- But: limited size (you can't safely store millions of elements on the stack).
Contiguous Memory? Yes.
std::array maintains a true contiguous layout, just like a C array. This enables:
- Fast random access via arr[i]
- Compatibility with C APIs
- SIMD/vectorization-friendly memory layout
Time Complexity
Operation | Time Complexity | Notes |
---|---|---|
Access a[i] | O(1) | Compile-time indexed if constexpr |
Size | O(1) | Always constant |
Insertion | N/A | Fixed size; no resizing |
What's Under the Hood?
Conceptually:

template<typename T, std::size_t N>
struct array {
    T elems[N]; // inline data; lives wherever the object does
};

- No dynamic allocation.
- No capacity concept: just a fixed-size C array with better syntax and STL methods.
Strengths
- Zero runtime overhead: allocation is just moving the stack pointer.
- Extremely fast access and iteration.
- Safer than raw arrays: bounds-checked access with .at() and STL integration (.begin(), .end(), etc.).
Trade-offs
- Fixed size: can't grow or shrink at runtime.
- Risk of stack overflow if the size is too large.
- Pass-by-value is expensive (copies all elements); pass by reference instead.
Quant Finance Use Cases
- Grids for finite-difference/PDE solvers: std::array<double, N>
- Precomputed factors (e.g., Greeks at fixed deltas)
- Fixed-size historical feature vectors for ML signals
- Small matrices or stat buffers in embedded systems or hot loops
// Greeks for delta hedging
std::array<double, 5> greeks = { 0.45, 0.01, -0.02, 0.005, 0.0001 };
Typical Interview Questions
- "What's the difference between std::array, std::vector, and C-style arrays?"
  - Tests understanding of memory layout, allocation, and safety.
- "When would you prefer std::array over std::vector?"
  - Looking for "when size is known at compile time and performance matters."
- "What's the risk of using large std::arrays?"
  - Looking for awareness of stack overflow and memory constraints.
Code Example: Option Grid for PDE Solver

#include <array>
#include <iostream>
int main() {
constexpr std::size_t N = 5; // e.g. 5 grid points in a discretized asset space
std::array<double, N> option_values = {100.0, 101.5, 102.8, 104.2, 105.0};
// Simple finite-difference operation: delta = (V[i+1] - V[i]) / dx
double dx = 1.0;
for (std::size_t i = 0; i < N - 1; ++i) {
double delta = (option_values[i + 1] - option_values[i]) / dx;
std::cout << "delta[" << i << "] = " << delta << std::endl;
}
return 0;
}
Summary: std::array vs vector/deque
Feature | std::array | std::vector | std::deque |
---|---|---|---|
Memory | Stack | Heap (contiguous) | Heap (segmented) |
Size | Fixed at compile time | Dynamic at runtime | Dynamic at runtime |
Allocation speed | Instant | Slow (heap) | Slower (complex alloc) |
Use case | PDE grids, factors | Time series, simulations | Rolling buffers |
Performance | Max | High | Medium |
4. std::list / std::forward_list: Linked Lists
std::list and std::forward_list are the C++ STL's doubly- and singly-linked list implementations. They are rarely used in performance-critical quant finance applications, but they are still part of the container toolkit and may shine in niche situations involving frequent insertions/removals in the middle of large datasets; they are not suited to numeric data or tight loops.
Stack vs Heap: Where Does Data Live?
- Both containers are heap-based: each element is dynamically allocated as a separate node.
- The container object itself (head pointer, size metadata) is on the stack.
- But the actual elements, and their link pointers, are spread across the heap.

Consequences:
- Extremely poor cache locality.
- Each insertion allocates heap memory, which is slow.
- Ideal for predictable insertion/removal patterns, not for numerical processing.
Contiguous Memory? Definitely Not.
Each element is a node containing:
- The value
- One (std::forward_list) or two (std::list) pointers

So memory looks like this:
[Node1] → [Node2] → [Node3] ... (or doubly linked)
You lose any benefit of prefetching or SIMD. Iteration causes pointer chasing, leading to cache misses and latency spikes.
Time Complexity
Operation | Time Complexity | Notes |
---|---|---|
Insert at front | O(1) | Very fast, no shifting |
Insert/remove in middle | O(1) with iterator | No shifting needed |
Random access l[i] | O(n) | No indexing support |
Iteration | O(n), but slow | Poor cache usage |
What's Under the Hood?
Each node contains:
- the value
- a next pointer (and prev for std::list)

So conceptually:
Node {
    T value;
    Node* next; // and prev, if doubly linked
};

- std::list uses two pointers per node.
- std::forward_list is lighter, with just one pointer, but can't iterate backward.
Strengths
- Efficient insertions/removals anywhere: no element shifting like vector.
- Iterator-stable: elements don't move in memory.
- Predictable performance for queue-like workloads with frequent mid-stream changes.
Trade-offs
- Terrible cache locality: each node is scattered across the heap.
- No random access: everything is reached via iteration.
- Heavy memory overhead: often 2–3× the size of your data.
- One heap allocation per node, which is slow.
Quant Finance Use Cases (Rare/Niche)
- Managing a priority-sorted list of limit orders (insert/delete in mid-stream).
- Event systems where exact insertion position matters.
- Custom allocators or object pools in performance-tuned engines.
Usually, in real systems, this is replaced by:
- std::deque for queues
- std::vector for indexed data
- custom intrusive lists if performance really matters
Typical Interview Questions
- "What's the trade-off between std::vector and std::list?"
  - Looking for cache locality vs insert/delete performance.
- "How would you store a stream of operations that require arbitrary insert/delete?"
  - Tests knowledge of when to use linked lists despite their cost.
- "Why is std::list usually a bad idea for numeric workloads?"
  - Looking for insight into heap fragmentation and CPU cache behavior.
Code Example: Linked List of Events
#include <iostream>
#include <list>
struct TradeEvent {
double price;
double quantity;
};
int main() {
std::list<TradeEvent> event_log;
// Simulate inserting trades
event_log.push_back({100.0, 500});
event_log.push_back({101.2, 200});
event_log.push_front({99.5, 300}); // Insert at front
// Iterate through events
for (const auto& e : event_log) {
std::cout << "Price: " << e.price << ", Qty: " << e.quantity << std::endl;
}
return 0;
}
Summary: std::list / std::forward_list
Feature | std::list | std::forward_list |
---|---|---|
Memory | Heap, scattered | Heap, scattered |
Access | No indexing, O(n) | Forward only |
Inserts/removals | O(1) with iterator | O(1), front-only efficient |
Memory overhead | High (2 pointers/node) | Lower (1 pointer/node) |
Cache efficiency | Poor | Poor |
Quant use case | Rare (custom order books) | Lightweight event chains |
Unless you need fast structural changes mid-stream, these containers are rarely used in quant systems. But it’s important to know when and why they exist.