C++ for Quants

Calculate Moving Average in C++ in O(1) – An Interview-Style Problem

by Clement Daubrenet June 26, 2025

A classic interview question for quantitative finance and software engineering roles is:
“Design a data structure that calculates the moving average of a stock price stream in O(1) time per update.”
So how do you calculate the moving average in C++ in an optimal way?

Let’s tackle this with a focus on Microsoft (MSFT) stock, although the solution is applicable to any time-series financial instrument. We’ll use C++ to build an efficient, clean implementation suitable for production-grade quant systems.

1. Problem Statement

Design a class that efficiently calculates the moving average of the last N stock prices.

Your class should support:

  • void addPrice(double price): Adds the latest price.
  • double getAverage(): Returns the average of the last N prices.

Constraints:

  • The moving average must be updated in O(1) time per price.
  • Handle the case where fewer than N prices have been added.

Implement this in C++.

You will need to complete the following code:

#include <vector>


class MovingAverage {
public:
    explicit MovingAverage(int size);
    void addPrice(double price);
    double getAverage() const;
private:
    std::vector<double> buffer;
    int maxSize;
    double sum = 0.0;
};

Imagine the following historical prices for Microsoft stocks:

Day | Price (USD)
1   | 400.0
2   | 402.5
3   | 405.0
4   | 410.0
5   | 412.0
6   | 415.5

And assume we want to calculate a 3-day moving average, updated every day, as a test.

2. A Naive Implementation in O(N)

Let’s start with a first implementation using `std::accumulate`.

It’s defined as follows in the C++ documentation:
“std::accumulate computes the sum of the given value init and the elements in the range [first, last)”. The last iterator is not included in the operation: [first, last) is a half-open interval.

std::accumulate is O(N) in time complexity.

Let’s use it to calculate the MSFT stock price moving average in C++:

#include <iostream>
#include <vector>
#include <numeric>

class MovingAverage {
public:
    explicit MovingAverage(int size) : maxSize(size) {}

    void addPrice(double price) {
        buffer.push_back(price);
        if (buffer.size() > static_cast<size_t>(maxSize)) {
            buffer.erase(buffer.begin()); // O(N)
        }
    }

    double getAverage() const {
        if (buffer.empty()) return 0.0;
        double sum = std::accumulate(buffer.begin(), buffer.end(), 0.0); // O(N)
        return sum / buffer.size();
    }

private:
    std::vector<double> buffer;
    int maxSize;
};

int main() {
    MovingAverage ma(3); // 3-day moving average
    std::vector<double> msftPrices = {400.0, 402.5, 405.0, 410.0, 412.0, 415.5};

    for (size_t i = 0; i < msftPrices.size(); ++i) {
        ma.addPrice(msftPrices[i]);
        std::cout << "Day " << i + 1 << " - Price: " << msftPrices[i]
                  << ", 3-day MA: " << ma.getAverage() << std::endl;
    }

    return 0;
}

Every new day, the moving average is recalculated from scratch: not ideal! But it gives the right results:

➜  build ./movingaverage 
Day 1 - Price: 400, 3-day MA: 400
Day 2 - Price: 402.5, 3-day MA: 401.25
Day 3 - Price: 405, 3-day MA: 402.5
Day 4 - Price: 410, 3-day MA: 405.833
Day 5 - Price: 412, 3-day MA: 409
Day 6 - Price: 415.5, 3-day MA: 412.5

3. An Optimal Implementation in O(1)

Let’s now do it incrementally with a circular buffer: subtract the value that falls out of the window, add the new one, and keep a running sum to calculate a performant moving average in C++:

#include <iostream>
#include <vector>

class MovingAverage {
public:
    explicit MovingAverage(int size)
        : buffer(size, 0.0), maxSize(size) {}

    void addPrice(double price) {
        sum -= buffer[index];        // Subtract the value being overwritten
        sum += price;                // Add the new value
        buffer[index] = price;       // Overwrite the old value
        index = (index + 1) % maxSize;

        if (count < maxSize) count++;
    }

    double getAverage() const {
        return count == 0 ? 0.0 : sum / count;
    }

private:
    std::vector<double> buffer;
    int maxSize;
    int index = 0;
    int count = 0;
    double sum = 0.0;
};

int main() {
    // Example: 3-day moving average for Microsoft stock prices
    MovingAverage ma(3);
    std::vector<double> msftPrices = {400.0, 402.5, 405.0, 410.0, 412.0, 415.5};

    for (size_t i = 0; i < msftPrices.size(); ++i) {
        ma.addPrice(msftPrices[i]);
        std::cout << "Day " << i + 1
                  << " - Price: " << msftPrices[i]
                  << ", 3-day MA: " << ma.getAverage()
                  << std::endl;
    }

    return 0;
}

This time, there is no recalculation: each time a new price comes in, we update the running sum and get the average on the fly.

It’s O(1) per update.

Let’s run it:

➜  build ./movingaverage
Day 1 - Price: 400, 3-day MA: 400
Day 2 - Price: 402.5, 3-day MA: 401.25
Day 3 - Price: 405, 3-day MA: 402.5
Day 4 - Price: 410, 3-day MA: 405.833
Day 5 - Price: 412, 3-day MA: 409
Day 6 - Price: 415.5, 3-day MA: 412.5

The same results, but way faster!

And that’s how you pass the interview: calculating the moving average in C++ in the most performant way.

4. Common STL Container Member Functions & Tips Learned

Syntax / Function | Description | Notes / Gotchas
container.begin() | Returns an iterator to the first element | Use in loops and STL algorithms like accumulate or sort
container.end() | Returns an iterator past the last element | Not included in iteration ([begin, end))
container.size() | Returns the number of elements | Type is size_t (unsigned): be careful when comparing with int
container.empty() | Returns true if the container has no elements | Safer than checking size() == 0
container.front() | Accesses the first element | Undefined behavior if the container is empty
container.back() | Accesses the last element | Also undefined on empty containers
container.push_back(x) | Appends x to the end | O(1) amortized for vector, always O(1) for deque
container.pop_back() | Removes the last element | Unsafe if empty
container.erase(it) | Removes the element at iterator it | O(N) for vector; O(1) only at the ends of a deque
std::accumulate(begin, end, init) | Computes the sum (or a custom op) over a range | Defined in <numeric>; use for totals or averages
std::vector<T> v(n, init) | Creates a vector of size n with all elements initialized to init | Useful for fixed-size buffers
std::deque<T> | Double-ended queue: fast push_front and pop_front | Slower than vector for random access, but great for sliding windows (see the sketch below)

🧠 Bonus Tips:

  • Use auto in range-based loops to avoid type headaches: for (auto val : vec) { ... }
  • Use v[i] only when you’re sure i < v.size() — or you’ll hit UB (undefined behavior).
  • Avoid vector.erase(begin()) in performance-critical code — it’s O(N).
  • Prefer std::vector::reserve() if you’re going to fill it all at once (not needed in circular buffer use).
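
To make the std::deque tip concrete, here is a minimal sketch (a variant of the solution above, not part of the original interview answer) of the same MovingAverage interface built on std::deque: push_back/pop_front keep every update O(1) without managing a circular index.

#include <deque>
#include <iostream>

// Sketch: same interface as above, but the window is a std::deque instead of a
// circular buffer. The running sum is updated as prices enter and leave the window.
class DequeMovingAverage {
public:
    explicit DequeMovingAverage(int size) : maxSize(size) {}

    void addPrice(double price) {
        window.push_back(price);
        sum += price;
        if (window.size() > static_cast<size_t>(maxSize)) {
            sum -= window.front();   // drop the oldest price in O(1)
            window.pop_front();
        }
    }

    double getAverage() const {
        return window.empty() ? 0.0 : sum / window.size();
    }

private:
    std::deque<double> window;
    int maxSize;
    double sum = 0.0;
};

int main() {
    DequeMovingAverage ma(3);
    for (double p : {400.0, 402.5, 405.0, 410.0, 412.0, 415.5}) {
        ma.addPrice(p);
        std::cout << "Price: " << p << ", 3-day MA: " << ma.getAverage() << std::endl;
    }
    return 0;
}

The running totals are identical to the circular-buffer version; the trade-off is slightly worse cache locality in exchange for simpler bookkeeping.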

C++ for Performance: 5 Ideas to Speed Up Your Quantitative Code

by Clement Daubrenet June 26, 2025

In quantitative finance, milliseconds can mean millions. Whether you’re pricing exotic derivatives, processing high-frequency trades, or running Monte Carlo simulations, performance is non-negotiable. C++ remains the go-to language for building ultra-fast systems thanks to its low-level control and fine-tuned memory management. What are 5 tricks to optimize C++ for performance?

1. Prefer Stack Allocation Over Heap

Heap allocations (new/delete) are costly due to the overhead of dynamic memory management and potential fragmentation. Stack allocation, on the other hand, is faster, safer, and automatically cleaned up when the scope ends:

Stack vs. Heap Memory Table

Parameter | Stack | Heap
Data type structure | Linear (LIFO: Last In, First Out) | Hierarchical access possible; no fixed structure
Basic allocation | Memory is allocated contiguously and sequentially | Memory can be contiguous, but is not guaranteed (depends on allocator and fragmentation)
Allocation & deallocation | Automatic by compiler (on function entry/exit) | Manual (new/delete, malloc/free) or via smart pointers
Cost | Very low overhead | Higher overhead due to allocator logic and possible fragmentation
Limit of space size | Fixed limit per thread (set by OS, typically MBs) | Limited by total available system memory
Access time | Very fast (predictable layout, cache-friendly) | Slower (more indirection, potential page faults)
Flexibility | Fixed size, defined at compile time | Dynamically resizable
Size | Typically small | Can grow large (useful for big data structures)
Resize | Not resizable after allocation | Resizable (e.g., with realloc, std::vector::resize())

So if you want speed, choose stack allocation:

// Slower: heap allocation
MyMatrix* mat = new MyMatrix(1000, 1000); 
process(*mat);
delete mat;

// Faster: stack allocation
MyMatrix mat(1000, 1000); 
process(mat);

Use smart pointers or containers only when dynamic allocation is necessary, and favor std::array or std::vector with reserved capacity for fixed-size needs.
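
As a small illustration of that last point (a sketch, not from the original post), a std::array of known size lives entirely on the stack and needs no allocation or cleanup:

#include <array>
#include <iostream>
#include <numeric>

int main() {
    // Fixed-size buffer on the stack: no heap allocation, automatically destroyed at scope exit.
    std::array<double, 5> lastPrices{400.0, 402.5, 405.0, 410.0, 412.0};
    double avg = std::accumulate(lastPrices.begin(), lastPrices.end(), 0.0)
                 / lastPrices.size();
    std::cout << "Average of last 5 prices: " << avg << std::endl;
    return 0;
}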

2. Avoid Virtual Functions in Hot Paths

A virtual function lets C++ decide at runtime which version of a function to call, based on the actual object type, not the pointer or reference type.

Virtual functions use vtables for dynamic dispatch, introducing a level of indirection that prevents inlining and hurts CPU branch prediction.

A vtable (short for virtual table) is a mechanism used by C++ to implement runtime polymorphism — specifically, virtual function calls.

When a class has at least one virtual function, the compiler generates:

  • A vtable: a table of function pointers for that class.
  • A vptr (virtual table pointer): a hidden pointer added to each object instance, pointing to the appropriate vtable.

In tight loops or latency-critical sections, replacing virtual calls with alternatives like templates or function pointers can significantly improve performance.

// Slower: virtual dispatch
struct Instrument {
    virtual double price() const = 0;
};

double sumPrices(const std::vector<Instrument*>& instruments) {
    double total = 0;
    for (const auto* instr : instruments) {
        total += instr->price(); // Virtual call
    }
    return total;
}

Using templates is way more performant:

#include <iostream>
#include <vector>

struct Bond {
    double price() const { return 100.0; }
};

template<typename T>
double sumPrices(const std::vector<T>& instruments) {
    double total = 0;
    for (const auto& instr : instruments) {
        total += instr.price();  // Resolved at compile time, can be inlined
    }
    return total;
}
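
For completeness, here is a short usage sketch; appending this main() to the snippet above makes it runnable, and because the call is resolved at compile time the compiler is free to inline price():

int main() {
    std::vector<Bond> bonds(3);                  // three bonds, each priced at 100.0
    std::cout << sumPrices(bonds) << std::endl;  // prints 300
    return 0;
}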

3. Use reserve() for Vectors

When using std::vector, every time you push back an element beyond its current capacity, it must allocate a new memory block, copy existing elements, and deallocate the old one — which is expensive. In performance-critical paths like simulations or data loading, this overhead adds up quickly.

If you know (or can estimate) the number of elements in advance, call vector.reserve(n) to allocate memory once upfront. This avoids repeated reallocations and boosts speed significantly.

std::vector<double> prices;

// Inefficient: multiple reallocations as vector grows
for (int i = 0; i < 1'000'000; ++i) {
    prices.push_back(i * 0.01);
}

// Better: allocate memory once
std::vector<double> fast_prices;
fast_prices.reserve(1'000'000);  // Preallocate
for (int i = 0; i < 1'000'000; ++i) {
    fast_prices.push_back(i * 0.01);
}

Why Not Always Use the Stack?
The stack is fast because:

  • Allocation/deallocation is automatic.
  • It’s contiguous and cache-friendly.
  • No fragmentation or dynamic bookkeeping.

But it comes with strict limitations: the stack size is limited, and stack arrays are not resizable (their size is fixed at compile time, unless you resort to C99-style VLAs via compiler extensions).

4. Leverage Compiler Optimizations

Modern C++ compilers are incredibly powerful but you have to ask for the performance. By default, they prioritize portability and safety over speed. Turning on aggressive optimization flags like -O2, -O3, -march=native, and -flto enables advanced techniques like loop unrolling, inlining, vectorization, and dead code elimination.

These flags can deliver huge speedups especially for compute-heavy quant workloads like Monte Carlo simulations, matrix operations, or pricing curves.

# Basic optimization
g++ -O2 mycode.cpp -o myapp

# Aggressive + hardware-specific + link-time optimization
g++ -O3 -march=native -flto mycode.cpp -o myapp

🧠 Key Flags:

  • -O2: General optimizations (safe default).
  • -O3: Adds aggressive loop optimizations and inlining.
  • -march=native: Tailors code to your CPU (uses AVX, SSE, etc.).
  • -flto: Link-time optimization — lets compiler optimize across translation units.

⚠️ Use profiling tools like perf, gprof, or valgrind to validate the gains: -O3 can make things faster, but also larger or harder to debug.

5. Minimize Lock Contention

In multi-threaded quant systems, excessive use of std::mutex can serialize threads, causing performance bottlenecks. Lock contention happens when multiple threads fight to acquire the same lock, leading to context switches and degraded latency.

To reduce contention:

  • Keep critical sections short.
  • Use std::atomic for simple shared data.
  • Prefer lock-free structures or per-thread buffers where possible (see the per-thread sketch below).

Example: Avoiding mutex with std::atomic

std::atomic<int> counter = 0;

// Thread-safe increment without a lock
void safeIncrement() {
    counter.fetch_add(1, std::memory_order_relaxed);
}
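
And here is a minimal sketch (my own illustration, not from the original post) of the per-thread buffer idea: each thread accumulates into its own slot and the results are merged once at the end, so threads never contend on a shared lock.

#include <iostream>
#include <thread>
#include <vector>

int main() {
    const int numThreads = 4;
    const int itersPerThread = 1'000'000;
    std::vector<long long> partial(numThreads, 0);  // one slot per thread, no sharing

    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t) {
        workers.emplace_back([&partial, t, itersPerThread] {
            long long local = 0;                    // thread-private accumulator
            for (int i = 0; i < itersPerThread; ++i) {
                local += 1;                         // stand-in for real per-tick work
            }
            partial[t] = local;                     // single write, distinct slot per thread
        });
    }
    for (auto& w : workers) w.join();

    long long total = 0;                            // merge once, after all threads finish
    for (long long p : partial) total += p;
    std::cout << "Total: " << total << std::endl;   // 4000000
    return 0;
}

In a real system you would also pad or align the per-thread slots so adjacent counters don’t share a cache line (false sharing).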



Pricing Futures Using Cost-of-Carry in C++

by Clement Daubrenet June 26, 2025

Futures contracts derive their value from the underlying asset, but their price is not always equal to the spot price. The cost-of-carry model explains this difference by accounting for interest rates, storage costs, and dividends. It’s a fundamental concept in pricing futures and understanding arbitrage relationships. In this article, we’ll break down the formula and implement a clean, flexible version in C++. Whether you’re pricing equity, commodity, or FX futures, this model offers a solid foundation. How to calculate the cost-of-carry in C++?

1. What’s a Future?

A future is a standardized financial contract that obligates the buyer to purchase, or the seller to sell, an underlying asset at a predetermined price on a specified future date.

Unlike forwards, futures are traded on exchanges and are marked to market daily, meaning gains and losses are settled each day until the contract expires. They are commonly used for hedging or speculation across a wide range of assets, including commodities, equities, interest rates, and currencies.

Futures contracts help investors manage risk by locking in prices, and they also play a key role in price discovery in global markets.

2. The Cost-of-Carry Formula

The cost-of-carry model provides a theoretical price for a futures contract based on the current spot price of the asset and the costs (or benefits) of holding the asset until the contract’s expiration. These costs include financing (via interest rates), storage (for physical goods), and dividends or income lost by holding the asset instead of investing it elsewhere.

The formula is:

[math] \Large F_t = S_t \cdot e^{(r + u - q)(T - t)} [/math]

Where:

  • [math] F_t [/math]: Theoretical futures price at time [math] t [/math]
  • [math] S_t [/math]: Spot price of the underlying asset
  • [math] r [/math]: Risk-free interest rate (annualized)
  • [math] u [/math]: Storage cost (as a percentage, annualized)
  • [math] q [/math]: Dividend yield or convenience yield (annualized)
  • [math] T - t [/math]: Time to maturity in years

This formula assumes continuous compounding. In practice:

  • For equity index futures, [math] u = 0 [/math], but [math] q > 0 [/math]
  • For commodities like oil or gold, [math] u > 0 [/math], and [math] q = 0 [/math]
  • For FX futures, [math] r [/math] and [math] q [/math] represent the interest rate differential between two currencies

The cost-of-carry explains why futures can trade at a premium or discount to the spot price, depending on these inputs.

As an example of this cost when the spot price stays the same: with a spot of 100, a 5% risk-free rate, 1% storage cost, 2% dividend yield, and six months to maturity, the theoretical futures price is [math] F = 100 \cdot e^{(0.05 + 0.01 - 0.02) \times 0.5} \approx 102.02 [/math]: a premium to the spot even though the spot itself has not moved.

Now, how do we calculate the cost-of-carry in C++?

3. A Flexible C++ Implementation

Below is a self-contained example that takes spot price, interest rate, storage cost, dividend yield, and time to maturity as inputs and outputs the futures price.

Here’s a complete, ready-to-run C++ program:

#include <iostream>
#include <cmath>

struct FuturesPricingInput {
    double spot_price;
    double risk_free_rate;
    double storage_cost;
    double dividend_yield;
    double time_to_maturity; // in years
};

double price_futures(const FuturesPricingInput& input) {
    double exponent = (input.risk_free_rate + input.storage_cost - input.dividend_yield) * input.time_to_maturity;
    return input.spot_price * std::exp(exponent);
}

int main() {
    FuturesPricingInput input {
        100.0,   // spot_price
        0.05,    // risk_free_rate (5%)
        0.01,    // storage_cost (1%)
        0.02,    // dividend_yield (2%)
        0.5      // time_to_maturity (6 months)
    };

    double futures_price = price_futures(input);
    std::cout << "Futures price: " << futures_price << std::endl;

    return 0;
}

Let’s write that code in a futures.cpp file, create a CMakeLists.txt:

cmake_minimum_required(VERSION 3.10)
project(futures)
set(CMAKE_CXX_STANDARD 17)
add_executable(futures ../futures.cpp)

And compile it:

mkdir build 
cd build
cmake ..
make

The result is the price of our future:

➜  build ./futures 
Futures price: 102.02
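
To connect back to the cases listed in section 2, here is a small sketch (my own extension, not part of the original example) that reuses the same formula for an equity-index-style future (u = 0, q > 0) and a commodity-style future (u > 0, q = 0), showing a discount and a premium to spot respectively:

#include <cmath>
#include <iostream>

// Same cost-of-carry formula as above: F = S * e^{(r + u - q) * T}
double carry_price(double S, double r, double u, double q, double T) {
    return S * std::exp((r + u - q) * T);
}

int main() {
    double S = 100.0, T = 0.5;

    // Equity-index style: no storage cost, 6% dividend yield > 5% rate -> discount to spot
    std::cout << "Equity-style future:    " << carry_price(S, 0.05, 0.00, 0.06, T) << std::endl; // ~99.50

    // Commodity style: 2% storage cost, no yield -> premium to spot
    std::cout << "Commodity-style future: " << carry_price(S, 0.05, 0.02, 0.00, T) << std::endl; // ~103.56

    return 0;
}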

Options Greeks in C++: Derive and Calculate Gamma

by Clement Daubrenet June 26, 2025

Option prices are sensitive to changes in the underlying market, and the Greeks are a set of metrics that describe these sensitivities. Among them, Gamma measures how fast Delta (the sensitivity of the option price to the underlying price) changes as the underlying price itself changes. So, how do we calculate Gamma?

In this article, we’ll focus on deriving the Gamma of a European option under the Black-Scholes model and implementing its calculation in C++. Whether you’re building a pricing engine or refining a hedging strategy, Gamma plays a key role in capturing the curvature of your position’s risk profile.

1. What is Gamma?

Gamma is one of the key Greeks in options trading. It measures the rate of change of Delta with respect to changes in the underlying asset price.

In other words:

  • Delta tells you how much the option price changes when the underlying asset moves by a small amount.
  • Gamma tells you how much Delta itself changes when the underlying price moves.

Interpretation:

  • High Gamma means Delta is very sensitive to price changes — common for near-the-money options with short time to expiry.
  • Low Gamma indicates Delta is more stable — typical for deep in-the-money or far out-of-the-money options.

2. The Gamma Formula

In the context of options pricing, Gamma arises naturally from the Black-Scholes Partial Differential Equation, which describes how the value of an option V(S,t) evolves with respect to the underlying asset price S and time t:

[math]\Large \frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0[/math]

In this equation:

  • [math]\frac{\partial V}{\partial S}[/math] is Delta
  • [math]\frac{\partial^2 V}{\partial S^2}[/math] is Gamma
  • [math]r[/math] is the risk-free rate
  • [math]\sigma[/math] is the volatility

From the Black-Scholes PDE, we isolate Gamma as the second derivative of the option value with respect to the underlying price. Under the Black-Scholes model, the closed-form expression for Gamma is:

[math]\Large \Gamma = \frac{N'(d_1)}{S \cdot \sigma \cdot \sqrt{T}}[/math]

Where:

  • [math]N'(d_1)[/math] is the standard normal probability density at [math]d_1[/math]
  • [math]S[/math] is the spot price of the underlying asset
  • [math]\sigma[/math] is the volatility
  • [math]T[/math] is the time to expiration

To derive the closed-form for Gamma, we start from the Black-Scholes formula for the price of a European call option:

[math]\Large C = S N(d_1) - K e^{-rT} N(d_2)[/math]

Here, the dependence on the spot price [math]S[/math] is explicit in both [math]S N(d_1)[/math] and [math]d_1[/math] itself, which is a function of [math]S[/math]:

[math]\Large d_1 = \frac{\ln(S/K) + (r + \frac{1}{2} \sigma^2)T}{\sigma \sqrt{T}}[/math]

To compute Gamma, we take the second partial derivative of [math]C[/math] with respect to [math]S[/math]. This involves applying the chain rule twice due to [math]d_1[/math]’s dependence on [math]S[/math].

After simplification (and canceling terms involving [math]d_2[/math]), what remains is:

[math]\Large \Gamma = \frac{N'(d_1)}{S \cdot \sigma \cdot \sqrt{T}}[/math]

This elegant result shows that Gamma depends only on the standard normal density at [math]d_1[/math], scaled by spot price, volatility, and time.

3. Implementation in Vanilla C++

Now that we’ve derived the closed-form formula for Gamma, let’s implement it in C++. The following code calculates the Gamma of a European option using the Black-Scholes model. It includes:

  • A helper to compute the standard normal PDF
  • A function to compute d1
  • A gamma function implementing the formula
  • A main() function that runs an example with sample inputs

Here’s the complete implementation to calculate gamma:

#include <iostream>
#include <cmath>

// Standard normal probability density function
double norm_pdf(double x) {
    return (1.0 / std::sqrt(2 * M_PI)) * std::exp(-0.5 * x * x);
}

// Compute d1 used in the Black-Scholes formula
double compute_d1(double S, double K, double r, double sigma, double T) {
    return (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
}

// Compute Gamma using the closed-form Black-Scholes formula
double gamma(double S, double K, double r, double sigma, double T) {
    double d1 = compute_d1(S, K, r, sigma, T);
    return norm_pdf(d1) / (S * sigma * std::sqrt(T));
}

int main() {
    double S = 100.0;     // Spot price
    double K = 100.0;     // Strike price
    double r = 0.05;      // Risk-free interest rate
    double sigma = 0.2;   // Volatility
    double T = 1.0;       // Time to maturity in years

    double gamma_val = gamma(S, K, r, sigma, T);
    std::cout << "Gamma: " << gamma_val << std::endl;

    return 0;
}

After compilation, running the executable will calculate gamma:

➜  build ./gamma        
Gamma: 0.018762
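
As a quick sanity check (not in the original post), the closed-form value can be compared against a central finite-difference approximation of the second derivative of the call price; with the same inputs it lands very close to 0.018762:

#include <cmath>
#include <iostream>

// Standard normal CDF via the complementary error function
double norm_cdf(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Black-Scholes price of a European call
double call_price(double S, double K, double r, double sigma, double T) {
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
}

int main() {
    double S = 100.0, K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
    double h = 0.5;  // bump size for the central difference

    // Gamma ~ (C(S+h) - 2*C(S) + C(S-h)) / h^2
    double fd_gamma = (call_price(S + h, K, r, sigma, T)
                       - 2.0 * call_price(S, K, r, sigma, T)
                       + call_price(S - h, K, r, sigma, T)) / (h * h);

    std::cout << "Finite-difference Gamma: " << fd_gamma << std::endl; // ~0.01876
    return 0;
}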

Using Automatic Differentiation for Greeks Computation in C++

by Clement Daubrenet June 25, 2025

Traders and risk managers rely on accurate values for Delta, Gamma, Vega, and other Greeks to hedge portfolios and manage exposure. Today, we’re going to see how to use automatic differentiation for greeks calculation.

Traditionally, these sensitivities are calculated using finite difference methods, which approximate derivatives numerically. While easy to implement, this approach suffers from trade-offs between precision, performance, and stability. Enter Automatic Differentiation (AD): a modern technique that computes derivatives with machine precision and efficient performance.

In this article, we explore how to compute Greeks using automatic differentiation in C++ using the lightweight AD library Adept.

1. Background: Greeks and Finite Differences

The Delta of a derivative is the rate of change of its price with respect to the underlying asset price:

[math] \Large {\Delta = \displaystyle \frac{\partial V}{\partial S}} [/math]

With finite differences, we might compute:

[math] \Large { \Delta \approx \frac{V(S + \epsilon) - V(S)}{\epsilon} } [/math]

This introduces:

  • Truncation error if ε is too large,
  • Rounding error if ε is too small,
  • And 2N evaluations for N Greeks (see the sketch below).
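
To make these trade-offs concrete, here is a minimal sketch (assuming the usual Black-Scholes call and the same inputs used later in this article) of the bump-and-reprice Delta that AD will replace:

#include <cmath>
#include <iostream>

// Standard normal CDF via the complementary error function
double norm_cdf(double x) {
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Black-Scholes price of a European call
double call_price(double S, double K, double r, double sigma, double T) {
    double d1 = (std::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    return S * norm_cdf(d1) - K * std::exp(-r * T) * norm_cdf(d2);
}

int main() {
    double S = 105.0, K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
    double eps = 1e-4;  // too large -> truncation error, too small -> rounding error

    double delta = (call_price(S + eps, K, r, sigma, T) - call_price(S, K, r, sigma, T)) / eps;
    std::cout << "Finite-difference Delta: " << delta << std::endl; // ~0.7237
    return 0;
}

Two full pricings for a single Greek, and the answer still depends on the choice of eps; the AD version below gets the exact derivative from one taped evaluation.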

2. Automatic Differentiation: Another Way

Automatic Differentiation (AD) is a technique for computing exact derivatives of functions expressed as computer programs. It is not the same as:

  • Numerical differentiation (e.g., finite differences), which approximates derivatives and is prone to rounding/truncation errors.
  • Symbolic differentiation (like in computer algebra systems), which manipulates expressions analytically but can become unwieldy and inefficient.

AD instead works by decomposing functions into elementary operations and systematically applying the chain rule to compute derivatives alongside function evaluation.

🔄 How It Works

  1. Operator Overloading (in C++)
    AD libraries in C++ (like Adept or CppAD) define a custom numeric type—say, adouble—which wraps a real number and tracks how it was computed. Arithmetic operations (+, *, sin, exp, etc.) are overloaded to record both the value and the derivative.
  2. Taping
    During function execution, the AD library records (“tapes”) every elementary operation (e.g., x * y, log(x), etc.) into a computational graph. Each node stores the local derivative of the output with respect to its inputs.
  3. Chain Rule Application
    Once the function is evaluated, AD applies the chain rule through the computational graph to compute the final derivatives.
    • Forward mode AD: computes derivatives with respect to one or more inputs.
    • Reverse mode AD: computes derivatives of one output with respect to many inputs (more efficient for scalar-valued functions with many inputs, like in ML or risk models).

🎯 Why It’s Powerful

  • Exact to machine precision (unlike finite differences)
  • Fast and efficient: typically only 5–10x the cost of the original function
  • Compositional: works on any function composed of differentiable primitives, no matter how complex

📦 In C++ Quant Contexts

AD shines in quant applications where:

  • Analytical derivatives are difficult to compute or unavailable (e.g., path-dependent or exotic derivatives),
  • Speed and accuracy matter (e.g., calibration loops or sensitivity surfaces),
  • You want to avoid hard-coding Greeks and re-deriving math manually.

3. C++ Implementation Using CppAD

Install CppAD and link it with your C++ project:

git clone https://github.com/coin-or/CppAD.git
cd CppAD
mkdir build
cd build
cmake ..
make
make install

    Then link the library to your CMakeLists.txt:

    cmake_minimum_required(VERSION 3.10)
    project(CppADGreeks)
    
    set(CMAKE_CXX_STANDARD 17)
    
    # Find and link CppAD
    find_path(CPPAD_INCLUDE_DIR cppad/cppad.hpp PATHS /usr/local/include)
    find_library(CPPAD_LIBRARY NAMES cppad_lib PATHS /usr/local/lib)
    
    if (NOT CPPAD_INCLUDE_DIR OR NOT CPPAD_LIBRARY)
        message(FATAL_ERROR "CppAD not found")
    endif()
    
    include_directories(${CPPAD_INCLUDE_DIR})
    link_libraries(${CPPAD_LIBRARY})
    
    # Example executable
    add_executable(automaticdiff ../automaticdiff.cpp)
    target_link_libraries(automaticdiff ${CPPAD_LIBRARY})

    And the following code is in my automaticdiff.cpp:

    #include <cppad/cppad.hpp>
    #include <iostream>
    #include <vector>
    #include <cmath>
    
    template <typename T>
    T norm_cdf(const T& x) {
        return 0.5 * CppAD::erfc(-x / std::sqrt(2.0));
    }
    
    int main() {
        using CppAD::AD;
    
        std::vector<AD<double>> X(1);
        X[0] = 105.0;  // Spot price
    
        CppAD::Independent(X);
    
        double K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
        AD<double> S = X[0];
    
        AD<double> d1 = (CppAD::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
        AD<double> d2 = d1 - sigma * std::sqrt(T);
    
        AD<double> price = S * norm_cdf(d1) - K * CppAD::exp(-r * T) * norm_cdf(d2);
    
        std::vector<AD<double>> Y(1);
        Y[0] = price;
    
        CppAD::ADFun<double> f(X, Y);
    
        std::vector<double> x = {105.0};
        std::vector<double> delta = f.Jacobian(x);
    
        std::cout << "Call Price: " << CppAD::Value(price) << std::endl;
        std::cout << "Delta (∂Price/∂S): " << delta[0] << std::endl;
    
        return 0;
    }
    

    After compiling and running this code, we get a delta:

    ➜  build ./automaticdiff
    Call Price: 13.8579
    Delta (∂Price/∂S): 0.723727

    4. Explanation of the Code

    Here’s the key part of the code again:

    std::vector<AD<double>> X(1);
    X[0] = 105.0;  // Spot price S
    CppAD::Independent(X);       // Start taping operations
    
    // Constants
    double K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
    AD<double> S = X[0];
    
    // Black-Scholes price formula
    AD<double> d1 = (CppAD::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    AD<double> d2 = d1 - sigma * std::sqrt(T);
    AD<double> price = S * norm_cdf(d1) - K * CppAD::exp(-r * T) * norm_cdf(d2);
    
    std::vector<AD<double>> Y(1); Y[0] = price;
    CppAD::ADFun<double> f(X, Y);  // Create a differentiable function f: S ↦ price
    
    std::vector<double> x = {105.0};
    std::vector<double> delta = f.Jacobian(x);  // Get ∂price/∂S

    ✅ What This Code Does (Conceptually)

    Step 1 – You Define a Function:

    The code defines a function f(S) = Black-Scholes call price as a function of spot S. But unlike normal code, you define this using AD types (AD<double>), so CppAD can trace the computation.

    Step 2 – CppAD Records All Operations:

    When you run this line:

    CppAD::Independent(X);

    CppAD starts building a computational graph. Every operation you do on S (which is X[0]) is recorded:

    • log(S / K)
    • d1
    • d2
    • norm_cdf(d1)
    • …
    • final result price

    This is like taping a math program:
    CppAD now knows how the output (price) was built from the input (S).

    Step 3 – You “seal” the function:

    CppAD::ADFun<double> f(X, Y);

    This finalizes the tape into a function:

    [math] \Large{f: S \mapsto \text{price}} [/math]

    🤖 How AD Gets the Derivative

    Using:

    std::vector<double> delta = f.Jacobian(x);

    CppAD:

    • Evaluates the forward pass to get intermediate values (just like normal code),
    • Then walks backward through the graph, applying the chain rule at each node to compute:

    [math] \Large{ \Delta = \frac{\partial \text{price}}{\partial S} } [/math]

    This is the exact derivative, not an approximation.

    ⚙️ Summary: How the Code Avoids Finite Difference Pitfalls

    Feature | Your CppAD Code | Finite Differences
    Number of evaluations | 1 forward pass + internal backprop | 2+ full evaluations
    Step size tuning? | ❌ None needed | ✅ Must choose ε carefully
    Derivative accuracy | ✅ Machine-accurate | ❌ Approximate
    Performance on multiple Greeks | ✅ Fast with reverse mode | ❌ Expensive (1 per Greek)
    Maintenance cost | ✅ Code reflects math structure | ❌ Rewriting required for each sensitivity


    Building a Yield Curve in C++: Theory and Implementation

    by Clement Daubrenet June 25, 2025

    In fixed income markets, the yield curve is a fundamental tool that maps interest rates to different maturities. It underpins the pricing of bonds, swaps, and other financial instruments. How to build a yield curve in C++?

    Traders, quants, and risk managers rely on it daily to discount future cash flows. This article explores how to build a basic zero-coupon yield curve using C++, starting from market data and ending with interpolated discount factors. We’ll also compare our results with QuantLib, the industry-standard quantitative finance library.

    1. What’s a Yield Curve?

    At its core, the yield curve captures how interest rates change with the length of time you lend money. Here’s an example of hypothetical yields across maturities:

    Maturity | Yield (%)
    Overnight (ON) | 1.00
    1 Month | 1.20
    3 Months | 1.35
    6 Months | 1.50
    1 Year | 1.80
    2 Years | 2.10
    5 Years | 2.60
    10 Years | 3.00
    30 Years | 3.20

    Plotted on a graph, these points form the yield curve — typically upward sloping, as longer maturities usually command higher yields due to risk and time value of money.

    Regular Yield Curve

    However, in stressed environments, the curve can flatten or even invert (e.g., 2-year yield > 10-year yield), often signaling economic uncertainty:

    Inverted Yield Curve

    Here is the plot of a hypothetical inverted yield curve, where shorter-term yields are higher than longer-term ones. This often reflects market expectations of economic slowdown or interest rate cuts in the future.

    When investors expect future economic weakness, they begin to anticipate interest rate cuts by the central bank. As a result:

    • Demand increases for long-term bonds (seen as safe havens), which pushes their prices up and their yields down.
    • Meanwhile, short-term rates may remain high due to current central bank policy (e.g. inflation control).

    This flips the curve: short-term yields become higher than long-term yields.

    2. Key Concepts Behind Yield Curve Construction

    Before jumping into code, it’s important to understand how yield curves are built from market data. In practice, we don’t observe a complete yield curve directly, we build (or “bootstrap”) it from liquid instruments such as:

    • Deposits (e.g., overnight to 6 months)
    • Forward Rate Agreements (FRAs) and Futures
    • Swaps (e.g., 1-year to 30-year)

    These instruments give us information about specific points on the curve. QuantLib uses these to construct a continuous curve using interpolation (e.g., linear, log-linear) between observed data points.

    Some essential building blocks in QuantLib include:

    • RateHelpers: Abstractions that turn market quotes (like deposit or swap rates) into bootstrapping constraints.
    • DayCount conventions and Calendars: Needed for accurate date and interest calculations.
    • YieldTermStructure: The central object representing the term structure of interest rates, which you can query for zero rates, discount factors, or forward rates.

    QuantLib lets you define all of this in a modular way, so you can plug in market data and generate an accurate, arbitrage-free curve for pricing and risk analysis.

    3. Implement in C++ with Quantlib

    To construct a yield curve in QuantLib, you’ll typically bootstrap it from a mix of deposit rates and swaps. QuantLib provides a flexible interface to handle real-world instruments and interpolate the curve. Below is a minimal C++ example that constructs a USD zero curve from deposit and swap rates:

    #include <ql/quantlib.hpp>
    #include <iostream>
    #include <iomanip>   // for std::setprecision
    
    using namespace QuantLib;
    
    int main() {
        Calendar calendar = UnitedStates(UnitedStates::GovernmentBond);
        Date today(25, June, 2025);
        Settings::instance().evaluationDate() = today;
    
        // Market quotes
        Rate depositRate = 0.015; // 1.5% for 3M deposit
        Rate swapRate5Y = 0.025;  // 2.5% for 5Y swap
    
        // Quote handles
        Handle<Quote> dRate(boost::make_shared<SimpleQuote>(depositRate));
        Handle<Quote> sRate(boost::make_shared<SimpleQuote>(swapRate5Y));
    
        // Day counters and conventions
        DayCounter depositDayCount = Actual360();
        DayCounter curveDayCount = Actual365Fixed();
        Thirty360 swapDayCount(Thirty360::BondBasis);
    
        // Instrument helpers
        std::vector<boost::shared_ptr<RateHelper>> instruments;
    
        instruments.push_back(boost::make_shared<DepositRateHelper>(
            dRate, 3 * Months, 2, calendar, ModifiedFollowing, false, depositDayCount));
    
        instruments.push_back(boost::make_shared<SwapRateHelper>(
            sRate, 5 * Years, calendar, Annual, Unadjusted, swapDayCount,
            boost::make_shared<Euribor6M>()));
    
        // Construct the yield curve
        boost::shared_ptr<YieldTermStructure> yieldCurve =
            boost::make_shared<PiecewiseYieldCurve<ZeroYield, Linear>>(
                today, instruments, curveDayCount);
    
        // Output: zero rates and discount factors from 1Y to 5Y
        std::cout << "==== Yield Curve ====" << std::endl;
        for (int y = 1; y <= 5; ++y) {
            Date maturity = calendar.advance(today, y, Years);
            double discount = yieldCurve->discount(maturity);
            double zeroRate = yieldCurve->zeroRate(maturity, curveDayCount, Compounded, Annual).rate();

            std::cout << "Maturity: " << y << "Y\t"
                      << "Yield: " << std::fixed << std::setprecision(2) << 100 * zeroRate << "%\t"
                      << "Discount: " << std::setprecision(5) << discount
                      << std::endl;
        }
    
        return 0;
    }

    This snippet shows how to:

    • Set up today’s date and calendar,
    • Define deposit and swap instruments as bootstrapping inputs,
    • Build a PiecewiseYieldCurve using QuantLib’s helper classes,
    • Query the curve for a discount factor.

    You can easily extend this with more instruments (FRA, futures, longer swaps) for a realistic curve.

    4. Compile, Execute and Plot

    After installing quantlib, I compile my code with the following CMakeLists.txt:

    cmake_minimum_required(VERSION 3.10)
    project(QuantLibImpliedVolExample)
    
    set(CMAKE_CXX_STANDARD 17)
    
    find_package(PkgConfig REQUIRED)
    pkg_check_modules(QUANTLIB REQUIRED QuantLib)
    
    include_directories(${QUANTLIB_INCLUDE_DIRS})
    link_directories(${QUANTLIB_LIBRARY_DIRS})
    
    add_executable(yieldcurve ../yieldcurve.cpp)
    target_link_libraries(yieldcurve ${QUANTLIB_LIBRARIES})

    I run:

    mkdir build
    cd build
    cmake ..
    make

    And run:

    ➜ build ./yieldcurve
    ==== Yield Curve ====
    Maturity: 1Y Yield: 1.68% Discount: 0.98345
    Maturity: 2Y Yield: 1.89% Discount: 0.96324
    Maturity: 3Y Yield: 2.10% Discount: 0.93946
    Maturity: 4Y Yield: 2.31% Discount: 0.91272
    Maturity: 5Y Yield: 2.52% Discount: 0.88306
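
    As a quick sanity check (my own addition, not in the original article), with annually compounded zero rates the discount factor should satisfy DF = (1 + z)^(-T); plugging in the 2Y numbers above reproduces the printed discount factor up to day-count details:

    #include <cmath>
    #include <iostream>

    int main() {
        double z2y = 0.0189;                     // 2Y zero rate from the output above (1.89%)
        double df  = std::pow(1.0 + z2y, -2.0);  // DF = (1 + z)^(-T), with T = 2 years
        std::cout << "Implied 2Y discount factor: " << df << std::endl; // ~0.9632
        return 0;
    }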

    And we can plot it too:

    This is the plot of the zero yield curve from 1 to 5 years, based on the QuantLib output above. It shows a smooth, gently upward-sloping curve: typical for a healthy interest rate environment.

    But why is it so… Linear?

    This is due to both the limited number of inputs and the interpolation method applied.

    In the example above, the curve is built using QuantLib’s PiecewiseYieldCurve<ZeroYield, Linear>, which performs linear interpolation between the zero rates of the provided instruments. With only a short-term and a long-term point, this leads to a straight-line interpolation between the two, hence the linear shape.

    In reality, yield curves typically exhibit curvature:

    • They rise steeply in the short term,
    • Then gradually flatten as maturity increases.

    This reflects the market’s expectations about interest rates, inflation, and economic growth over time.

    To better approximate real-world behavior, the curve can be constructed using:

    • A richer set of instruments (e.g., deposits, futures, swaps across many maturities),
    • More appropriate interpolation techniques such as LogLinear or Cubic.

    5. A More Realistic Curve

    This version switches to Cubic interpolation on the zero rates, which produces a smoother, more natural term structure than straight linear interpolation between points.

    It also incorporates additional instruments, especially swaps with intermediate maturities (2Y, 5Y and 10Y), to pin down more of the curve:

    #include <ql/quantlib.hpp>
    #include <iostream>
    #include <iomanip>
    
    using namespace QuantLib;
    
    int main() {
        Calendar calendar = UnitedStates(UnitedStates::GovernmentBond);
        Date today(25, June, 2025);
        Settings::instance().evaluationDate() = today;
    
        // Market quotes
        Rate depositRate = 0.015;    // 1.5% for 3M deposit
        Rate swapRate2Y = 0.020;     // 2.0% for 2Y swap
        Rate swapRate5Y = 0.025;     // 2.5% for 5Y swap
        Rate swapRate10Y = 0.030;    // 3.0% for 10Y swap
    
        // Quote handles
        Handle<Quote> dRate(boost::make_shared<SimpleQuote>(depositRate));
        Handle<Quote> sRate2Y(boost::make_shared<SimpleQuote>(swapRate2Y));
        Handle<Quote> sRate5Y(boost::make_shared<SimpleQuote>(swapRate5Y));
        Handle<Quote> sRate10Y(boost::make_shared<SimpleQuote>(swapRate10Y));
    
        // Day counters and conventions
        DayCounter depositDayCount = Actual360();
        DayCounter curveDayCount = Actual365Fixed();
        Thirty360 swapDayCount(Thirty360::BondBasis);
    
        // Instrument helpers
        std::vector<boost::shared_ptr<RateHelper>> instruments;
    
        instruments.push_back(boost::make_shared<DepositRateHelper>(
            dRate, 3 * Months, 2, calendar, ModifiedFollowing, false, depositDayCount));
    
        instruments.push_back(boost::make_shared<SwapRateHelper>(
            sRate2Y, 2 * Years, calendar, Annual, Unadjusted, swapDayCount,
            boost::make_shared<Euribor6M>()));
    
        instruments.push_back(boost::make_shared<SwapRateHelper>(
            sRate5Y, 5 * Years, calendar, Annual, Unadjusted, swapDayCount,
            boost::make_shared<Euribor6M>()));
    
        instruments.push_back(boost::make_shared<SwapRateHelper>(
            sRate10Y, 10 * Years, calendar, Annual, Unadjusted, swapDayCount,
            boost::make_shared<Euribor6M>()));
    
        // Construct the yield curve with Cubic interpolation
        boost::shared_ptr<YieldTermStructure> yieldCurve =
            boost::make_shared<PiecewiseYieldCurve<ZeroYield, Cubic>>(
                today, instruments, curveDayCount);
    
        // Output: yield curve at each year from 1Y to 10Y
        std::cout << "==== Yield Curve ====" << std::endl;
        for (int y = 1; y <= 10; ++y) {
            Date maturity = calendar.advance(today, y, Years);
            double discount = yieldCurve->discount(maturity);
            double zeroRate = yieldCurve->zeroRate(maturity, curveDayCount, Compounded, Annual).rate();
    
            std::cout << "Maturity: " << y << "Y\t"
                      << "Yield: " << std::fixed << std::setprecision(2) << 100 * zeroRate << "%\t"
                      << "Discount: " << std::setprecision(5) << discount
                      << std::endl;
        }
    
        return 0;
    }

    And let’s run it again:

    ➜  build ./yieldcurve
    ==== Yield Curve ====
    Maturity: 1Y    Yield: 1.67%    Discount: 0.98355
    Maturity: 2Y    Yield: 2.00%    Discount: 0.96122
    Maturity: 3Y    Yield: 2.20%    Discount: 0.93680
    Maturity: 4Y    Yield: 2.37%    Discount: 0.91058
    Maturity: 5Y    Yield: 2.51%    Discount: 0.88320
    Maturity: 6Y    Yield: 2.64%    Discount: 0.85524
    Maturity: 7Y    Yield: 2.75%    Discount: 0.82674
    Maturity: 8Y    Yield: 2.86%    Discount: 0.79790
    Maturity: 9Y    Yield: 2.96%    Discount: 0.76912
    Maturity: 10Y   Yield: 3.05%    Discount: 0.74018

    Which gives a better looking yield curve:


    Top Cities for High-Paying C++ Quantitative Roles

    by Clement Daubrenet June 25, 2025

    If you’re a skilled C++ developer with an interest in high-performance finance, the world of quantitative trading offers some of the most lucrative roles available today. These positions, often found at hedge funds, proprietary trading firms, and investment banks, demand a blend of low-latency programming expertise, mathematical insight, and real-time systems knowledge. While demand exists globally, a handful of cities stand out for consistently offering the highest compensation packages.

    From Wall Street to Canary Wharf, certain global financial hubs continue to dominate the quant talent market. These cities not only house the world’s top funds and trading desks but also offer competitive salaries, generous bonuses, and exposure to cutting-edge infrastructure. In this article, we break down the top five cities for high-paying C++ quantitative roles, supported by up-to-date salary data and market trends.

    1. New York City, USA

    Here’s a detailed look at New York City, the top destination globally for high-paying C++ quantitative developer roles:

    💰 Salary Ranges & Market Averages

    • Built In reports the average base salary for a Quant Developer in NYC is $326,667, with additional cash compensation of $50,000, bringing average total comp to $376,667, with a typical range of $180k–$500k.
    • Glassdoor estimates total compensation at $242,376 annually, with an average base salary around $138,351, likely reflecting more mid-career roles.
    • ZipRecruiter lists the average at ~$185,700/year (~$89/hour as of June 2025).

    🏢 Top Firms & Roles

    • HFT firms (e.g., Jane Street, Citadel, Renaissance) often offer $250k–$300k base for C++ quant devs, with bonuses regularly doubling take-home pay.
    • Citadel-level roles report total comp from $200k to $700k, with a median of $550k for Quant Devs in NYC.
    • Selby Jennings lists openings with salary bands of $300k–$400k.
    • Other firms like Xantium, Barclays, and Bloomberg offer ranges from $155k to $300k+, often with discretionary bonuses.

    🔍 Role Levels & Progression

    • Entry and mid-level Quant Dev roles typically range from $150k to $225k base, often with significant bonuses in year one and thereafter.
    • Senior/front-office developers in top firms frequently see base pay ≥ $250k, and total comp that can push $500k+.

    🧠 Compensation Structure

    • Compensation is a blend of base + bonus + potential stock/equity. Bonuses can equal or exceed base salaries, especially in top-tier firms.
    • Reddit users note: “Quant Developers: $300k–1M … top graduates … guarantees in excess of $500k for the 1st year” at elite shops.

    📍 Role Types & Relevance of C++

    • Most high-paying roles are C++-centric, especially in low-latency trading, execution systems, and infrastructure for quant analytics.
    • Job listings for C++ Low Latency Trading Systems Dev show base pay typically $150k–$300k, with bonus upsides.

    🧩 👉 Summary Snapshot

    Role Level | Base Pay | Total Compensation
    Entry–Mid (Hedge/Fund) | $150k–$225k | $225k–$350k (with bonus)
    Senior/Elite | $250k–$350k+ | $400k–$700k+ (with bonus/equity)
    Average (mid-career) | $326k base | $376k total comp

    Bottom line:
    New York City remains the gold standard for C++ quantitative developers. At top-tier firms, you’ll see base salaries of $250k+ and total compensation reaching $500k–$700k, especially at hedge funds and prop trading shops. Even outside the elite circles, mid-tier roles offer $150k–$200k base with solid bonus structures. C++ expertise in low-latency systems is an exceptionally valued skill in this market.

    2. San Francisco Bay Area / Silicon Valley, USA

    Here’s a detailed exploration of San Francisco Bay Area / Silicon Valley, demonstrating why it ranks as one of the top-paying regions for C++ quantitative developers:

    💼 Salary Overview for Quantitative Developers

    • Indeed reports the average salary for a Quantitative Developer in San Francisco itself at $196,116/year.
    • ZipRecruiter lists the rate at $199,978/year, or about $96.14/hour, as of April 2025.
    • Glassdoor shows a median total compensation around $306,000/year, with typical base pay between $131k–$173k and additional compensation of $116k–$217k.

    📊 Comparison with California & National Averages

    • These San Francisco figures exceed the California Quant Dev average of $167,506/year (~$80.53/hr).
    • Built In’s national data shows an average base of around $196k and total comp around $248k, placing Bay Area roles significantly above the national mean.

    🛠️ C++ Developer Base Salaries

    • Indeed reports average base pay for general C++ Developers in San Francisco at $177,272/year, with a range from $123k to $254k.
    • Salary.com notes senior C++ Developer base ranges from $177,904 to $212,377, averaging $193,584.

    🚀 Silicon Valley Premium

    • C++ developer salaries in the broader Bay Area average $162k, with wide dispersion—from $95k to $460k—reflecting startup equity upside.
    • San Jose (next to SF) offers higher comps for Quant Dev roles: total compensation averages $429,654/year, with base around $225,027/year.

    💵 Total Compensation Breakdown

    Role Type | Base Pay Range | Total Comp Range
    Quant Dev (SF) | $131k–$173k | $248k–$390k+
    General C++ Dev (SF) | $123k–$212k | —
    Quant Dev (San Jose) | ~$225k (base) | ~$430k total avg

    ⚙️ Role Types & C++ Relevance

    High-paying roles typically emphasize low-latency C++ engineering within algorithmic trading engines, pricing libraries, and high-performance analytics platforms.

    • Compensation structure includes base + cash bonus + sometimes stock/equity, especially at startups and tech-influenced trading shops.

    🎯 Bottom Line

    • Base pay for Bay Area quant C++ roles ranges $130k–$225k+, depending on seniority and location.
    • Total compensation regularly reaches $250k–$400k in San Francisco, with $430k+ in San Jose, especially at elite or startup-oriented firms.

    3. London, UK

    💷 Salary Range – Base & Total Compensation

    • Payscale indicates the average base salary for Quant Developers with C++ experience in London is £65k–£100k, with total pay (including bonus) ranging from £70k–£125k.
    • Morgan McKinley reports base ranges for Quant Developers are £105k–£150k overall, breaking down as:
      • £70k–£100k for 0–3 years
      • £105k–£150k for 3–5 years
      • £150k–£195k for 5+ years.
    • Glassdoor cites average base at £90,339 with total compensation around £127,470.

    📈 Premium Compensation for C++ & HFT Roles

    • Listings from eFinancialCareers for top quant firms show up to £200k base plus bonus.
    • Specialized roles offering £130k–£140k base + £50k–£70k bonus appear regularly.
    • Oxford Knight advertises C++ quant developer roles in systematic equities with £150k–£350k total compensation.

    🎯 Senior & Front-Office Engineer Earnings

    • Roles in hedge funds and high-frequency trading often target senior candidates with £150k–£350k total compensation, emphasizing front-office C++ expertise.
    • Client Server listings feature C++/Python Quant Dev roles with £110k–£175k base, plus potential bonuses worth multiple base salaries.

    🧩 Skill Demand & Market Pressure

    • ITJobsWatch data shows the UK median for Quant Developer roles is £140k, with London’s median at £150k.
    • C++ and low-latency expertise feature prominently in job ads (~60% mention C++), especially within hedge funds and algorithmic trading shops.

    ⚖️ Junior to Senior Progression Path

    Experience Level | Base Pay | Total Compensation
    Entry (0–3 yrs) | £65k–£100k | £70k–£125k
    Mid (3–5 yrs) | £105k–£150k | £150k–£200k
    Senior (5+ yrs) | £150k–£200k+ | £200k–£350k+

    • Bonuses often range from £20k to £70k+, and in top-tier roles they can double base compensation.

    🧭 Why London Holds Its Ground

    • As a major global financial hub, London hosts numerous hedge funds, trading desks, and investment banks that depend on low-latency C++ infrastructure.
    • Market demand remains strong, with job vacancies growing and salaries up ~9% year-on-year, according to ITJobsWatch.

    ✅ Summary Insight

    London offers compelling opportunities for C++ quantitative developers:

    • Base salaries typically: £65k–£150k, rising with seniority.
    • Total compensation often ranges from £100k to £350k+ at elite firms.
    • Top-tier roles at HFT/hedge funds pay aggressively, reflecting C++’s strategic value in low-latency systems.

    C++ Sequential Containers for Quants: Speed and Memory

    by Clement Daubrenet June 25, 2025

    Sequential containers in C++ store elements in a linear order, where each element is positioned relative to the one before or after it. They’re designed for efficient iteration, fast access, and are typically used when element order matters, such as in time-based data.

    The most commonly used sequential containers in the STL are:

    • std::vector: dynamic array with contiguous memory
    • std::deque: double-ended queue with fast insertions/removals at both ends
    • std::array: fixed-size array on the stack
    • std::list / std::forward_list: linked lists

    In quantitative finance, sequential containers are often at the heart of systems that process tick data, historical time series, price paths, and rolling windows. When processing millions of data points per day, the choice of container directly impacts latency, memory usage, and cache performance.

    1. std::vector: The Workhorse

    std::vector is a dynamic array container in C++. It’s one of the most performance-critical tools in a quant developer’s toolbox — especially for time-series data, simulation paths, and numerical computations.

    🧠 Stack vs Heap: The Memory Model

    In C++, memory is managed in two primary regions:

    Memory Type | Description | Characteristics
    Stack | Automatically managed, fast, small | Function-local variables, fixed size
    Heap | Manually or dynamically managed | Used for dynamic memory like new, malloc, or STL containers

    🔸 A std::vector behaves as follows:

    • The vector object itself (pointer, size, capacity) lives on the stack.
    • The elements it stores live in a dynamically allocated heap buffer.

    💡 Consequences:

    • You can create a std::vector of size 1 million without stack overflow.
    • But heap allocation is slower than stack and can cause fragmentation if misused.
    • When passed by value, only the stack-held metadata is copied (until you access elements).

    📦 Contiguous Memory Layout

    A std::vector stores all its elements in a single, contiguous block of memory on the heap — just like a C-style array.

    Why this matters:

    • CPU cache efficiency: Memory is prefetched in blocks — sequential access is blazing fast.
    • Enables pointer arithmetic, SIMD (Single Instruction Multiple Data), and direct interfacing with C APIs.

    💥 For quants: When you’re processing millions of floats/doubles (e.g., price histories, simulation paths), contiguous memory layout can be a 10× performance multiplier vs pointer-chasing containers like std::list.

    ⏱ Time Complexity

    Operation | Time Complexity | Notes
    v[i] access | O(1) | True random access due to array layout
    push_back (append) | Amortized O(1) | Reallocates occasionally
    Insertion (front/mid) | O(n) | Requires shifting elements
    Resize | O(n) | Copy and reallocate

    ⚠️ When capacity is exhausted, std::vector reallocates, usually doubling its capacity and copying all elements — which invalidates pointers and iterators.
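
    A small illustrative sketch (not from the original article) makes this visible: printing the capacity each time it changes shows the reallocation points, which are exactly the moments when iterators and pointers into the vector are invalidated.

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<double> v;
        std::size_t last_cap = v.capacity();

        for (int i = 0; i < 100; ++i) {
            v.push_back(i);
            if (v.capacity() != last_cap) {   // a reallocation just happened
                last_cap = v.capacity();
                std::cout << "size=" << v.size() << " capacity=" << last_cap << std::endl;
            }
        }
        return 0;
    }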

    💻 Example: Using std::vector for Rolling Average on Price Time Series

    #include <iostream>  // for std::cout
    #include <vector>
    #include <numeric>   // for std::accumulate
    
    int main() {
        // Simulated price time series (e.g. 1 price per second)
        std::vector<double> prices;
    
        // Simulate appending tick data
        for (int i = 0; i < 10'000'000; ++i) {
            prices.push_back(100.0 + 0.01 * (i % 100)); // Fake tick data
        }
    
        // Compute rolling average over the last 5 prices
        size_t window_size = 5;
        std::vector<double> rolling_avg;
    
        for (size_t i = window_size; i < prices.size(); ++i) {
            double sum = std::accumulate(prices.begin() + i - window_size, prices.begin() + i, 0.0);
            rolling_avg.push_back(sum / window_size);
        }
    
        // Print the last rolling average
        std::cout << "Last rolling avg: " << rolling_avg.back() << std::endl;
    
        return 0;
    }

    🧠 Key Points:

    • std::vector<double> stores prices in contiguous heap memory, enabling fast iteration.
    • push_back appends elements efficiently until capacity needs to grow.
    • Access via prices[i] or iterators is cache-friendly.
    • This method is fast but not optimal for rolling computations; the next section shows how std::deque (or a circular buffer) improves on it.

    2. std::deque: Double-Ended Queue

    std::deque stands for “double-ended queue” — a general-purpose container that supports efficient insertion and deletion at both ends. Unlike std::vector, which shines in random access, std::deque is optimized for dynamic FIFO (first-in, first-out) or sliding window patterns.

    🧠 Stack vs Heap: Where Does Data Live?

    • Like std::vector, the std::deque object (its metadata) lives on the stack.
    • The underlying elements are allocated on the heap — but not contiguously.

    💡 Consequences:

    • You can append or prepend large amounts of data without stack overflow.
    • Compared to std::vector, memory access is less cache-efficient due to fragmentation.

    📦 Contiguous Memory? No.

    Unlike std::vector, which stores all elements in one continuous memory block, std::deque allocates multiple fixed-size blocks (the block size is implementation-defined, typically a few hundred bytes to a few kilobytes) and maintains a map (array of pointers) to those blocks.

    Why this matters:

    • Appending at the front or back is fast: when the current block fills up, the deque simply allocates a new block instead of shifting existing elements.
    • But access patterns may be less cache-friendly, especially in tight loops.

    ⏱ Time Complexity

    Operation | Time Complexity | Notes
    Random access d[i] | O(1) (but slower than vector) | Not contiguous, so slightly less efficient
    push_back, push_front | O(1) amortized | Efficient at both ends
    Insert in middle | O(n) | Requires shifting/moving like vector
    Resize | O(n) | Copying + new allocations

    ⚠️ Internally more complex: uses a segmented memory model (think: mini-arrays linked together via pointer table).

    ⚙️ What’s Under the Hood?

    A simplified mental model:

    T** block_map;   // array of pointers to fixed-size blocks
    T*  blocks[N];   // the blocks themselves, allocated on the heap
    
    • Each block holds a fixed number of elements.
    • deque manages which block and offset your index points to.

    ✅ Strengths

    • Fast insertion/removal at both ends (O(1)).
    • No expensive shifting like vector.
    • References to elements remain valid after growth at the ends (unlike vector), although iterators may be invalidated.

    ⚠️ Trade-offs

    • Not contiguous → worse cache locality than vector.
    • Random access is constant time, but slightly slower than vector[i].
    • Slightly higher memory overhead due to internal structure (block map + block padding).

    📈 Quant Finance Use Cases

    • Rolling windows on tick or quote streams (e.g. sliding 1-minute buffer).
    • Time-based buffers for short-term volatility, VWAP, or moving averages.
    • Event-driven systems where data is both consumed and appended continuously.

    Example:

    std::deque<double> price_buffer;
    // push_front for new tick, pop_back to discard old

    💬 Typical Interview Questions

    1. “How would you implement a rolling average over a tick stream?”
      • A good answer involves std::deque for its O(1) append/remove at both ends.
    2. “Why not use std::vector if you’re appending at the front?”
      • Tests understanding of std::vector‘s O(n) front-insertion cost vs deque‘s O(1).
    3. “Explain the memory layout of deque vs vector, and how it affects performance.”
      • Looks for awareness of cache implications.

    💻 Code Example: Rolling Average with std::deque

    #include <iostream>
    #include <deque>
    #include <numeric>
    
    int main() {
        std::deque<double> price_window;
        size_t window_size = 5;
    
        for (int i = 0; i < 10; ++i) {
            double new_price = 100.0 + i;
            price_window.push_back(new_price);
    
            // Keep deque size fixed
            if (price_window.size() > window_size) {
                price_window.pop_front();  // Efficient O(1)
            }
    
            if (price_window.size() == window_size) {
                double avg = std::accumulate(price_window.begin(), price_window.end(), 0.0) / window_size;
                std::cout << "Rolling average: " << avg << std::endl;
            }
        }
    
        return 0;
    }
    
    
    
    
    

    🧠 Summary: deque vs vector

    Feature | std::vector | std::deque
    Memory Layout | Contiguous heap | Segmented blocks on heap
    Random Access | O(1), fastest | O(1), slightly slower
    Front Insert/Remove | O(n) | O(1)
    Back Insert/Remove | O(1) amortized | O(1)
    Cache Efficiency | High | Medium
    Quant Use Case | Time series, simulations | Rolling buffers, queues

    3. std::array: Fixed-Size Array

    std::array is a stack-allocated, fixed-size container introduced in C++11. It wraps a C-style array with a safer, STL-compatible interface while retaining zero overhead. In quant finance, it’s ideal when you know the size at compile time and want maximum performance and predictability.

    🧠 Stack vs Heap: Where Does Data Live?

    • std::array stores all of its data on the stack — no dynamic memory allocation.
    • The container itself is the data, unlike std::vector or std::deque, which hold pointers to heap-allocated buffers.

    💡 Consequences:

    • Blazing fast allocation/deallocation — no malloc, no heap overhead.
    • Extremely cache-friendly — everything is packed in a single memory block.
    • But: limited size (you can’t store millions of elements safely).

    📦 Contiguous Memory? Yes.

    std::array maintains a true contiguous layout, just like a C array. This enables:

    • Fast random access via arr[i]
    • Compatibility with C APIs
    • SIMD/vectorization-friendly memory layout

    ⏱ Time Complexity

    Operation | Time Complexity | Notes
    Access a[i] | O(1) | Can be evaluated at compile time in constexpr contexts
    Size | O(1) | Always constant
    Insertion | N/A | Fixed size; no resizing

    ⚙️ What’s Under the Hood?

    Conceptually:

    template<typename T, std::size_t N>
    struct array {
        T elems[N];  // Inline data — lives on the stack
    };
    
    • No dynamic allocation.
    • No capacity concept — just a fixed-size C array with better syntax and STL methods.

    ✅ Strengths

    • Zero runtime overhead — allocation is just moving the stack pointer.
    • Extremely fast access and iteration.
    • Safer than raw arrays — bounds-safe with .at() and STL integration (.begin(), .end(), etc.).

    ⚠️ Trade-offs

    • Fixed size — can’t grow or shrink at runtime.
    • Risk of stack overflow if size is too large.
    • Pass-by-value is expensive (copies all elements) unless using references.
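    A minimal sketch of the last point: a hypothetical helper that takes the array by const reference, so none of the elements are copied on the call (the function name and data are made up for illustration):

    #include <array>
    #include <iostream>
    #include <numeric>
    
    // Hypothetical helper: taking the array by const reference avoids copying its elements.
    double meanGreek(const std::array<double, 5>& values) {
        return std::accumulate(values.begin(), values.end(), 0.0) / values.size();
    }
    
    int main() {
        std::array<double, 5> greeks = {0.45, 0.01, -0.02, 0.005, 0.0001};
        std::cout << "Mean greek: " << meanGreek(greeks) << std::endl;
        return 0;
    }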

    📈 Quant Finance Use Cases

    • Grids for finite difference/PDE solvers: std::array<double, N>
    • Precomputed factors (e.g., Greeks at fixed deltas)
    • Fixed-size historical feature vectors for ML signals
    • Small matrices or stat buffers in embedded systems or hot loops

    Example:

    // Greeks for delta hedging
    std::array<double, 5> greeks = { 0.45, 0.01, -0.02, 0.005, 0.0001 };

    💬 Typical Interview Questions

    1. “What’s the difference between std::array, std::vector, and C-style arrays?”
      • Tests understanding of memory layout, allocation, and safety.
    2. “When would you prefer std::array over std::vector?”
      • Looking for “when size is known at compile time and performance matters.”
    3. “What’s the risk of using large std::arrays?”
      • Looking for awareness of stack overflow and memory constraints.

    💻 Code Example: Option Grid for PDE Solver

    #include <array>
    #include <iostream>
    
    int main() {
        constexpr std::size_t N = 5; // e.g. 5 grid points in a discretized asset space
        std::array<double, N> option_values = {100.0, 101.5, 102.8, 104.2, 105.0};
    
        // Simple finite-difference operation: delta = (V[i+1] - V[i]) / dx
        double dx = 1.0;
        for (std::size_t i = 0; i < N - 1; ++i) {
            double delta = (option_values[i + 1] - option_values[i]) / dx;
            std::cout << "delta[" << i << "] = " << delta << std::endl;
        }
    
        return 0;
    }
    
    
    
    
    

    🧠 Summary: std::array vs vector/deque

    Feature | std::array | std::vector | std::deque
    Memory | Stack | Heap (contiguous) | Heap (segmented)
    Size | Fixed at compile time | Dynamic at runtime | Dynamic at runtime
    Allocation speed | Instant | Slow (heap) | Slower (complex alloc)
    Use case | PDE grids, factors | Time series, simulations | Rolling buffers
    Performance | Max | High | Medium

    4. std::list / std::forward_list: Linked Lists

    std::list and std::forward_list are the C++ STL’s doubly- and singly-linked list implementations. While rarely used in performance-critical quant finance applications, they are still part of the container toolkit and may shine in niche situations involving frequent insertions/removals in the middle of large datasets — but not for numeric data or tight loops.

    🧠 Stack vs Heap: Where Does Data Live?

    • Both containers are heap-based: each element is dynamically allocated as a separate node.
    • The container object itself (head pointer, size metadata) is on the stack.
    • But the actual elements — and their pointers — are spread across the heap.

    💡 Consequences:

    • Extremely poor cache locality.
    • Each insertion allocates heap memory → slow.
    • Ideal for predictable insertion/removal patterns, not for numerical processing.

    📦 Contiguous Memory? Definitely Not.

    Each element is a node containing:

    • The value
    • One (std::forward_list) or two (std::list) pointers

    So memory looks like this:

    [Node1] → [Node2] → [Node3] ...   (or doubly linked)

    💣 You lose any benefit of prefetching or SIMD. Iteration causes pointer chasing, leading to cache misses and latency spikes.

    ⏱ Time Complexity

    Operation | Time Complexity | Notes
    Insert at front | O(1) | Very fast, no shifting
    Insert/remove in middle | O(1) with iterator | No shifting needed
    Random access l[i] | O(n) | No indexing support
    Iteration | O(n), but slow | Poor cache usage

    ⚙️ What’s Under the Hood?

    Each node contains:

    • value
    • next pointer (and prev for std::list)

    So conceptually:

    struct Node {
        T value;
        Node* next;   // and prev, if doubly linked
    };

    • std::list uses two pointers per node.
    • std::forward_list is lighter — just one pointer — but can’t iterate backward.

    ✅ Strengths

    • Efficient insertions/removals anywhere — no element shifting like vector.
    • Iterator-stable: elements don’t move in memory.
    • Predictable performance for queue-like workloads with frequent mid-stream changes.
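    A minimal sketch of the first point: once you hold an iterator, inserting in the middle is O(1) with no shifting, even though walking to that position is O(n):

    #include <iostream>
    #include <iterator>
    #include <list>
    
    int main() {
        std::list<double> prices = {100.0, 101.0, 103.0};
    
        auto it = prices.begin();
        std::advance(it, 2);        // O(n): walk to the third position
        prices.insert(it, 102.0);   // O(1): splice in a new node, nothing is shifted
    
        for (double p : prices) std::cout << p << " ";
        std::cout << std::endl;
        return 0;
    }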

    ⚠️ Trade-offs

    • Terrible cache locality — each node is scattered.
    • No random access — everything is via iteration.
    • Heavy memory overhead: ~2×–3× the size of your data.
    • Heap allocation per node = slow.

    📈 Quant Finance Use Cases (Rare/Niche)

    • Managing a priority-sorted list of limit orders (insert/delete in mid-stream).
    • Event systems where exact insertion position matters.
    • Custom allocators or object pools in performance-tuned engines.

    Usually, in real systems, this is replaced by:

    • std::deque for queues
    • std::vector for indexed data
    • Custom intrusive lists if performance really matters

    💬 Typical Interview Questions

    1. “What’s the trade-off between std::vector and std::list?”
      • Looking for cache locality vs insert/delete performance.
    2. “How would you store a stream of operations that require arbitrary insert/delete?”
      • Tests knowledge of when to use linked lists despite their cost.
    3. “Why is std::list usually a bad idea for numeric workloads?”
      • Looking for insight into heap fragmentation and CPU cache behavior.

    💻 Code Example: Linked List of Events

    #include <iostream>
    #include <list>
    
    struct TradeEvent {
        double price;
        double quantity;
    };
    
    int main() {
        std::list<TradeEvent> event_log;
    
        // Simulate inserting trades
        event_log.push_back({100.0, 500});
        event_log.push_back({101.2, 200});
        event_log.push_front({99.5, 300});  // Insert at front
    
        // Iterate through events
        for (const auto& e : event_log) {
            std::cout << "Price: " << e.price << ", Qty: " << e.quantity << std::endl;
        }
    
        return 0;
    }
    
    
    
    
    
    

    🧠 Summary: std::list / std::forward_list

    Feature | std::list | std::forward_list
    Memory | Heap, scattered | Heap, scattered
    Access | No indexing, O(n) | Forward only
    Inserts/removals | O(1) with iterator | O(1), front-only efficient
    Memory overhead | High (2 pointers/node) | Lower (1 pointer/node)
    Cache efficiency | Poor | Poor
    Quant use case | Rare (custom order books) | Lightweight event chains

    Unless you need fast structural changes mid-stream, these containers are rarely used in quant systems. But it’s important to know when and why they exist.

    June 25, 2025
    Bonds Pricer
    Bonds

    A Bonds Pricer Implementation in C++ with Quantlib

    by Clement Daubrenet June 24, 2025

    Bond pricing is a fundamental skill for any quant. Whether you’re managing fixed-income portfolios or calculating risk metrics, understanding how a bond is priced is essential. In this article, we’ll walk through the core formula, explore how clean and dirty prices differ, and implement it in C++ using QuantLib. How to build a bonds pricer?

    1. The Bond Pricing Formula

    When you buy a bond, you’re essentially buying a stream of future cash flows: fixed coupon payments and the return of the face value at maturity. The price you pay for that stream depends on how much those future payments are worth today — in other words, their present value.


    🔹 The Dirty Price: What You’re Really Paying

    The dirty price (or full price) is the total value of the discounted future cash flows:

    [math] \Large{\text{Dirty Price} = \sum_{i=1}^{N} \frac{C_i}{(1 + y)^{t_i}}} [/math]

    Where:

    • [math]C_i[/math] is the cash flow at time [math]t_i[/math] — typically the coupon, and the last cash flow includes the face value.
    • [math]y[/math] is the periodic yield, i.e., the annual yield divided by the number of coupon payments per year.
    • [math]t_i[/math] is the year fraction between today and the [math]i^\text{th}[/math] payment, calculated using day count conventions (like Actual/360, 30/360, etc.).

    In simpler terms: money in the future is worth less than money today, and the discounting reflects that.
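    Before reaching for QuantLib, the dirty price formula can be reproduced in a few lines. A minimal sketch, assuming the year fractions t_i and the periodic yield y are already known (the cash flows below are a hypothetical 3-year annual 5% coupon bond with face value 100, discounted at 4.5%):

    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>
    
    // Discount each cash flow C_i at year fraction t_i with periodic yield y.
    double dirtyPrice(const std::vector<double>& cashFlows,
                      const std::vector<double>& times,
                      double y) {
        double price = 0.0;
        for (std::size_t i = 0; i < cashFlows.size(); ++i) {
            price += cashFlows[i] / std::pow(1.0 + y, times[i]);
        }
        return price;
    }
    
    int main() {
        std::vector<double> cashFlows = {5.0, 5.0, 105.0};  // coupons + final redemption
        std::vector<double> times     = {1.0, 2.0, 3.0};    // year fractions
        std::cout << "Dirty price: " << dirtyPrice(cashFlows, times, 0.045) << std::endl;
        return 0;
    }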


    🔹 Accrued Interest: What You Owe the Seller

    Most bonds trade between coupon dates, meaning the seller has already “earned” part of the next coupon. To make things fair, the buyer compensates them via accrued interest:

    [math] \Large \text{Accrued Interest} = \text{Coupon} \times \frac{\text{Days since last payment}}{\text{Days in period}} [/math]

    So if you buy the bond halfway through the coupon cycle, you’ll owe the seller half the coupon. This ensures the next payment (which you’ll receive in full) is properly split based on ownership time.
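    For instance, a bond paying a £25 semiannual coupon, bought 90 days into a 180-day coupon period, carries 25 × 90 / 180 = £12.50 of accrued interest.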


    🔹 Clean Price: The Market Quote

    Bond prices are typically quoted clean, without accrued interest:

    [math] \Large \text{Clean Price} = \text{Dirty Price} - \text{Accrued Interest} [/math]

    This keeps things tidy when quoting and trading, and lets the system calculate accrued interest automatically behind the scenes.

    Together, these three equations form the backbone of bond pricing. In the next section, we’ll show how QuantLib brings them to life — no need to write your own discounting engine.

    2. Setting Up the Bond in QuantLib

    Now that we’ve covered the theory behind bond pricing, let’s see how to implement it using QuantLib. One of QuantLib’s biggest strengths is how it mirrors the real-world structure of financial instruments — every component reflects a real aspect of how bonds are modeled, priced, and managed in production systems.

    Here’s what we need to set up a bond in QuantLib:

    📅 Calendar, Date, and Schedule

    • Calendar: Tells QuantLib which days are business days. This is crucial for calculating coupon dates and settlement dates correctly. We’ll use TARGET(), a commonly used calendar for euro-denominated instruments.
    • Date: QuantLib’s custom date class used to define evaluation, issue, maturity, and coupon dates.
    • Schedule: Automatically generates all coupon dates between the start and maturity dates. You specify the frequency (e.g., semiannual), the calendar, and how to adjust dates if they fall on weekends or holidays.

    In other words: this trio defines when things happen.
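    As a minimal sketch of the trio in isolation (using the same conventions as the full pricer below), the generated coupon dates can be printed directly from the Schedule:

    #include <ql/quantlib.hpp>
    #include <iostream>
    
    using namespace QuantLib;
    
    int main() {
        Calendar calendar = TARGET();
        Date start(24, June, 2025), maturity(24, June, 2030);
    
        // Semiannual coupon dates between start and maturity, generated backward from maturity
        Schedule schedule(start, maturity, Period(Semiannual), calendar,
                          Unadjusted, Unadjusted, DateGeneration::Backward, false);
    
        for (const Date& d : schedule.dates())
            std::cout << d << std::endl;
    
        return 0;
    }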

    💵 FixedRateBond

    • This is the actual bond object.
    • It takes in:
      • Settlement days (e.g., T+2),
      • Face value (e.g., 1000),
      • The coupon schedule,
      • The fixed coupon rate(s),
      • Day count convention (e.g., Actual/360),
      • Date adjustment rules.
    • Once constructed, it contains all the future cash flows, knows when they occur, and how much they are.

    Think of FixedRateBond as your contractual definition of the bond.

    📈 YieldTermStructure

    • This represents the discount curve.
    • You can define it using:
      • A flat yield (e.g., 4.5% across all maturities),
      • A bootstrap from market instruments (swaps, deposits, etc.),
      • Or even a custom curve from CSV or historical data.
    • QuantLib uses this curve to discount each cash flow to present value.

    This is your interest rate environment — essential for pricing.

    🧠 DiscountingBondEngine

    • This is the pricing engine: the part of QuantLib that ties it all together.
    • Once you set the bond’s pricing engine to a DiscountingBondEngine, it knows how to price it using the curve.
    • It computes:
      • The dirty price,
      • The clean price,
      • The accrued interest,
      • And other risk measures like duration and convexity.

    You can think of it as the calculator that applies all the math we just discussed.

    3. Implementation

    Let’s implement:

    #include <ql/quantlib.hpp>
    #include <iostream>
    #include <iomanip>
    
    using namespace QuantLib;
    
    int main() {
        // Set the evaluation date
        Date today(24, June, 2025);
        Settings::instance().evaluationDate() = today;
    
        // Bond parameters
        Real faceValue = 1000.0;
        Rate couponRate = 0.05; // 5%
        Date maturity(24, June, 2030);
        Frequency frequency = Semiannual;
        Integer settlementDays = 2;
        DayCounter dayCounter = Actual360();
        BusinessDayConvention convention = Unadjusted;
        Calendar calendar = TARGET();
    
        // Build the schedule of coupon payments
        Schedule schedule(today, maturity, Period(frequency), calendar,
                          convention, convention, DateGeneration::Backward, false);
    
        // Create the fixed-rate bond
        FixedRateBond bond(settlementDays, faceValue, schedule,
                           std::vector<Rate>{couponRate}, dayCounter, convention);
    
        // Build a flat yield curve (4.5%)
        Handle<YieldTermStructure> yieldCurve(
            boost::make_shared<FlatForward>(today, 0.045, dayCounter));
    
        // Set the pricing engine using the discounting curve
        bond.setPricingEngine(boost::make_shared<DiscountingBondEngine>(yieldCurve));
    
        // Output the results
        std::cout << std::fixed << std::setprecision(2);
        std::cout << "Clean Price   : " << bond.cleanPrice() << std::endl;
        std::cout << "Dirty Price   : " << bond.dirtyPrice() << std::endl;
        std::cout << "Accrued Interest: " << bond.accruedAmount() << std::endl;
    
        return 0;
    }

    After compilation and run, we get:

    ➜  build ./pricer  
    Clean Price   : 102.01
    Dirty Price   : 102.04
    Accrued Interest: 0.03

    🔹 Clean Price = 102.01

    This is the market-quoted price of the bond, excluding any interest that has accrued since the last coupon date.
    It means that the bond is trading at 102.01% of its face value — i.e., £1,020.10 for a bond with a £1,000 face value.


    🔹 Dirty Price = 102.04

    This is the actual amount you’d pay if you bought the bond today.
    It includes both:

    • The clean price (102.01), and
    • The interest the seller has “earned” since the last coupon.

    So:

    [math]
    \text{Dirty Price} = \text{Clean Price} + \text{Accrued Interest} = 102.01 + 0.03 = 102.04
    [/math]


    🔹 Accrued Interest = 0.03

    This is the amount of coupon interest that has accrued since the last coupon date but hasn’t been paid yet.
    The buyer pays this to the seller because the seller has held the bond for part of the current coupon period.

    In our case, it’s 0.03% of face value, or £0.30 on a £1,000 bond, meaning the settlement date falls very close to the previous coupon date.

    June 24, 2025
    Value at Risk
    VaR

    Value at Risk (VaR): Definition, Equation, and C++ Implementation

    by Clement Daubrenet June 24, 2025

    Value at Risk (VaR) is a statistical measure used to quantify the level of financial risk within a portfolio over a specific time frame. It answers the question: “What is the maximum expected loss with a given confidence level over a given time horizon?”

    Widely used in trading, risk management, and regulatory frameworks (e.g., Basel III), VaR helps institutions understand the tail risk of their holdings.

    1. Formal Definition and Example

    Value at Risk (VaR) at confidence level [math] \alpha [/math] (e.g., 95% or 99%) is defined as the threshold loss value L such that:

    [math]\large \mathbb{P}(\text{Loss} > L) = 1 - \alpha[/math]

    Or, written differently:

    [math]\large \text{VaR}_\alpha = \inf \left\{ l \in \mathbb{R} : \mathbb{P}(\text{Loss} \leq l) \geq \alpha \right\}[/math]

    Some examples to make it a bit more concrete:


    🔹 Example 1 — Conservative Institution

    For a 1-day 99% VaR of $5 million, you expect that on 99% of trading days, your portfolio will not lose more than $5 million.
    However, in the remaining 1% of days (about 2-3 days per year), the losses could exceed $5 million.


    🔹 Example 2 — Moderate Risk Portfolio

    For a 10-day 95% VaR of €2 million, there is a 95% chance that over any 10-day period, losses will not exceed €2 million.
    Conversely, there’s a 5% chance (about one 10-day period in 20) that you could lose more than €2 million.


    🔹 Example 3 — Intraday Trading Desk

    For a 1-hour 90% VaR of £100,000, you are 90% confident that in any given hour, the desk will not lose more than £100,000.
    But in 1 out of every 10 hours, losses could exceed £100,000.

    2. How Do Traders Like to Visualize VaR?

    Traders and risk managers often use VaR summary tables to visualize potential losses across different confidence levels, time horizons, and portfolio segments. Here’s how they are typically structured:


    ✅ Standard VaR Table

    Confidence Level | Time Horizon | VaR (USD)
    95% | 1 Day | $1,200,000
    99% | 1 Day | $2,300,000
    95% | 10 Days | $3,800,000
    99% | 10 Days | $7,200,000
    • This format lets traders quickly gauge the magnitude of risk at different percentiles.
    • The 10-day VaR is often used for regulatory reporting (e.g., Basel rules).
    • The square-root-of-time rule is sometimes used to scale 1-day VaR to longer horizons.
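    For example, under the square-root-of-time rule:

    [math] \large \text{VaR}_{10\text{d}} \approx \text{VaR}_{1\text{d}} \times \sqrt{10} [/math]

    which scales the $1,200,000 1-day 95% VaR in the table to roughly $3.8 million over 10 days, and the $2,300,000 1-day 99% VaR to roughly $7.2 million.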

    ✅ Portfolio Breakdown Table

    Desk / Asset Class | 1-Day 95% VaR | 1-Day 99% VaR
    Equities | $500,000 | $850,000
    Fixed Income | $300,000 | $620,000
    FX | $250,000 | $470,000
    Commodities | $150,000 | $280,000
    Total | $980,000 | $1,750,000
    • Often used in daily risk reports.
    • May include a correlation adjustment between asset classes (not a simple sum).
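    In the table above, the stand-alone desk VaRs at 95% sum to $1,200,000, while the diversified total is only $980,000; the $220,000 gap is the diversification benefit captured by the correlation adjustment.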

    ✅ VaR vs. PnL Table (Backtesting)

    Date | Actual PnL | 1-Day 99% VaR | Breach?
    2025-06-18 | -$3.5M | $2.8M | ✅
    2025-06-19 | -$1.2M | $2.8M | ❌
    2025-06-20 | -$2.9M | $2.8M | ✅
    • Used for Value at Risk backtesting, checking how often real losses exceed VaR.
    • Frequent breaches might indicate model underestimation of tail risk.

    3. The Different Ways to Calculate VaR

    To calculate Value at Risk (VaR), different methods are used depending on the assumptions you’re willing to make and the data available. Some approaches rely on statistical models, while others lean purely on historical data:

    • The Parametric (Variance-Covariance) VaR
    Assumes returns are normally distributed and calculates VaR using the portfolio’s mean, standard deviation, and the Z-score of the desired confidence level.

    • The Historical Simulation VaR
    Uses actual historical returns to simulate potential losses, without assuming any specific return distribution.

    • The Monte Carlo Simulation VaR
    Generates thousands of possible return scenarios using stochastic models (e.g., Geometric Brownian Motion) to estimate potential losses under a wide range of outcomes.
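    As an illustration of the second approach, here is a minimal historical-simulation sketch: sort the historical returns and read off the loss at the (1 - α) quantile. With only a handful of observations this is obviously a toy; real desks use hundreds of historical scenarios.

    #include <algorithm>
    #include <cmath>
    #include <iostream>
    #include <vector>
    
    // Historical-simulation VaR: the loss at the (1 - alpha) quantile of past returns.
    double historicalVaR(std::vector<double> returns, double confidenceLevel, double portfolioValue) {
        std::sort(returns.begin(), returns.end());  // worst returns first
        std::size_t index = static_cast<std::size_t>(
            std::floor((1.0 - confidenceLevel) * returns.size()));
        return std::abs(portfolioValue * returns[index]);
    }
    
    int main() {
        std::vector<double> returns = {-0.01, 0.003, 0.0045, -0.002, 0.005};
        std::cout << "1-day 95% Historical VaR: $"
                  << historicalVaR(returns, 0.95, 1'000'000) << std::endl;
        return 0;
    }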

    4. A Zoom on Parametric VaR

    The Parametric VaR method assumes that portfolio returns are normally distributed, making it one of the fastest and most widely used VaR models in practice.

    The general formula is:

    [math]\large \text{VaR}_\alpha = \mu + z_\alpha \cdot \sigma[/math]

    Where:

    • [math]\mu[/math] = Expected return (often set to 0 for short-term horizons)
    • [math]\sigma[/math] = Standard deviation (volatility) of portfolio returns
    • [math]z_\alpha[/math] = Z-score for the chosen confidence level (e.g., -1.6449 for 95%, -2.3263 for 99%)

    This formula gives you the threshold loss you should not exceed with probability [math]\alpha[/math].
    For example, at 99% confidence, Parametric VaR tells you: “There is only a 1% chance that the portfolio will lose more than VaR in a day.”

    Because it’s simple and fast to compute, Parametric VaR is used in real-time risk monitoring, but it can underestimate tail risk when returns deviate from normality (e.g., skew or fat tails).

    5. Implement Parametric VaR in C++

    Here is a basic implementation, similar to what we did elsewhere for the Greeks, without using any external libraries:

    #include <iostream>
    #include <vector>
    #include <cmath>
    #include <numeric>
    #include <stdexcept>
    
    // Compute the mean of returns
    double computeMean(const std::vector<double>& returns) {
        double sum = std::accumulate(returns.begin(), returns.end(), 0.0);
        return sum / returns.size();
    }
    
    // Compute standard deviation of returns
    double computeStdDev(const std::vector<double>& returns, double mean) {
        double accum = 0.0;
        for (double r : returns) {
            accum += (r - mean) * (r - mean);
        }
        return std::sqrt(accum / (returns.size() - 1));
    }
    
    // Get Z-score for given confidence level
    double getZScore(double confidenceLevel) {
        if (confidenceLevel == 0.95) return -1.64485;
        if (confidenceLevel == 0.99) return -2.32635;
        throw std::invalid_argument("Unsupported confidence level");
    }
    
    // Compute Parametric VaR
    double computeParametricVaR(const std::vector<double>& returns,
                                double confidenceLevel,
                                double portfolioValue) {
        double mean = computeMean(returns);  // often assumed 0 for short-term VaR
        double stdDev = computeStdDev(returns, mean);
        double z = getZScore(confidenceLevel);
    
        double var = portfolioValue * (mean + z * stdDev);
        return std::abs(var); // Loss is a positive number
    }
    
    int main() {
        std::vector<double> returns = {-0.01, 0.003, 0.0045, -0.002, 0.005}; // Daily returns
        double confidenceLevel = 0.99;
        double portfolioValue = 1'000'000; // $1M
    
        double var = computeParametricVaR(returns, confidenceLevel, portfolioValue);
        std::cout << "1-day 99% Parametric VaR: $" << var << std::endl;
    
        return 0;
    }

    Let’s compile it:

    ➜  build cmake ..
    -- Configuring done (0.1s)
    -- Generating done (0.0s)
    -- Build files have been written to: /Users/clementdaubrenet/var/build
    ➜  build make       
    [ 50%] Building CXX object CMakeFiles/var.dir/var.cpp.o
    [100%] Linking CXX executable var
    [100%] Built target var

    And run it:

    ➜  build ./var           
    1-day 99% Parametric VaR: $14530.1

    The 1-day 99% VaR for this portfolio is $14530.1.
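    As a sanity check: the sample mean of these returns is about 0.0001 and the sample standard deviation about 0.0063, so the 99% threshold is roughly 0.0001 - 2.32635 × 0.0063 ≈ -1.45% of the portfolio, i.e. about $14,500 on $1M, consistent with the program’s output.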

    June 24, 2025