Using Automatic Differentiation for Greeks Computation in C++

by Clement Daubrenet

Traders and risk managers rely on accurate values for Delta, Gamma, Vega, and other Greeks to hedge portfolios and manage exposure. Below, we look at how automatic differentiation can be used to compute them.

Traditionally, these sensitivities are calculated using finite difference methods, which approximate derivatives numerically. While easy to implement, this approach suffers from trade-offs between precision, performance, and stability. Enter Automatic Differentiation (AD): a modern technique that computes derivatives with machine precision and efficient performance.

In this article, we explore how to compute Greeks using automatic differentiation in C++ with the open-source AD library CppAD.

1. Background: Greeks and Finite Differences

The Delta of a derivative is the rate of change of its price with respect to the underlying asset price:

[math] \Large {\Delta = \displaystyle \frac{\partial V}{\partial S}} [/math]

With finite differences, we might compute:

[math] \Large { \Delta \approx \frac{V(S + \epsilon) - V(S)}{\epsilon} } [/math]

This introduces:

  • Truncation error if ε is too large,
  • Rounding error if ε is too small,
  • And N+1 evaluations for N Greeks (2N with central differences).

2. Automatic Differentiation: Another Way

Automatic Differentiation (AD) is a technique for computing exact derivatives of functions expressed as computer programs. It is not the same as:

  • Numerical differentiation (e.g., finite differences), which approximates derivatives and is prone to rounding/truncation errors.
  • Symbolic differentiation (like in computer algebra systems), which manipulates expressions analytically but can become unwieldy and inefficient.

AD instead works by decomposing functions into elementary operations and systematically applying the chain rule to compute derivatives alongside function evaluation.

🔄 How It Works

  1. Operator Overloading (in C++)
    AD libraries in C++ (like Adept or CppAD) define a custom numeric type—say, adouble—which wraps a real number and tracks how it was computed. Arithmetic operations (+, *, sin, exp, etc.) are overloaded to record both the value and the derivative.
  2. Taping
    During function execution, the AD library records (“tapes”) every elementary operation (e.g., x * y, log(x), etc.) into a computational graph. Each node stores the local derivative of the output with respect to its inputs.
  3. Chain Rule Application
    Once the function is evaluated, AD applies the chain rule through the computational graph to compute the final derivatives.
    • Forward mode AD: propagates derivatives from inputs to outputs; one pass yields the derivatives of all outputs with respect to a single input (efficient when inputs are few).
    • Reverse mode AD: propagates derivatives from one output back to all inputs in a single pass (more efficient for scalar-valued functions with many inputs, as in ML or risk models).

🎯 Why It’s Powerful

  • Exact to machine precision (unlike finite differences)
  • Fast and efficient: typically only 5–10x the cost of the original function
  • Compositional: works on any function composed of differentiable primitives, no matter how complex

📦 In C++ Quant Contexts

AD shines in quant applications where:

  • Analytical derivatives are difficult to compute or unavailable (e.g., path-dependent or exotic derivatives),
  • Speed and accuracy matter (e.g., calibration loops or sensitivity surfaces),
  • You want to avoid hard-coding Greeks and re-deriving math manually.

3. C++ Implementation Using CppAD

Install CppAD and link it with your C++ project:

git clone https://github.com/coin-or/CppAD.git
cd CppAD
mkdir build
cd build
cmake ..
make
make install

    Then reference the library in your CMakeLists.txt:

    cmake_minimum_required(VERSION 3.10)
    project(CppADGreeks)
    
    set(CMAKE_CXX_STANDARD 17)
    
    # Find and link CppAD
    find_path(CPPAD_INCLUDE_DIR cppad/cppad.hpp PATHS /usr/local/include)
    find_library(CPPAD_LIBRARY NAMES cppad_lib PATHS /usr/local/lib)
    
    if (NOT CPPAD_INCLUDE_DIR OR NOT CPPAD_LIBRARY)
        message(FATAL_ERROR "CppAD not found")
    endif()
    
    include_directories(${CPPAD_INCLUDE_DIR})
    link_libraries(${CPPAD_LIBRARY})
    
    # Example executable
    add_executable(automaticdiff ../automaticdiff.cpp)
    target_link_libraries(automaticdiff ${CPPAD_LIBRARY})

    And the following code is in my automaticdiff.cpp:

    #include <cppad/cppad.hpp>
    #include <iostream>
    #include <vector>
    #include <cmath>
    
    template <typename T>
    T norm_cdf(const T& x) {
        return 0.5 * CppAD::erfc(-x / std::sqrt(2.0));
    }
    
    int main() {
        using CppAD::AD;
    
        std::vector<AD<double>> X(1);
        X[0] = 105.0;  // Spot price
    
        CppAD::Independent(X);
    
        double K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
        AD<double> S = X[0];
    
        AD<double> d1 = (CppAD::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
        AD<double> d2 = d1 - sigma * std::sqrt(T);
    
        AD<double> price = S * norm_cdf(d1) - K * CppAD::exp(-r * T) * norm_cdf(d2);
    
        std::vector<AD<double>> Y(1);
        Y[0] = price;
    
        CppAD::ADFun<double> f(X, Y);
    
        std::vector<double> x = {105.0};
        std::vector<double> delta = f.Jacobian(x);
    
        std::cout << "Call Price: " << CppAD::Value(price) << std::endl;
        std::cout << "Delta (∂Price/∂S): " << delta[0] << std::endl;
    
        return 0;
    }
    

    After compiling and running this code, we get the price and its Delta:

    ➜  build ./automaticdiff
    Call Price: 13.8579
    Delta (∂Price/∂S): 0.723727

    4. Explanation of the Code

    Here’s the key part of the code again:

    std::vector<AD<double>> X(1);
    X[0] = 105.0;  // Spot price S
    CppAD::Independent(X);       // Start taping operations
    
    // Constants
    double K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
    AD<double> S = X[0];
    
    // Black-Scholes price formula
    AD<double> d1 = (CppAD::log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * std::sqrt(T));
    AD<double> d2 = d1 - sigma * std::sqrt(T);
    AD<double> price = S * norm_cdf(d1) - K * CppAD::exp(-r * T) * norm_cdf(d2);
    
    std::vector<AD<double>> Y(1); Y[0] = price;
    CppAD::ADFun<double> f(X, Y);  // Create a differentiable function f: S ↦ price
    
    std::vector<double> x = {105.0};
    std::vector<double> delta = f.Jacobian(x);  // Get ∂price/∂S

    ✅ What This Code Does (Conceptually)

    Step 1 – You Define a Function:

    The code defines a function f(S) = Black-Scholes call price as a function of spot S. But unlike normal code, you define this using AD types (AD<double>), so CppAD can trace the computation.

    Step 2 – CppAD Records All Operations:

    When you run this line:

    CppAD::Independent(X);

    CppAD starts building a computational graph. Every operation you do on S (which is X[0]) is recorded:

    • log(S / K)
    • d1
    • d2
    • norm_cdf(d1)
    • final result price

    This is like taping a math program:
    CppAD now knows how the output (price) was built from the input (S).

    Step 3 – You “seal” the function:

    CppAD::ADFun<double> f(X, Y);

    This finalizes the tape into a function:

    [math] \Large{f: S \mapsto \text{price}} [/math]

    🤖 How AD Gets the Derivative

    Using:

    std::vector<double> delta = f.Jacobian(x);

    CppAD:

    • Evaluates the forward pass to get intermediate values (just like normal code),
    • Then walks backward through the graph, applying the chain rule at each node to compute:

    [math] \Large{ \Delta = \frac{\partial \text{price}}{\partial S} } [/math]

    This is the exact derivative, not an approximation.

    ⚙️ Summary: How the Code Avoids Finite Difference Pitfalls

    | Feature | Your CppAD Code | Finite Differences |
    |---|---|---|
    | Number of evaluations | 1 forward pass + internal backprop | 2+ full evaluations |
    | Step size tuning? | ❌ None needed | ✅ Must choose ε carefully |
    | Derivative accuracy | ✅ Machine-accurate | ❌ Approximate |
    | Performance on multiple Greeks | ✅ Fast with reverse mode | ❌ Expensive (1 per Greek) |
    | Maintenance cost | ✅ Code reflects math structure | ❌ Rewriting required for each sensitivity |
