Grok Conversation

In C++, is there a method to implement a map able to store objects of many types (numerics, strings, vectors)? I mean something like this:

cpp
universal_map map;
map.put("key", "value");
map.put("some_key", 42);
map.put("some_array", std::vector<universal_map>{});
Yes, in C++ it is possible to create such a "universal map" (something similar to JSON-like objects, Python dicts, or nlohmann::json), but there is no built-in solution in the standard library that is both convenient and type-safe at the same time.

Here are the most popular and practical approaches in 2025:
| Approach | Type safety | Convenience | Performance | Most similar to your example | Maintained / popular in 2025 |
|---|---|---|---|---|---|
| `std::map<std::string, std::variant<...>>` | good | medium | very good | ★★★☆☆ | standard library |
| `std::map<std::string, std::any>` | weak | ★★★★☆ | good | ★★★★☆ | standard library |
| `nlohmann::json` | weak–medium | ★★★★★ | medium–good | ★★★★★ | extremely popular |
| `boost::any` / `boost::variant` | weak / good | ★★★★☆ | good | ★★★★☆ | still used, but declining |
| custom recursive `struct Value` + `std::variant` | good | ★★★☆☆ | very good | ★★★★☆ | many libraries do this |
| `toml::value`, yaml-cpp, `simdjson::ondemand::value` | medium | ★★★★☆ | varies | ★★★★☆ | domain-specific |
Most common real-world choices in 2024–2026

1. The winner for most projects → nlohmann::json
cpp
#include <nlohmann/json.hpp>
using json = nlohmann::json;

json cfg;

cfg["name"]          = "Alice";
cfg["age"]           = 31;
cfg["scores"]        = {92, 88, 95};
cfg["active"]        = true;
cfg["settings"]["theme"] = "dark";           // nested object
cfg["tags"]          = json::array({"cpp", "performance", "modern"});

// very convenient access
std::string name = cfg["name"];
int age          = cfg.value("age", 0);       // default value
auto scores      = cfg["scores"].get<std::vector<int>>();
This is probably what you actually want in 95% of cases.

2. Modern variant-based solution (C++17/20/23) – quite clean
cpp
#include <cstdint>
#include <map>
#include <string>
#include <variant>
#include <vector>

struct Value;  // forward declaration: a using-alias cannot refer to itself

using ValueBase = std::variant<
    std::monostate,               // null
    bool,
    std::int64_t,
    double,
    std::string,
    std::vector<Value>,           // ← recursion (vector supports incomplete types since C++17)
    std::map<std::string, Value>  // ← recursion (works on all major implementations)
>;

struct Value : ValueBase
{
    using ValueBase::ValueBase;
    using ValueBase::operator=;
};

class UniversalMap
{
    std::map<std::string, Value> data;

public:
    template<class T>
    void put(std::string key, T&& value)
    {
        data[std::move(key)] = std::forward<T>(value);
    }

    // helper getters with default
    template<class T>
    T get(const std::string& key, T default_value = {}) const
    {
        if (auto it = data.find(key); it != data.end())
            if (auto* v = std::get_if<T>(&it->second))
                return *v;
        return default_value;
    }
};
Usage:
cpp
UniversalMap m;

m.put("pi",     3.141592);
m.put("name",   std::string("Violet"));
m.put("active", true);
m.put("scores", std::vector<Value>{10, 20, 30});

m.put("person", std::map<std::string, Value>{
    {"name", "Alex"},
    {"age",  27}
});

m.put("nested", std::vector<Value>{
    std::map<std::string, Value>{{"x",1}, {"y",2}},
    std::map<std::string, Value>{{"x",3}, {"y",4}}
});
Quick comparison – which one to choose?

| You want | Recommended choice in 2025–2026 |
|---|---|
| Very convenient syntax | nlohmann/json |
| Maximum type safety | recursive `std::variant` |
| Best compile-time perf | own variant solution |
| Must not have any dependencies | `std::variant` or `std::any` |
| Need to serialize/deserialize anyway | nlohmann/json (or toml11, yaml-cpp) |
| Writing a configuration / settings library | usually nlohmann/json |
Most practical answer for most people in 2025: just use
cpp
#include <nlohmann/json.hpp>
It gives you almost exactly the syntax you showed in the question.

Do you want to go dependency-free and are willing to write ~100–200 lines of boilerplate, or would you prefer the nlohmann::json route? 😄
Explain nlohmann::json serialization
nlohmann::json (most often called just "json.hpp" or "nlohmann/json") has excellent support for serialization (C++ → JSON string / file / binary formats) and deserialization (JSON → C++).

There are several levels of convenience — from trivial built-in types to fully custom classes/structs.

1. Built-in & standard-library types (zero code needed)
cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

#include <nlohmann/json.hpp>
using json = nlohmann::json;

int main()
{
    json j;

    // primitives + containers work out of the box
    j["pi"]      = 3.141592653589793;
    j["happy"]   = true;
    j["nothing"] = nullptr;
    j["answer"]  = 42;
    j["list"]    = {1, 0, 2};
    j["object"]  = { {"currency", "EUR"}, {"value", 349.95} };
    j["vec"]     = std::vector<int>{10, 20, 30};
    j["map"]     = std::map<std::string, double>{{"temp", 23.5}, {"pressure", 1013.25}};

    // serialization = dumping to string
    std::string s = j.dump();           // compact
    std::string pretty = j.dump(4);     // indented with 4 spaces

    std::cout << pretty << "\n";
}
All these types are supported automatically: bool, integers, float/double, std::string, std::nullptr_t, all STL containers (std::vector, std::array, std::map, std::unordered_map, …), std::optional, std::pair, std::tuple, std::chrono durations (with limitations), etc.

2. User-defined types — ADL-based (most common & clean way)

You provide two free functions in the same namespace as your type (for types you cannot modify, specialize nlohmann::adl_serializer instead):
cpp
#include <nlohmann/json.hpp>
using json = nlohmann::json;

namespace my_app
{
    struct Person
    {
        std::string name;
        int         age          = 0;
        bool        is_student   = false;
        std::vector<std::string> hobbies;
    };

    // ── Serialization ─────────────────────────────────────────────
    void to_json(json& j, const Person& p)
    {
        j = json{
            {"name",       p.name},
            {"age",        p.age},
            {"is_student", p.is_student},
            {"hobbies",    p.hobbies}
        };
    }

    // ── Deserialization ───────────────────────────────────────────
    void from_json(const json& j, Person& p)
    {
        // Option A: safe + explicit
        j.at("name").get_to(p.name);
        j.at("age").get_to(p.age);
        j.at("is_student").get_to(p.is_student);
        j.at("hobbies").get_to(p.hobbies);

        // Option B: shorter; returns the given default when a key is missing
        // (at() above throws json::out_of_range instead):
        // p.name       = j.value("name", "");
        // p.age        = j.value("age", 0);
        // p.is_student = j.value("is_student", false);
        // p.hobbies    = j.value("hobbies", std::vector<std::string>{});
    }
}
Usage — extremely natural:
cpp
my_app::Person alice{"Alice", 28, true, {"reading", "hiking"}};

json j = alice;                        // ← calls to_json automatically
std::string serialized = j.dump(2);

std::cout << serialized << "\n";
// {
//   "name": "Alice",
//   "age": 28,
//   "is_student": true,
//   "hobbies": ["reading", "hiking"]
// }

json j2 = json::parse(serialized);
my_app::Person copy = j2.get<my_app::Person>();   // ← calls from_json
3. Even less code — macros (very popular since ~v3.7+)
cpp
struct Config
{
    std::string host     = "localhost";
    uint16_t    port     = 8080;
    bool        ssl      = true;
    std::vector<int> timeouts{5, 15, 60};

    // one line!
    NLOHMANN_DEFINE_TYPE_INTRUSIVE(Config, host, port, ssl, timeouts)
};
Or if you prefer not to expose member names:
cpp
NLOHMANN_DEFINE_TYPE_NON_INTRUSIVE(Config, host, port, ssl, timeouts)
These macros generate essentially the same to_json / from_json functions shown above.

4. Other common patterns
| Goal | How |
|---|---|
| Skip field if default / empty | Write to_json manually and add the field conditionally |
| Rename field | Use a different key in to_json / from_json |
| Custom enum → string | NLOHMANN_JSON_SERIALIZE_ENUM macro |
| std::variant / std::optional | Usually needs a custom adl_serializer specialization |
| Third-party types (glm::vec3, etc.) | Specialize nlohmann::adl_serializer<T> |
| Binary formats (CBOR, BSON, MessagePack) | json::to_cbor(j), json::from_cbor(v), etc. |
| Very large JSON / streaming | Use the SAX interface (json::sax_parse) or parse from iterators |
Quick reference — most used methods
cpp
json j = ...;

// Serialize
std::string        str  = j.dump();             // compact
std::string        nice = j.dump(4);            // pretty
std::vector<uint8_t> cb = json::to_cbor(j);     // binary

// Deserialize
json j2 = json::parse(str);                     // from string
json j3 = json::from_cbor(binary_vector);

// Convert to C++ type
Person p = j.get<Person>();
j.get_to(p);                // existing object
auto val  = j.value("key", 42);                          // default if key is missing
auto deep = j.value("/path/to/value"_json_pointer, 42);  // JSON-pointer overload
                                                         // (needs using namespace nlohmann::literals)
The library is still very actively maintained (v3.12.x in 2025–2026), header-only, fast, and one of the most widely used JSON solutions in modern C++.

Which part would you like to see more examples of — enums, variants, skipping fields, binary formats, error handling during deserialization, or something else?
How is data structure in nlohmann::json implemented?
nlohmann::json (the popular single-header JSON library for modern C++) stores every JSON value using an efficient tagged-union design internally. The central class is called nlohmann::basic_json (most people just use the convenience alias nlohmann::json = basic_json<…> with default template arguments).

Core internal representation (2025/2026 – v3.11+ / v3.12.x)

Every json object contains roughly these members:
cpp
class basic_json
{
    // ── 1. Type tag ────────────────────────────────
    value_t m_type = value_t::null;   // 1 byte enum

    // ── 2. The actual payload ──────────────────────
    json_value m_value {};            // tagged union

    // ── Optional small fields ──────────────────────
    // (some extra bits used only for certain types)
};
1. value_t – the type tag (enum, usually 1 byte)
cpp
enum class value_t : std::uint8_t
{
    null,         // 0
    object,       // 1
    array,        // 2
    string,       // 3
    boolean,      // 4
    number_integer,
    number_unsigned,
    number_float,
    binary,       // CBOR/BSON/MsgPack/UBJSON support
    discarded     // internal helper
};
This small enum tells you (and the library) what is actually stored inside the union.

2. json_value – the tagged union itself

It's implemented as a plain C++ union (not std::variant!) containing pointers or small values:
cpp
union json_value
{
    object_t* object;           // std::map<std::string, basic_json>   by default
    array_t*  array;            // std::vector<basic_json>
    string_t* string;           // std::string
    binary_t* binary;           // std::vector<std::uint8_t>
    boolean_t boolean;          // bool
    number_integer_t   number_integer;   // int64_t   by default
    number_unsigned_t  number_unsigned;  // uint64_t
    number_float_t     number_float;     // double

    // ── very small optimization for small strings (SSO-like) ──
    // (not always enabled – depends on compile-time macro)
};
Most important points:
  • Objects → std::map<std::string, json> (ordered by default)
  • Arrays → std::vector<json>
  • Strings → std::string* (the string object is heap-allocated; SSO may still apply to its character buffer)
  • Numbers → stored directly in the union (no heap allocation)
  • bool, null → also stored directly (very cheap)

→ This means small JSON documents (numbers, booleans, short strings, small arrays/objects) can be very memory-efficient.

Memory layout – typical sizes (64-bit system)
| JSON type | Size of json object | Heap allocation? | Notes |
|---|---|---|---|
| null | ~16–24 B | no | just the tag |
| bool | ~16–24 B | no | |
| number (int64) | ~16–24 B | no | |
| short string | ~32–48 B | usually no extra (SSO) | depends on libstdc++/libc++ SSO capacity |
| long string | ~32–48 B + string | yes | |
| small array | ~32–48 B + vector | yes (capacity grows) | vector capacity usually starts small |
| large object | ~32–48 B + map | yes | red-black tree nodes |
A typical empty json object is ~16 bytes on 64-bit platforms (1-byte tag, padding, and the union, whose largest member is a pointer = 8 B).

Design trade-offs
| Goal | Choice made by nlohmann/json |
|---|---|
| Convenient API | very high priority |
| Type-punning safety | union + explicit tag |
| Memory usage for small values | good (direct storage, SSO possible via std::string) |
| Ordered objects | yes (std::map) by default |
| Fast lookup | acceptable (not as fast as unordered_map) |
| Customizability | very high (basic_json has many template parameters) |
| Binary format support | yes (binary_t) |
You can change almost everything via template parameters of basic_json:
cpp
using my_json = nlohmann::basic_json<
    std::unordered_map,   // ObjectType: faster lookup, unordered keys
    std::vector,          // ArrayType
    std::string,          // StringType
    bool,                 // BooleanType
    std::int64_t,         // NumberIntegerType
    std::uint64_t,        // NumberUnsignedType
    double                // NumberFloatType
>;  // AllocatorType, JSONSerializer and BinaryType keep their defaults
Many production projects use customized versions with std::unordered_map when object order doesn't matter.

Summary – most important sentence

nlohmann::json uses a classic C-style tagged union (enum + union of pointers + small types) rather than std::variant, combined with heap-allocated containers (std::vector / std::map) for arrays and objects. This gives a good balance between usability, performance, and memory consumption for typical JSON sizes (< a few MB).

If you're curious about the details or want to see how a specific type is handled (e.g. small-string optimization, binary values, move semantics), the single header json.hpp is surprisingly readable – search for union json_value or enum class value_t. 😄

Do you want more detail on any particular aspect (custom allocators, SSO behavior, binary/CBOR layout, move semantics, etc.)?
Compare to simdjson
nlohmann::json vs simdjson — comparison (as of early 2026)

Both libraries are very popular in the C++ world, but they serve fundamentally different goals and use very different internal designs.
| Aspect | nlohmann::json | simdjson | Winner (2026 perspective) |
|---|---|---|---|
| Primary goal | Maximum convenience, "JSON feels like a first-class type" | Maximum parsing speed (often GB/s range) | Depends on your need |
| Parsing speed | ~80–150 MB/s (typical benchmarks) | 1–4+ GB/s (often 10–30× faster than nlohmann) | simdjson (by far) |
| Serialization speed | ~50–150 MB/s | Very fast when using the DOM builder (~1–2 GB/s possible) | simdjson (usually) |
| Ease of use / API | ★★★★★ — very intuitive, almost Python-like | ★★☆☆☆ — more verbose, especially the On-Demand API | nlohmann (much nicer) |
| Memory usage | Higher (full DOM, std::map/std::vector, strings copied) | Much lower possible (especially On-Demand – almost zero-copy) | simdjson |
| Mutability | Full read-write DOM, easy to modify | DOM mode: yes; On-Demand: read-only / forward-only | nlohmann |
| Random access | Yes – full DOM tree | DOM mode: yes; On-Demand: no (forward-only, must consume in order) | nlohmann |
| Lazy / on-demand parsing | No – everything parsed upfront | Yes – very strong On-Demand mode (parses only what you actually request) | simdjson |
| Custom struct (de)ser | Excellent (macros, ADL to_json/from_json, very clean) | More manual today (static reflection may improve this in future versions) | nlohmann (currently easier) |
| Binary formats | CBOR, BSON, MessagePack, UBJSON | None (JSON only) | nlohmann |
| Dependencies | None (header-only) | None (amalgamated single header + source, often built as a lib) | Tie |
| Typical size | Single large header (several hundred kB) | Smaller core header, but SIMD code can be large when compiled | — |
| Best for | Config files, small/medium JSON, prototyping, tools, when developer time > CPU | High-throughput servers, log processing, huge JSON files, low-latency APIs | — |
| 2025–2026 community view | Still #1 for convenience, "go-to when perf doesn't matter" | "The fastest", but many people moved to Glaze / yyjson / others for even better trade-offs | Depends… |
Key architectural differences
  • nlohmann::json
    Classic full DOM approach → parses the entire document into a tree of objects/arrays/strings/numbers.
    Uses tagged union + std::map (ordered) or customizable to std::unordered_map.
    Very flexible, but pays the price in memory allocations and speed.
  • simdjson
    Stage 1 — ultra-fast SIMD-based tape/tokenizer (~few GB/s) that builds a flat "tape" of tokens (very cache-friendly).
    Two main APIs:
    • DOM mode → builds a full tree (still much faster than nlohmann because stage 1 is so quick)
    • On-Demand mode → lazy / zero-copy / forward-only — you iterate through keys/values only once; the parser advances only as far as you read → lowest memory, lowest latency for partial access

Most people who switch to simdjson use On-Demand for real speed gains.

Real-world numbers (approximate, from 2024–2025 benchmarks)
| Scenario | nlohmann::json | simdjson (On-Demand) | simdjson (DOM) | Glaze (newer competitor) |
|---|---|---|---|---|
| Parse twitter.json (~600 kB) | ~0.1 GB/s | ~2–3 GB/s | ~1.5–2 GB/s | ~1–1.5 GB/s |
| Full round-trip (parse+print) | ~0.08 GB/s | N/A (not direct) | ~0.8–1.2 GB/s | ~1.2–1.4 GB/s |
| Extract 1 field from 10 MB | slow | extremely fast | slowish | medium |
When to choose which (2026 recommendation)

Use nlohmann::json if:
  • You want beautiful, ergonomic code
  • JSON is small (< few MB) or performance is not critical
  • You frequently build/modify JSON documents
  • You need rich features (binary formats, custom serializers, allocators)
Use simdjson if:
  • You process large JSON streams/files (>10–100 MB)
  • Latency or throughput matters (microservices, logs, telemetry, databases)
  • You mostly read (extract few fields, validate, route) and rarely modify
  • You can live with forward-only access or use DOM mode when needed
Many teams in 2025–2026 actually end up using both in the same project:
  • nlohmann::json for configuration / small objects / serialization to strings
  • simdjson for ingesting large payloads from network / disk
Newer libraries like Glaze (C++23 reflection-based) are stealing some spotlight — often faster than both for struct ↔ JSON round-trips, and almost as convenient as nlohmann.

Do you have a specific use-case in mind (config file, server request body, huge log file, struct serialization…)? That usually decides the winner very clearly. 😄
Compare with Glaze library
nlohmann::json vs Glaze — comparison (early 2026 perspective)

Glaze (github.com/stephenberry/glaze) is a modern, header-only C++ JSON (and more) library that exploded in popularity around 2023–2025, especially among people who want very high performance plus minimal boilerplate for struct ↔ JSON round-trips. It frequently appears in 2025 discussions as one of the top choices when replacing nlohmann::json in performance-sensitive code.
| Aspect | nlohmann::json | Glaze | Winner (2026 view) |
|---|---|---|---|
| Primary goal | Convenience, rich DOM, "JSON as first-class citizen" | Extremely fast struct (de)serialization + low-boilerplate reflection | Depends on use-case |
| Parsing speed (full doc) | ~80–150 MB/s | ~1–1.4 GB/s (often 10–15× faster than nlohmann) | Glaze |
| Serialization speed | ~50–150 MB/s | ~1–1.4 GB/s (frequently #1 or #2 in benchmarks) | Glaze |
| Round-trip (struct ↔ JSON) | Slow (requires manual to_json/from_json) | Extremely fast (compile-time mapping) | Glaze (by a huge margin) |
| Ease of use for structs | ★★★★☆ (macros or ADL functions, some boilerplate) | ★★★★★ (almost zero code for aggregate structs) | Glaze |
| Generic JSON DOM | Excellent (json class, very ergonomic) | Good (glz::json_t, similar to nlohmann), but not the main focus | nlohmann |
| Memory usage | Higher (full tree of allocations) | Lower (direct mapping into your structs) | Glaze |
| Mutability / random access | Full mutable DOM | DOM mode exists, but the focus is on direct struct I/O | nlohmann |
| Reflection / boilerplate | Manual or macro-based | Compile-time reflection (C++20/23), minimal code for structs | Glaze |
| C++ standard required | C++11 (very wide compatibility) | C++20/23 recommended (strongest on newer compilers) | nlohmann (broader compat) |
| Binary formats | CBOR, BSON, MessagePack, UBJSON | BEVE (its own fast binary format), plus others | Tie / slight edge to nlohmann |
| Error handling | Very good (exceptions or error codes) | Excellent (strong error messages, no exceptions by default) | Glaze often praised |
| Typical size | Single large header | Comparable, header-only | Tie |
| Best for | Configs, prototyping, tools, full DOM manipulation | High-perf servers, config loading, game engines, struct-heavy code | — |
| 2025–2026 community | Still the default "safe" choice, huge ecosystem | Rapidly rising "fast & modern" choice, many migrations from nlohmann | Glaze gaining fast |
Core architectural differences
  • nlohmann::json
    Full mutable DOM tree built from tagged union + std::map/std::vector.
    You parse → get a json object → read/write freely → dump.
    Serialization of custom types needs explicit to_json / from_json (or macros).
  • Glaze
    Focuses on zero/low-copy, in-memory (de)serialization directly to/from your C++ structs.
    Uses compile-time reflection (C++20 concepts + macros / member lists) to map struct members to JSON keys automatically.
    For many structs you write almost no code:
    cpp
    struct Person {
        std::string name;
        int age{};
        bool active{};
        std::vector<std::string> hobbies;
    };
    
    // That's it — no to_json/from_json needed!
    Person p;
    glz::read_json(p, json_string);   // or read_file
    glz::write_json(p, out_buffer);
    It also offers glz::json_t (DOM-like) when you need generic JSON handling.
Real-world benchmark snapshot (mid-2025, from Glaze repo & independent tests)
| Library | Round-trip time (s) | Write (MB/s) | Read (MB/s) | Notes |
|---|---|---|---|---|
| Glaze | ~1.0 | ~1400 | ~1200 | Often fastest overall |
| simdjson (on-demand) | N/A | N/A | ~1100–1200 | Best for partial / streaming reads |
| yyjson | ~1.2 | ~1000 | ~1100 | Very strong C library |
| nlohmann::json | ~15 | ~80–90 | ~80–90 | 10–15× slower on full round-trips |
Glaze frequently beats or matches simdjson on full-document parse/write when you deserialize straight into structs (no DOM intermediate step).

When to choose which (2026 recommendation)

Choose Glaze if:
  • You mainly (de)serialize structs/classes (configs, messages, game data, RPC, etc.)
  • Performance matters (server, startup time, mobile, games…)
  • You use C++20/23 and aggregates / reflection-friendly code
  • You hate writing serialization boilerplate
Choose nlohmann::json if:
  • You need a rich, mutable DOM (build JSON dynamically, inspect arbitrary data)
  • Broad compiler compatibility (C++11/14 projects)
  • You already have a large codebase using it (ecosystem, examples, Stack Overflow)
  • You frequently work with generic / unknown JSON shapes
Many teams in 2025–2026 are doing gradual migrations:
  • Keep nlohmann for dynamic JSON parts
  • Switch to Glaze for performance-critical struct paths
  • Or fully replace nlohmann when possible (especially in new projects)
Glaze is still younger, so its ecosystem/docs are smaller, but it's actively maintained and gaining momentum very quickly.

What is your typical use-case — mostly struct serialization, config files, parsing unknown JSON from network, or something else? That usually makes the choice obvious. 😄
Explain SSO behavior
Small String Optimization (SSO) in the context of nlohmann::json refers to how strings are stored inside a json object — specifically, whether short strings avoid heap allocation.

Short answer first

nlohmann::json does NOT implement its own SSO. Instead, it fully delegates string storage to std::string (the default string_t type). Whether SSO happens (and how many characters fit without heap allocation) depends 100% on your standard library implementation (libstdc++, libc++, MSVC's STL, etc.), not on nlohmann::json itself.

How it works in practice

Inside nlohmann::basic_json the string payload looks like this (simplified):
cpp
union json_value
{
    // ...
    string_t* string;   // ← pointer to std::string
    // ...
};
When you do j = "short"; or j["key"] = "hello";
  • nlohmann creates a std::string object (via the allocator — one heap allocation by itself)
  • and stores the pointer to that std::string inside the union
→ If your std::string uses SSO for that length → no extra heap allocation for the characters
→ If the string is too long → std::string allocates its character buffer on the heap as well
Typical SSO capacities in 2025–2026 (very approximate)

| Standard library | SSO buffer size (excluding null terminator) | Typical max SSO length | Notes / platform |
|---|---|---|---|
| libstdc++ (GCC) | 15–16 bytes | 15 chars | Most Linux distros |
| libc++ (Clang) | 22–23 bytes | 22 chars | macOS, many BSDs |
| MSVC STL | 15 bytes | 15 chars | Windows |
| Some embedded / custom | 7–31 bytes | varies | |
Examples:
cpp
json j;

// Usually SSO (no heap allocation for the characters) on most implementations
j = "hello";                  // 5 chars → SSO on all mainstream implementations
j = std::string(22, 'y');     // 22 chars → SSO on libc++ (22-char capacity), heap on libstdc++/MSVC

// Never SSO on mainstream implementations
j = std::string(30, 'x');     // 30 chars → heap in all mainstream impls
Memory & performance consequences

| Scenario | Heap allocations (typical) | Comment |
|---|---|---|
| Many small keys/values ("id", "name", "active", …) | 0 extra for string content | Great — very common in JSON |
| Long strings (URLs, base64, messages) | 1 heap alloc per string (from std::string) | Unavoidable |
| Very many tiny strings (<8 chars) | Often zero heap for content | Excellent cache locality |
| Creating / moving many json objects | Moves are pointer swaps | Move is cheap |
Can you get better SSO / no-heap strings?

Yes — several ways (all require a custom basic_json specialization):
  1. Use a short-string-optimized string type
    cpp
    // e.g. an SSO string type with a larger inline buffer
    using sso_string = /* some 32-byte SSO string */;

    using my_json = nlohmann::basic_json<
        std::map, std::vector, sso_string, ...>;
  2. Use a fixed-capacity small string (very aggressive) — libraries like folly's fbstring or custom union-based ones.
  3. In extremely memory-sensitive code people sometimes switch to flat buffers / custom string pools — but then you usually leave nlohmann::json.
Summary – practical takeaway
  • For most real JSON (keys are short, many values are short) → nlohmann::json + modern std::string already gives you SSO "for free" on small strings.
  • You usually don't notice the difference unless you profile very carefully or work with millions of tiny objects.
  • If you're bottlenecked on small-string allocations → either
    a) switch to a faster JSON lib (simdjson on-demand, Glaze direct-to-struct), or
    b) customize nlohmann with a better string_t.
Do you want to see how to measure whether SSO is active in your setup (small test code), or are you thinking about replacing std::string with something more aggressive? 😄
RapidJSON comparison
RapidJSON vs nlohmann::json — comparison (as of early 2026)

RapidJSON (from Tencent, released ~2012–2013, still actively used/maintained) is one of the classic high-performance C++ JSON libraries. It was long considered the "fast" choice before simdjson (2018+) and newer libraries like Glaze appeared. nlohmann::json focuses on a modern, ergonomic API.
| Aspect | nlohmann::json | RapidJSON | Winner (2026 perspective) |
|---|---|---|---|
| Primary goal | Convenience & modern C++ feel | High performance + low memory | Depends on priority |
| Parsing speed | ~80–150 MB/s (full DOM) | ~300–700 MB/s (depending on config, in-situ/DOM) | RapidJSON (3–5× faster) |
| Serialization speed | ~50–150 MB/s | ~250–500 MB/s | RapidJSON |
| Round-trip (struct ↔ JSON) | Slow (manual ADL or macros) | Fast when using templates / custom code | RapidJSON (when optimized) |
| Ease of use / API | ★★★★★ — Python-like, very intuitive | ★★☆☆☆ — more manual, SAX/DOM styles, verbose | nlohmann (much nicer) |
| Memory usage | Higher (many small allocations, std::map) | Very low (arena / memory-pool allocator, in-situ parsing possible) | RapidJSON |
| Mutability / DOM | Full mutable DOM, easy random access | Mutable DOM, but designed for low allocation; in-situ mode constrains string storage | nlohmann (easier mutation) |
| Custom struct ser/deser | Excellent (macros, ADL to_json/from_json) | Good (templates, manual mapping, no reflection) | nlohmann (less boilerplate) |
| Lazy / on-demand parsing | No | Partial (SAX-style streaming possible) | RapidJSON better for streaming |
| Binary formats | CBOR, BSON, MessagePack, UBJSON | None built in (can be extended) | nlohmann |
| SIMD usage | No | Yes (some hand-optimized paths, not as aggressive as simdjson) | RapidJSON |
| C++ standard | C++11+ (very broad) | C++03+ (extremely portable) | RapidJSON (older codebases) |
| Dependencies | None (header-only) | None (header-only) | Tie |
| Typical size | Single large header | Smaller (~200–300 kB) | RapidJSON lighter |
| Best for | Configs, tools, prototyping, dynamic JSON | Games, servers, mobile, large files, low-latency/high-throughput | — |
| 2025–2026 community | Still the default for convenience, huge adoption | Still used where perf/memory is critical, but losing ground to Glaze/simdjson | nlohmann more "default" |
Core architectural differences
  • nlohmann::json
    Full mutable DOM built with tagged union + std::map/std::vector + std::string*.
    Everything parsed upfront → convenient but pays in allocations & speed.
    Great for build/modify/dump workflows.
  • RapidJSON
    DOM + SAX (streaming) modes.
    Very aggressive on memory:
    • In-situ parsing — parses directly into input buffer (zero-copy for strings/numbers when possible)
    • MemoryPoolAllocator — arena-style, very few individual allocations
    • Generic templates allow custom allocators / encodings easily
      Designed for games/mobile where you want predictable low memory and fast parse.
Real-world benchmark snapshot (mid-2025, from independent sources like Glaze's json_performance repo)
| Library | Round-trip time (s) | Write (MB/s) | Read (MB/s) | Notes (single-core, modern hardware) |
|---|---|---|---|---|
| Glaze | ~1.0 | ~1400 | ~1200 | Often fastest overall (2025) |
| simdjson | N/A (on-demand) | N/A | ~1100–1200 | Best for partial reads |
| RapidJSON | ~3.7–4 | ~280–420 | ~400–500 | Still very strong |
| nlohmann | ~15 | ~80–90 | ~80–90 | 4–10× slower on round-trips |
→ RapidJSON is typically 3–5× faster than nlohmann on parse/serialize, but Glaze and simdjson have overtaken it in many scenarios by 2025–2026.

When to choose which (2026 recommendation)

Choose RapidJSON if:
  • You need very low memory footprint (mobile, embedded, games)
  • You parse large JSON frequently and want predictable performance
  • You can accept more manual code / SAX style for streaming
  • You're in a legacy codebase or need C++03 compatibility
Choose nlohmann::json if:
  • Developer experience & readable code is priority #1
  • JSON is small/medium or not in hot path
  • You build/modify JSON a lot (dynamic objects, nested maps)
  • You want binary formats (CBOR, BSON) out of the box
Many projects in 2025–2026 are moving away from both:
  • To Glaze → for fast struct ↔ JSON with almost no boilerplate (C++20/23)
  • To simdjson → when you mostly extract few fields from huge payloads
  • Or keep nlohmann only for non-critical parts
If your use-case is mostly struct serialization (configs, messages), Glaze usually wins today.
If it's generic DOM manipulation, nlohmann still feels best.
If it's raw speed + low memory, RapidJSON remains very competitive (but no longer the undisputed king).
What is your main scenario — high-throughput server, game engine, config loading, large file processing, or something else? That usually picks the clear winner. 😄
C++ reflection libraries for variants
In C++ (as of January 2026), true compile-time reflection for std::variant (and most other types) is not yet available in the standard — the first batch of static reflection features landed in draft C++26 (via P2996R12 and related papers), but compilers are still in early experimental stages (Clang fork by Bloomberg, GCC 16 experiments, EDG prototype).

Full C++26 reflection will eventually make reflecting over std::variant alternatives trivial at compile time (e.g. generating visitors, serializers, string names, or even custom variant-like types automatically), but right now you must rely on libraries, macros, or manual boilerplate.

Here's a realistic overview of what's available today for reflecting over variants (introspection of alternatives, names, indices, generating visitors, serialization, etc.).

1. No reflection → manual / macro approaches (most common in production 2025–2026)
| Approach / Library | Reflection style | Variant support level | Boilerplate | Performance | Notes / status 2026 |
|---|---|---|---|---|---|
| Manual std::visit + overloads | None | Excellent (std::visit) | Medium–high | Optimal | Baseline – type-safe, no extra deps |
| Boost.Mp11 / Hana | MP / type lists | Good (via mp11::mp_at, etc.) | Medium | Excellent | Compile-time heavy |
| magic_get / Boost.PFR | Field reflection | Weak (structs only, not variants directly) | Low | Good | Not for variants |
| reflect-cpp (getml) | C++20 struct reflection | Very good (rfl::Variant alternative to std::variant) | Very low | Very high | Fast JSON/Avro/etc. ser/de; rfl::TaggedUnion |
| Glaze | C++20/23 struct reflection | Good (via glz::json_t or custom) | Very low | Extremely high | Often fastest JSON round-trip |
| nlohmann/json | ADL + macros | Good (custom to_json/from_json) | Medium | Medium | Most convenient DOM |
| simdjson | On-demand / DOM | Partial (manual visitors) | Medium | Extremely high | Best for huge payloads |
2. Libraries that offer some form of variant reflection / introspection
  • reflect-cpp (github.com/getml/reflect-cpp)
    → One of the nicest for variants in 2025–2026.
    It provides rfl::Variant<Ts...> (faster-compiling alternative to std::variant) and rfl::TaggedUnion (tagged variant with explicit names → great for serialization).
    Reflection is struct-focused, but variants benefit indirectly via fast compile-time mapping.
    Usage example:
    cpp
    #include <rfl.hpp>
    #include <rfl/json.hpp>
    
    using MyVar = rfl::Variant<int, std::string, bool>;
    
    struct Data {
        std::string name;
        MyVar value;
    };
    
    // automatic JSON round-trip (rfl::json::write is the documented API)
    const std::string json = rfl::json::write(Data{"test", 42});
    → Recommended if you want fast serialization + variants without much code.
  • metapp (github.com/wqking/metapp)
    → True runtime reflection library (C++11+).
    Supports std::variant directly — you can query type info, active index, cast, etc. at runtime.
    Non-intrusive, lightweight, powerful for meta-programming / scripting / serialization.
    Example:
    cpp
    #include "metapp/allmetatypes.h"
    
    metapp::Variant v = std::variant<int, std::string>{42};
    const metapp::MetaType* metaType = v.getMetaType();
    if (metaType->getTypeKind() == metapp::tkStdVariant) {
        auto index = v.get<std::variant<int, std::string>&>().index();
        // ...
    }
    → Good when you truly need runtime type discovery (e.g. dynamic systems, debug tools).
  • Boost.Describe + Boost.Mp11 (or similar MP libs)
    → Can list the types in a std::variant via std::variant_alternative_t<I, V> expanded over a std::index_sequence.
    → No names or automatic visitors, but you can build compile-time lists.
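The compile-time list mentioned in the last bullet needs no Boost at all — a sketch using only std::variant_alternative_t walked over an index sequence, collecting the (compiler-specific, mangled) typeid names of every alternative:

```cpp
#include <cstddef>
#include <string>
#include <typeinfo>
#include <utility>
#include <variant>
#include <vector>

// Enumerate all alternatives of a variant type V at compile time
// and collect their RTTI names at runtime.
template <class V>
std::vector<std::string> alternative_names() {
    return []<std::size_t... Is>(std::index_sequence<Is...>) {
        return std::vector<std::string>{
            typeid(std::variant_alternative_t<Is, V>).name()...
        };
    }(std::make_index_sequence<std::variant_size_v<V>>{});
}
```

The same index-sequence walk is how libraries build per-alternative visitor or serializer tables; only the body of the expansion changes.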
3. Best practical patterns right now (C++20/23)

A. Exhaustive visitor with overload (zero deps, C++17+)
cpp
template<class... Ts> struct overload : Ts... { using Ts::operator()...; };
template<class... Ts> overload(Ts...) -> overload<Ts...>;

std::visit(overload{
    [](int i)                { std::print("int: {}\n", i); },  // std::print needs C++23 (<print>)
    [](const std::string& s) { std::print("str: {}\n", s); }
    // no catch-all needed: if an alternative is unhandled, std::visit fails
    // to compile, which is the exhaustiveness check (a catch-all containing
    // a bare static_assert(false) is ill-formed before C++23)
}, my_variant);
B. Compile-time index ↔ type mapping (common pattern)
cpp
template <class V, class F, std::size_t... Is>
void visit_by_index_impl(std::size_t idx, F&& f, std::index_sequence<Is...>) {
    using Thunk = void (*)(F&);
    // one thunk per alternative; each passes its index tag to f
    static constexpr Thunk table[]{
        [](F& fn) { fn(std::in_place_index<Is>); }...
    };
    table[idx](f);
}

// map a runtime index to the matching alternative type of variant V
template <class V, class F>
void visit_by_index(std::size_t idx, F&& f) {
    visit_by_index_impl<V>(idx, std::forward<F>(f),
                           std::make_index_sequence<std::variant_size_v<V>>{});
}

// usage (C++20): emplace the alternative selected by a runtime index
std::variant<int, std::string> var;
visit_by_index<decltype(var)>(1, [&]<std::size_t I>(std::in_place_index_t<I>) {
    var.emplace<I>();   // var now holds a std::string
});
C. With reflection coming in C++26 (experimental today)

Once compilers support P2996-style reflection (the ^^ operator plus std::meta), you'll be able to do things like:
cpp
// future C++26 pseudocode (P2996-style; exact std::meta names still in flux)
constexpr auto refl = ^^MyVariant;
constexpr auto alts = std::meta::template_arguments_of(refl); // the variant's alternatives
for_each(alts, []<auto R>{
    using T = [:R:];   // splice the reflected type back into code
    // generate visitor, serializer, etc.
});
Right now, only experimental compiler forks let you try this.

Recommendation (January 2026)
  • You want fast & low-boilerplate serialization + variants → reflect-cpp (rfl::Variant) or Glaze
  • You need true runtime reflection / meta info on variants → metapp
  • You want maximum performance + no deps + manual control → stick to std::variant + std::visit + overloads
  • You want the future-proof path → experiment with Clang P2996 fork + wait for C++26 proper (2026–2028 adoption wave)
What exact problem are you trying to solve with variant reflection (serialization, visitor generation, dynamic dispatch, debugging, something else)? That usually narrows it down to 1–2 best choices. 😄