On the new opcode abstractions

While the new opcode abstractions are a step forward for compiler code clarity, they underscore the absence of a shared specification. Without one, the compiler and interpreter operate on implicit agreements that are prone to strange errors.

It’s almost like one skipped this step in the process.

A bytecode specification plays a crucial role in any virtual machine-based language. It is the authoritative contract between the compiler, which emits bytecode, and the interpreter, which runs it.

A well-defined, machine-verifiable specification makes this contract explicit rather than implied. It outlines:

  • The list of opcodes
  • Their binary representation
  • Operand types
  • Instruction lengths and invariants

This isn’t just documentation — when shared between the compiler and VM, the spec can automatically validate correctness and generate boilerplate safely, avoiding many classes of errors that otherwise show up only at runtime.


The interpreter still consumes raw bytes directly (like before!). If the NEW compiler-side opcode abstractions diverge — in ordering, layout, or semantics — nothing is enforcing that the interpreter remains in sync.


What Can Go Wrong Without a Spec?

A LOT

These problems are subtle. Code may compile but misbehave in edge cases, especially with less common opcodes or combinations.


The goal: a single definition, verified across both layers. In other words, a formal machine-verifiable specification that itself needs to be tested as well. (Yes, it is not rare for specifications themselves to contain mistakes.)

Thanks for having a look and taking interest!

What is a machine-verifiable specification?

This PR encodes these things in the Opcode types:

  • The list of opcodes
  • Their binary representation
  • Operand types
  • Instruction lengths and invariants

Some of these things are enforced by the type system; the others are asserted in debug mode.
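As a rough illustration of that split (the names here are made up, not the PR’s actual types): the operand’s type carries what the compiler can check, and a debug assert covers what it can’t.

#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical operand wrapper: the type says "this operand is a nibble",
// and the debug-only assert checks what the type alone cannot express.
struct Nibble {
    uint8_t value;
    explicit Nibble(uint8_t v) : value(v) {
        assert(value <= 0x0F && "operand must fit in four bits");
    }
};

// Hypothetical emit function: callers must construct a Nibble explicitly, so
// passing a raw byte where a nibble is expected does not compile.
inline void emitSendSpecialMsg(std::vector<uint8_t>& out, Nibble selectorIndex, uint8_t numArgs) {
    out.push_back(selectorIndex.value); // encoding here is purely illustrative
    out.push_back(numArgs);
}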

These problems are subtle. Code may compile but misbehave in edge cases, especially with less common opcodes or combinations.

The presence of a formal spec doesn’t ensure that the interpreter implements it correctly. If we were to make a spec now, it would simply describe what the interpreter does today (warts and all). I don’t think that would be useful. Perhaps if we were designing a language from scratch this would be useful, but due to backwards compatibility, the correct behaviour of sclang is its current behaviour. In my opinion, it would be more useful to have specific tests for unusual parts of the language (I’ve added a few regarding int literals), although simply compiling the class library and running the test suite is pretty thorough.

You mention that much of the structure is encoded in the types and asserted in debug builds, and that’s not bad. The missing piece is that there’s no shared source of truth between both layers. Right now, the interpreter consumes raw bytes, and the compiler emits them via abstractions that are, in a way, created detached from it. If either side changes (even innocently, a tiny thing), there’s no structural way to detect desyncs other than runtime behavior or failing tests.

I accept your point: formal specs don’t catch bugs on their own. But they are fundamental. Besides, they can enable tools that do:

  • Round-trip tests
  • Auto-generated opcode tables
  • Scripts to compare the two layers
  • A clear way to review what each opcode expects

Even just a declarative table could make tests more meaningful and bridge the gap between the high-level abstractions and the raw reads.
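For instance (purely a sketch, with invented names and byte values rather than sclang’s real encoding), even a tiny constexpr table gives tests something explicit to check against:

#include <cstddef>
#include <cstdint>
#include <iterator>

// Hypothetical declarative table: name, byte value, operand count.
struct OpcodeInfo {
    const char* name;
    uint8_t byte;
    uint8_t numOperands;
};

inline constexpr OpcodeInfo kOpcodeTable[] = {
    { "PushLiteral", 0x01, 1 },  // placeholder values, not sclang's encoding
    { "SendMsg",     0x02, 2 },
    { "JumpIfFalse", 0x03, 2 },
};

// Tests can then assert properties of the table itself, e.g. that byte values
// are unique, instead of relying only on runtime behaviour.
constexpr bool bytesAreUnique() {
    for (std::size_t i = 0; i < std::size(kOpcodeTable); ++i)
        for (std::size_t j = i + 1; j < std::size(kOpcodeTable); ++j)
            if (kOpcodeTable[i].byte == kOpcodeTable[j].byte)
                return false;
    return true;
}
static_assert(bytesAreUnique(), "duplicate opcode byte in table");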

Jordan, I mean this as a constructive critique. If we are going to touch the current raw bytecodes, we should do it in a good way. Readability is not enough (actually, now we have to change two layers to modify one thing). I think a specification is not controversial here.

This is of course just my opinion.

I mean this as a constructive critique.

This all sounds good but I still don’t understand what change you are proposing to the C++ code? Is there some tool that does all this? Some project that does this which we should be copying?

no shared source of truth between both layers.

I’ve attempted to encode this ‘truth’ in code so it can be enforced.

With this PR:

  • If a bytecode changes (the identifier bit, not the operands) and the value doesn’t collide, it will work so long as the labelled goto table is updated (I can’t find a way to enforce this; perhaps a generator could be used, but it seems overkill?). If it collides, it won’t compile.
  • If the number of operands changes, neither the compiler nor the interpreter will compile, because in the former case not enough arguments are given to the emit method, and in the latter the structured binding won’t have enough arguments.
  • If the types of the operands change, this one is a bit more subtle, as structured bindings don’t let you declare the types of the bound names, but in most cases there will be a compile error in the interpreter where the operand is used. In the compiler there will always be a compile error.
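A compressed sketch of the kind of breakage I mean (the names and encoding are invented for illustration, not the PR’s actual code): if an opcode’s operands change, both the typed emit call and the structured binding that decodes it stop compiling.

#include <cstdint>
#include <tuple>
#include <type_traits>
#include <vector>

// Hypothetical shared opcode description: the operand types are part of the
// opcode's definition, visible to both compiler and interpreter.
struct PushInt {
    static constexpr uint8_t code = 0x10;   // placeholder value
    using Operands = std::tuple<int32_t>;   // one 32-bit operand
};

// Compiler side: emit() only accepts exactly the operands the opcode declares.
template <typename Op, typename... Args>
void emit(std::vector<uint8_t>& out, Args... args) {
    static_assert(std::is_same_v<typename Op::Operands, std::tuple<Args...>>,
                  "wrong number or types of operands for this opcode");
    out.push_back(Op::code);
    ((void)args, ...); // real code would serialize the operands here
}

// Interpreter side: decode() yields the same tuple, so a structured binding
// with the wrong arity fails to compile.
template <typename Op>
typename Op::Operands decode(const uint8_t* /*ip*/) {
    return {}; // placeholder: real code would read the operand bytes
}

void example(std::vector<uint8_t>& out, const uint8_t* ip) {
    emit<PushInt>(out, int32_t{42});    // breaks if PushInt's operands change
    auto [value] = decode<PushInt>(ip); // breaks if the operand count changes
    (void)value;
}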

If we are going to touch the current raw bytecodes, we should do it in a good way.

If you have a concrete way to enforce any more of the opcode structure, or to otherwise make things more explicit, or any other specific feedback, I’d love to know and would gladly make the changes. I’m just struggling to understand what you are actually recommending.

Let’s see… A practical approach in C++ is to define one canonical opcode list and share it between the compiler and interpreter, for example an X-macro or a constexpr array of structs in a header. In this case (a speculation for now), each entry encodes the opcode name and metadata. This single list is then expanded to generate enums, tables, etc.

Going back to the main point: something like this ensures one source of truth: the compiler and interpreter include the same header/array.

To be fancy, why not use the shared list to generate interpreter code? For instance, an X-macro can generate the switch/case blocks or a dispatch table automatically.
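To make that concrete, here is a sketch (with invented names and byte values, nothing like SuperCollider’s actual opcode list):

#include <cstdint>

// Hypothetical X-macro list: the single source of truth lives here.
#define OPCODE_LIST(X)            \
    X(PushLiteral, 0x01, 1)       \
    X(SendMsg,     0x02, 2)       \
    X(JumpIfFalse, 0x03, 2)

// One expansion generates the enum...
enum class Opcode : uint8_t {
#define X(name, value, operands) name = value,
    OPCODE_LIST(X)
#undef X
};

// ...another generates an operand-count table the interpreter can consult.
inline constexpr uint8_t operandCount(Opcode op) {
    switch (op) {
#define X(name, value, operands) case Opcode::name: return operands;
        OPCODE_LIST(X)
#undef X
    }
    return 0;
}

Both sides include the same header, so adding, removing, or renumbering an opcode only touches the one list.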

C++ experts can refine the specifics of an implementation; the point is something that delivers an early warning and a “single source of truth” without much redesign.

(Of course, I tried to reply to your implementation question rather than talk about how important it is to have a specification.)

EDIT: With C++20 concepts or with type-tagging operands in the table, this will be safer.

X-macros are horrid, I refuse to use them.

The problem with a ‘constexpr array of structs in a header’ is that you now have to refer to them by index, not by name. What you need to do is make the number a part of the structure instead. This is exactly what I have done. If we had C++26 and the upcoming reflection, this could be simplified, but given we don’t, I think this is the best compromise.
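Roughly the shape I mean, with illustrative names and values rather than the PR’s actual code:

#include <cstdint>

// "The number is part of the structure": each opcode is its own type and
// carries its byte value and its operands, referred to by name, not by index.
struct PushLiteral {
    static constexpr uint8_t code = 0x01;
    uint16_t literalIndex;
};

struct SendMsg {
    static constexpr uint8_t code = 0x02;
    uint8_t numArgs;
    uint8_t selectorIndex;
};

// Two opcodes accidentally given the same byte value can be rejected at
// compile time, without a separate central table to keep in sync.
static_assert(PushLiteral::code != SendMsg::code, "opcode byte collision");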

Going back to the main point: something like this ensures one source of truth: the compiler and interpreter include the same header/array.

Yup that’s what they do. The compiler builds the opcodes, the interpreter uses them, but they include the same header. One source of ‘truth’. In fact, that was the whole point of this PR.

It appears you are just restating what I’ve done in the PR, but without considering the compromises one has to make during implementation. If you think there is a better approach or any changes given the technical constraints of the project to date, I’d be more than happy to consider it!

EDIT: With C++20 concepts or with type-tagging operands in the table, this will be safer.

No it won’t, all the constraints (bar the goto labels) of the opcodes can be expressed in C++17. Concepts just give you a better alternative to SFINAE (which I don’t use because it isn’t needed).

Operands are typed in this PR.

Oh, let’s slow down…

My critique wasn’t about implementation details, but about fundamental architectural principles. You seem to have misconstrued my point about specifications as merely suggesting alternative C++.

Yes, your PR implements a shared header file, but my concern was broader - about having an explicit, documented contract between compiler and interpreter components. This isn’t just about code organization - it’s about system architecture.

Your dismissive response to suggestions (“X-macros are horrid”) misses the forest for the trees.

The improvements in readability are valuable, but they don’t automatically verify the guarantees that a proper specification would. Your approach does address some of the type-safety concerns through C++17 constraints, but it doesn’t provide the level of formalism that would make the contract truly explicit.

I stand by my original point: a language like that benefits significantly from having a formal, machine-verifiable specification that serves as a contract between components.

Also, think of subtle desynchronization. Types enforce some constraints, but they don’t document intention or verify semantic correctness across the boundary. I raised these points not to criticize your specific implementation, but to emphasize that a formal specification matters.

I think the paragraph above says all that needs to be said.

A structured specification goes beyond what well-typed C++17 code can do.

Since a lot of the post was on implementation, just to remind us: C++20 concepts offer more than just replacing SFINAE - they provide clearer semantics that would benefit this situation.

Your implementation works, but a formal specification would centralize the semantic relationships across a complex boundary rather than scattering constraints throughout the code. I hope I am not the only one who sees the difference.

I see the point of writing a detailed specification of what the bytecodes do, but I think moss has really done this. The PR applies that specification, enforcing as much as possible in the type system, as I’ve tried to explain by giving examples, and showing that this PR meets many, if not all, of the criteria in the definition of a machine-verifiable specification you have given. If you think there is a specific case where this could be improved, I’d be happy to make changes.

Using concepts for operands is a bad idea. By using a specific type (as my PR does), the error will tell you that the operand needs to be of type X (e.g., Operand::BinayMathNibble), whereas a concept produces a complicated constraint-violation message. Fundamentally, concepts are used to match a set of types, whereas in the opcodes only one type should be accepted, since each opcode has a definitive signature, as the existing bytecode documentation tells us.
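To make the difference in diagnostics concrete (illustrative signatures only, not the PR’s actual code):

#include <cstdint>

// A dedicated operand type: the signature itself documents exactly what is
// expected, and passing a raw uint8_t is a compile error naming the required type.
struct BinaryMathNibble { uint8_t value; };

void emitBinaryOp(BinaryMathNibble op);

// A C++20 concept would instead accept any type satisfying the constraint,
// which is the wrong tool when exactly one type is valid:
//
//   template <typename T>
//   concept NibbleLike = requires(T t) { { t.value } -> std::convertible_to<uint8_t>; };
//
//   void emitBinaryOp(NibbleLike auto op);  // a mismatch reports a failed
//                                           // constraint, not the one type needed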

@jordan @julian @jamshark70

I was thinking and reading the code, and I believe this issue is not actually a dichotomy; there is a lot of space for hybrid solutions.

For example, including a formal bytecode specification that serves as the contract between compiler and interpreter.

  1. This approach would preserve your code as-is while making the specification explicit.

  2. The crucial, more challenging part of the implementation would be the verification tools.

  3. Your PR uses token-threaded dispatch. A ‘hybrid implementation’ COULD maintain this for performance, but add verification hooks (see the sketch after this list).

  4. The PR already includes test additions in TestOpcodes.sc. A “hybrid approach” would expand these to verify against the specification.
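A sketch of what such a hook could look like (validateBytecode() is hypothetical here; the toggle matches the SC_BYTECODE_VALIDATION flag mentioned below):

#include <cassert>

// Hypothetical debug-only hook: compiled away in release builds, so the
// token-threaded dispatch is untouched. validateBytecode() is assumed to
// check emitted bytecode against the specification table.
#ifdef SC_BYTECODE_VALIDATION
    #define SC_VALIDATE_BYTECODE(bytes, length) \
        assert(validateBytecode((bytes), (length)) && \
               "emitted bytecode violates the specification")
#else
    #define SC_VALIDATE_BYTECODE(bytes, length) ((void)0)
#endif

// The compiler would call the hook once per compiled method, e.g.
//   SC_VALIDATE_BYTECODE(methodCode.data(), methodCode.size());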

Benefits, as I see it now:

a) your performance-optimized implementation would remain intact, with the specification adding minimal overhead in release builds

b) toggling SC_DEBUG or SC_BYTECODE_VALIDATION would be more robust

c) specification would serve as the basis for generating documentation

d) tests could verify correctness against the specification rather than just testing behavior

e) hybrid approach could be implemented gradually


@jordan’s implementation-focused approach has already improved code organization by moving bit shifting and magic numbers into dedicated classes, but these classes can also have formal semantic definitions.

Dispatch mechanisms: the tension between inlined bytecode operations and general object-oriented message passing.

A specific example is SuperCollider’s method inlining strategy:

// Current approach often mixes implementation and semantics
if(condition) { trueExpr } { falseExpr }

Under the hood, this becomes specialized bytecode rather than a method call, but the specification needs to define when this happens vs. regular dispatch.

For example, see: falling back from inlining · Issue #3567 · supercollider/supercollider · GitHub

What is being verified exactly?


SuperCollider doesn’t currently have bytecode stack validation along the lines of the proposed idea (a formal spec). While the interpreter can detect certain runtime issues, such as underflow or overflow conditions during execution, there is no formal static validation that ensures bytecode correctness ahead of time.

Something like this would be compatible with the code so far (a sketch: the validator class and the spec-lookup helpers are assumed, not existing code). It would catch errors where different paths reach the same location with inconsistent stack states - a subtle but serious bug.


#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch only: BytecodeValidator, lookupOpcodeSpec(), isJumpOpcode(),
// decodeJumpOffset() and MAX_STACK_DEPTH are assumed to be declared elsewhere
// against the opcode specification table.
bool BytecodeValidator::validateAgainstSpecification(
    const std::vector<uint8_t>& bytecode)
{
    // Stack depth known at each bytecode offset; -1 means "not yet visited".
    std::vector<int> stackDepths(bytecode.size(), -1);
    if (!bytecode.empty())
        stackDepths[0] = 0; // execution starts with an empty stack

    // Walk through the bytecode, validating stack effects.
    for (size_t i = 0; i < bytecode.size();) {
        uint8_t opcode = bytecode[i];
        int currentDepth = stackDepths[i];
        // (A full validator would also handle instructions that are only
        // reachable via a later backward jump; this single pass does not.)

        // Look up this opcode in the specification.
        auto spec = lookupOpcodeSpec(opcode);

        // New stack depth after this instruction.
        int newDepth = currentDepth + spec.stackEffect;

        // The stack must neither underflow nor exceed the maximum.
        if (newDepth < 0 || newDepth > MAX_STACK_DEPTH) {
            return false;
        }

        // Control-flow operations: validate the jump target as well.
        if (isJumpOpcode(opcode)) {
            int jumpOffset = decodeJumpOffset(bytecode, i);
            auto targetPos = static_cast<ptrdiff_t>(i) + jumpOffset;

            // Verify the jump target lies within the bytecode.
            if (targetPos < 0 || targetPos >= static_cast<ptrdiff_t>(bytecode.size())) {
                return false;
            }

            // Verify the stack depth at the jump target: every path that
            // reaches it must agree.
            auto target = static_cast<size_t>(targetPos);
            if (stackDepths[target] == -1) {
                stackDepths[target] = newDepth;
            } else if (stackDepths[target] != newDepth) {
                // Stack depth mismatch at jump target.
                return false;
            }
        }

        // Advance to the next instruction.
        i += spec.instructionSize;

        // Propagate the depth to the fall-through successor, checking for
        // conflicts with any depth already recorded by a jump.
        if (i < bytecode.size()) {
            if (stackDepths[i] == -1) {
                stackDepths[i] = newDepth;
            } else if (stackDepths[i] != newDepth) {
                return false;
            }
        }
    }

    return true;
}

See: Chunks of Bytecode · Crafting Interpreters

For example, rather than the function containing logic to calculate stack effects from bytecode operations, it could simply call methods on the opcode classes:

// Current 
auto spec = lookupOpcodeSpec(opcode);
int newDepth = currentDepth + spec.stackEffect;

// With opcode classes
auto opcodeObj = OpcodeFactory::createFromByte(opcode);
int newDepth = currentDepth + opcodeObj->getStackEffect();

I would choose this as the first step towards a gradual hybrid architecture. It wouldn’t be a bad place to start.

@jamshark70 @julian Just to point out that this would be a totally relevant change in the context of that old discussion

if(condition) { trueExpr } { falseExpr }

Such elegance, if handled too close to the raw bytecode, can blur the line between implementation and semantics, especially in the context of bytecode generation/interpretation.