Strange (seemingly undocumented) list comprehension context `{;`, and other hidden gems in the parser, like infinite list generators

In the Bison (parser) code there’s this odd bit (the second alternative below) allowing a kind of list comprehension to start with {; instead of {: (i.e. a semicolon instead of a colon):

generator : '{' ':' exprseq { pushls(&generatorStack, $3); pushls(&generatorStack, 1); } ',' qual '}'
			{
				PyrSlot slot;
				SetSymbol(&slot, getsym("r"));
				PyrSlotNode* selectornode = newPyrSlotNode(&slot);

				PyrParseNode *block = (PyrParseNode*)newPyrBlockNode(0, 0, (PyrParseNode*)$6, false);
				PyrParseNode *blocklit = (PyrParseNode*)newPyrPushLitNode(NULL, block);
				$$ = (intptr_t)newPyrCallNode(selectornode, (PyrParseNode*)blocklit, 0, 0);
			}
		| '{' ';' exprseq { pushls(&generatorStack, $3); pushls(&generatorStack, 2); } ',' qual '}'
			{
				$$ = $6;
			}
		;

That second alternative is actually effective in that it begins the “generator context”, so you can write odd stuff like

// normal comprehension
{: x*2, x <- (2..6)}.all 
// -> [ 4, 6, 8, 10, 12 ]

// oddity
{; x*2, x <- (2..6)}
// -> 2

I’m not sure what the latter flavor is useful for. Any ideas?


I think I figured it out. It’s for when you want to explicitly yield from the generator/comprehension:

r = r { loop { {; yield(x*2), x <- (2..6)} } };
r.nextN(7);
// -> [ 4, 6, 8, 10, 12, 4, 6 ]

Perhaps more interesting: like in a routine, you can yield more than once “in a row” in that first expression of the generator.

r = r { loop { {; yield(x*2); yield(x**3), x <- (2..6)} } };
r.nextN(12);
// -> [ 4, 8.0, 6, 27.0, 8, 64.0, 10, 125.0, 12, 216.0, 4, 8.0 ]

This became obvious after looking at the “action” part of the parser code for generators/comprehensions; the yield is removed too (in addition to the r preamble) for “action 2”, which is what {; puts on the generator stack.

nextqual	:
				{
					// innermost part
					int action = popls(&generatorStack);
					PyrParseNode* expr = (PyrParseNode*)popls(&generatorStack);

					switch (action)
					{
						case 1 :
						{
							PyrSlot slot;
							SetSymbol(&slot, getsym("yield"));
							PyrSlotNode* selectornode = newPyrSlotNode(&slot);

							$$ = (intptr_t)newPyrCallNode(selectornode, expr, 0, 0);
						} break;
						case 2 :
						{
							$$ = (intptr_t)expr;
						} break;
					}
				}
			| ',' qual
				{ $$ = $2; }
			;
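
So if I read these actions right, the two forms desugar roughly like this (the series qual gets compiled into a forBy-style loop, per the s_series optimization mentioned below):

// {: x*2, x <- (2..6)} compiles roughly as
r { 2.forBy(6, 1, { |x| yield(x*2) }) }

// {; x*2, x <- (2..6)} compiles roughly as
2.forBy(6, 1, { |x| x*2 })
// forBy returns its receiver, which explains the mysterious // -> 2 above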

Super interesting, thanks. I use the regular list comprehension all the time; I wonder if I’ll see a use for this one.

I do wish the syntax was {x: x <- [1,2,3], etc. } though; the mathematician in me just types that without thinking!

You will find lots of interesting things if you examine the Bison code (: I have skimmed it a few times and I didn’t even know this was in there!

I guess it was done for ease of parsing. Two adjacent tokens are pretty unambiguous:

{/*haha*/:42, true}.next // works
{{/*haha still*/;88.yield, true}}.r.next // works

({ // Surprise, but I'm not a function
	:42,
	true
}.next) // works
({{ // perhaps more confusing
	;88.yield, // whaat, leading ;-)
	true
}}.r.next)

({: // I'm obviously not a function
	42,
	true
}.next) // works
({{; // ibid
	88.yield,
	true
}}.r.next)

With a real (non-comment/whitespace) expression allowed between { and the :, the parser would need lookahead to figure out what that thing/construct is. (SC’s parser doesn’t use the GLR feature in Bison.)


I think I found another interesting bit related to list comprehensions: you can use infinite series, and these (just like “predictable” finite ones) are optimized in the generator to not pre-generate and store the array.

{: x+y, x<-(1..), y<-(0, 100..300) }.nextN(9) 
// -> [ 1, 101, 201, 301, 2, 102, 202, 302, 3 ]

{: x+y, x<-(1, 5..), y<-(0, 100..300) }.nextN(9)
// -> [ 1, 101, 201, 301, 5, 105, 205, 305, 9 ]

There are no ..) occurrences in the docs for list comprehensions, so I think this also counts as a “discovery”, albeit a much more predictable one that someone would have tried. (The optimization code can be found by searching for s_series in the Bison code.)

Interestingly, that’s a different code-generation path from the one used for either SimpleNumber.series or SimpleNumber.seriesIter in sclang. SimpleNumber.series, which calls into the _SimpleNumberSeries primitive, doesn’t seem to like infinite series:

series(1, 3, inf).iter.nextN(9)
// ERROR: Primitive '_SimpleNumberSeries' failed.
series(1, 3, nil).iter.nextN(9) // ibid

Alas, when you write (1..) in a non-comprehension context, you actually get this less tolerant implementation.

{ (3..5) do: _.yield }.r.next // -> 3 
{ (3..) do: _.yield }.r.next //  ERROR: Primitive '_SimpleNumberSeries' failed.
{ (3..inf) do: _.yield }.r.next // ibid
{ (3, 5..inf) do: _.yield }.r.next // ibid
{ (3..nil) do: _.yield }.r.next // ibid

seriesIter, which is fully contained in a classlib method (no underlying primitive is called), does accept inf for the endpoint.

3.seriesIter.nextN(5) // -> [ 3, 4, 5, 6, 7 ]

But alas there’s no syntactic shortcut that will call into this one, it seems.

Ah, I see that series’s primitive (_SimpleNumberSeries) actually creates the array, for ints and for doubles alike; so there’s no hack here to avoid storage:

int prSimpleNumberSeries(struct VMGlobals* g, int numArgsPushed) {
    PyrSlot* a = g->sp - 2;
    PyrSlot* b = g->sp - 1;
    PyrSlot* c = g->sp;

    if (IsInt(a) && (IsInt(b) || IsNil(b)) && IsInt(c)) {
        // ....
        // the whole integer series is allocated as one array up front
        PyrObject* obj = newPyrArray(g->gc, size, 0, true);
        // ....
    } else {
        double first, second, last, step;
        // ....
        // and likewise for the floating-point case
        PyrObject* obj = newPyrArray(g->gc, size, 0, true);
        // ....
    }
    // ....
}

And there’s another less obvious one for comprehensions. The grammar from the docs only gives

<generator> ::= <name> "<-" <exprseq>

but the parser actually supports another, “paired” form, with a blank simply acting as the “comma” (because the actual comma is taken)…

{: [x,y], x y <- (3, 5..9)}.all
// -> [ [ 3, 0 ], [ 5, 1 ], [ 7, 2 ], [ 9, 3 ] ]

With a series you get a simple counter in the 2nd assignment position. I’m not sure how to use it with a do-able expression to assign something else.

Alas, Dictionary and its subclasses redefine do to iterate only over values, not key-value pairs, and define a new method keysValuesDo for the latter. So you can’t use x y <- to iterate over a dictionary’s key-value pairs in a list comprehension (directly like that).

(foo: 1, bar: 5).do {|x,y| [x,y].postln}
// [ 1, 0 ]
// [ 5, 1 ]

{: [x,y], x y <- (foo: 1, bar: 5)}.all
// -> [ [ 1, 0 ], [ 5, 1 ] ]
(Dictionary.do implementation)
Dictionary {

	do { arg function;
		this.keysValuesDo({ arg key, value, i;
			function.value(value, i);
		})
	}
}

So instead you still have to write something like

{: [x, d[x]], var d = (foo: 1, bar: 5), x <- d.keys}.all
// -> [ [ foo, 1 ], [ bar, 5 ] ]
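
Another workaround is to skip the comprehension and yield the pairs from a plain Routine via keysValuesDo (a minimal sketch):

r { (foo: 1, bar: 5).keysValuesDo { |k, v| [k, v].yield } }.all
// -> [ [ foo, 1 ], [ bar, 5 ] ] (in hash order)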

I’m actually wrong about that! You can write

(:1..).nextN(9) // -> [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]

The (: context creates a Routine rather than an array; (: seems to use seriesIter or an equivalent implementation that does support iteration over infinite series. In the Bison code the expression introduced by (: is a valrange3, as opposed to the valrange2 used for the more common ( ) context, which calls into series and ultimately _SimpleNumberSeries for array generation.
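
The difference is easy to check directly (a small sanity snippet, per the above):

(1..5).class // -> Array (built eagerly via _SimpleNumberSeries)
(:1..5).class // -> Routine (lazy, storageless)
(:1..).nextN(3) // -> [ 1, 2, 3 ], the infinite end is fine here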

Actually the (: syntax is documented in the Syntax Shortcuts page, although the infinite “end” isn’t mentioned.

creating arithmetic series

| instead of writing: | you can write: |
| --- | --- |
| Array.series(16,1,1) or series(1,nil,16) | (1..16) |
| Array.series(6,1,2) or series(1,3,11) | (1,3..11) |

There is also a similar syntax for creating an iterating Routine:

| instead of writing: | you can write: |
| --- | --- |
| seriesIter(1,3,11) | (:1,3..11) |

(I had no idea this site supports tables… and auto-creates them when you paste tabbed materials. That’s pretty cool!)

I guess you don’t see this (: much in SC code because everyone uses Pseqs a lot… and those require a fully instantiated array, list, etc., rather than a generator. There’s of course Pseries, which is a “storageless” generator and even allows patterns for its arguments. But that often seems overkill if you don’t actually plan to change those parameters dynamically… The following substitute should probably be used in simple examples more often.

Pbind(\foo, Pseq((1..3))).iter.all(())
// -> [ ( 'foo': 1 ), ( 'foo': 2 ), ( 'foo': 3 ) ]

Pbind(\foo, (:1..3)).iter.all(())
// -> [ ( 'foo': 1 ), ( 'foo': 2 ), ( 'foo': 3 ) ]

There is a subtlety with the latter if you reuse the same Pbind object though, as the (: stream gets instantiated when the Pbind is constructed, not when its iterator (stream) is constructed, i.e.

p = Pbind(\foo, Pseq((1..3)))
p.iter.next(()) // -> ( 'foo': 1 )
p.iter.next(()) // -> ( 'foo': 1 )

// versus

q = Pbind(\foo, (:1..3))
q.iter.next(()) // -> ( 'foo': 1 )
q.iter.next(()) // -> ( 'foo': 2 )

// of course (but then you're not saving any typing)

t = Plazy{Pbind(\foo, (:1..3))}
t.iter.next(()) // -> ( 'foo': 1 )
t.iter.next(()) // -> ( 'foo': 1 )

Using comprehension-based generators directly in Pbinds could have the same problem as the middle code above. Generating their entire array with {: /*stuff*/ }.all and wrapping that in a Pseq is a solution (to that kind of problem); but so is Plazy-wrapping of the Pbind, and the latter is a bit less typing than Pseq(.all).

Plazy{Pbind(\foo, {: x, x <- (5..19), x.isPrime})}
// vs
Pbind(\foo, Pseq({: x, x <- (5..19), x.isPrime}.all))

@RFluff: A series of brilliant pointers in this thread! Wouldn’t it be worth collecting them in a condensed form in the Resources/Learning category?

… or suggesting an update for the docs?

I actually abandoned using list comprehensions, as I didn’t find a straightforward way to programmatically describe an arbitrarily dimensioned solution space. E.g., instead of explicitly writing

(
a = all {: [u, v, w, x, y, z],
	u <- (1..7),
	v <- (1..u),
	w <- (1..v),
	x <- (1..w),
	y <- (1..x),
	z <- (1..y),
	(u + v + w + x + y + z).isPrime
}
)

miSCellaneous_lib includes ‘enum’; there you’d write

// enum gets a function to evaluate at every enumeration step

(
b = 6.enum((1..7), { |x, i, col|
	(x <= col[i-1]) and: { (i == 5).if { (col[..4].sum + x).isPrime }{ true } }
})
)

// compare results

(a - b).flat.every(_ == 0) 
// -> true

Yeah, the limitation of the SC generator syntax is that you can’t assign to multiple “vars” in one expression. In normal assignments, or even in Pbinds, you can “vectorize” the left-hand side:

#x, y = [4, 8];
x // -> 4
y // -> 8

Pbind(#[\x, \y], [12, 34]).iter.next(())
// -> ( 'y': 34, 'x': 12 )

Initially, when I saw that a “paired” assignment like x y <- (3..5) is possible in comprehensions, I thought it was doing that, but the 2nd variable there is just the 2nd argument to do, which is usually just a counter, unless you pass a more custom object that implements an unusual do.
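
To actually get key-value pairs in that position, one would need a small helper class with a pair-wise do, something like this hypothetical PairsDo (not in the classlib; it would have to live in a class-library file):

PairsDo {
	var <>pairs; // an array of associations
	*new { |pairs| ^super.new.pairs_(pairs) }
	// pass (key, value) to the function instead of (item, index)
	do { |function|
		pairs.do { |assoc| function.value(assoc.key, assoc.value) }
	}
}

// then, hypothetically:
{: [x, y], x y <- PairsDo([\foo -> 1, \bar -> 5]) }.all
// should give [ [ foo, 1 ], [ bar, 5 ] ]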

One can of course assign to an array as one variable even in comprehensions, e.g.

{: c, c <- (1..4).dup(3).allTuples, c.testSomething }.all

but then it’s not really space-efficient anymore, as the whole tuple array gets generated, which could be problematic for large combinatorial spaces.
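
The storage cost is easy to see here:

(1..4).dup(3).allTuples.size // -> 64 tuples, all stored up front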

There’s alas no built-in tuple generator (i.e. a routine) in the classlib (only the fully stored array from allTuples). Somewhat obscurely, one can write

Pbinop(\pair, (:1..3), (:1..4), 'x').iter.nextN(5)
// -> [ [ 1, 1 ], [ 1, 2 ], [ 1, 3 ], [ 1, 4 ], [ 2, 1 ] ]

But that only works in two dimensions… at least directly like that. You can “add more dimensions” but it’s a bit tricky with binops

r = Pbinop(\pair, (:1..2), (:3..4), 'x')
Pbinop('++', r, (:5..6), 'x').iter.nextN(5)
// -> [ [ 1, 3, 5 ], [ 1, 3, 6 ], [ 1, 4, 5 ], [ 1, 4, 6 ], [ 2, 3, 5 ] ]

Although ++ is a “real” (infix) binary operator, you cannot actually shorten the latter line to

(r ++.x (:5..6)).iter.nextN(5)
// -> [ [ 1, 3 ], [ 1, 4 ], [ 2, 3 ], [ 2, 4 ], 5 ]

because that applies ++ to the whole first Stream (r) instead of its elements.

I have a new toy…

(~allTupr = { |a, b ...ca|
	var p = Pbinop(\pair, a, b, \x);
	// append the remaining dimensions in the given order
	// (popping off the end would reverse the tail for 4+ dimensions)
	ca.do { |c| p = Pbinop('++', p, c, \x) };
	p.iter })

~allTupr.((:1..2), (:3..4), (:5..6)).nextN(5)
// -> [ [ 1, 3, 5 ], [ 1, 3, 6 ], [ 1, 4, 5 ], [ 1, 4, 6 ], [ 2, 3, 5 ] ]
// same as
3.enum([(1..2), (3..4), (5..6)], type: 1)[..4]

But my allTupr takes streams/routines as inputs and outputs a stream too.

The funny thing is that, since duplicating a stream is a no-op, one has to be careful how to write a true stream dup… by embedding it into a function and exploiting the fact that dup(n=2) on a function applies it n times.

~allTupr.(*(:1..2).dup).all
 // -> [ [ 1, 2 ] ]

~allTupr.(*{(:1..2)}.dup).all
// -> [ [ 1, 1 ], [ 1, 2 ], [ 2, 1 ], [ 2, 2 ] ]

So to do…

(
b = 6.enum((1..7), { |x, i, col|
	(x <= col[i-1]) and: { (i == 5).if { (col[..4].sum + x).isPrime }{ true } }
})
)

in this streamy style:

(
~isRevSorted = {|aa| var b = true;
    aa.doAdjacentPairs {|x, y| b = b && (x >= y)}; b};

r = ~allTupr.(*{(:1..7)}.dup(6)).select
    { |aa| ~isRevSorted.(aa) && aa.sum.isPrime }
)

c = r.all // works but very slow
c == b // true though

The backtracking order as I do it above is obviously inefficient, since my tests don’t reject partially built sequences that aren’t adequately sorted. There’s probably a lot of overhead from all those stream/routine context switches too… So here’s the (somewhat) smarter-bear solution, with a filter at every step:

(~backTupr = { |f, a, b ...ca|
	var p = Pbinop(\pair, a, b, \x) select: f;
	ca.do { |c| p = Pbinop('++', p, c, \x) select: f }; // filter at every step
	p.iter })

r = ~backTupr.(~isRevSorted, (:1..5), (:4..6))
r.all // -> [ [ 4, 4 ], [ 5, 4 ], [ 5, 5 ] ]

(
r = ~backTupr.(~isRevSorted, *{(:1..7)}.dup(6)).select
    { |aa| aa.sum.isPrime }
)

d = r.all // somewhat faster
d == b // still ok

Still what this backTupr has shown me is that streams have pretty substantial overhead, even when the same algorithm is “streamified”.

I’m thinking now that I could have a version of my Pforp that allows the inner stream to skip a value too somehow, i.e. not output something and just e.g. \continue (maybe via a thrown exception). That would make an extra select filter unnecessary, at least syntax-wise… although Stream.select is implemented as a FuncStream, so it’s pretty cheap anyway. At least my Pforp turned out to have a somewhat pleasant syntax when used with (: streams, e.g.

Pforp((:1..5), (:4..6), [_, _]).iter.select(_[0] >= _[1]).all
// also as below, but the latter generates a Pselect
Pforp((:1..5), (:4..6), [_, _]).select(_[0] >= _[1]).iter.all

Actually, I can probably steal the idea from {; and write a Pforpr that doesn’t yield by itself, so then any “continue” is simply a matter of not yielding in the function (although this could lead to Pn-style deadlocks).

There’s actually a way to “have your cake and eat it too”. It turns out that .p applied to a routine repackages the routine’s function as Prout. (.p is more commonly used to create Prouts straight from functions.) So

(:1..7).p
//-> a Prout
(:1..7).p.iter.all
// -> [ 1, 2, 3, 4, 5, 6, 7 ]

q = Pbind(\foo, (:1..3))
q.iter.next(()) // -> ( 'foo': 1 )
q.iter.next(()) // -> ( 'foo': 2 )
// but
q = Pbind(\foo, (:1..3).p)
q.iter.next(()) // -> ( 'foo': 1 )
q.iter.next(()) // -> ( 'foo': 1 )

I.e. with the .p added you get Pseq-like instantiation & reset behavior while still not storing the array.
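
Presumably the same trick applies to routines made by comprehensions (a sketch following that logic; I haven’t seen this in the docs):

// instead of Pseq({: /*stuff*/ }.all) or Plazy-wrapping:
q = Pbind(\foo, {: x, x <- (5..19), x.isPrime }.p);
q.iter.next(()) // -> ( 'foo': 5 )
q.iter.next(()) // -> ( 'foo': 5 ), fresh stream each time, no stored array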

Of course you could use Pseries, but that’s a little more verbose to type. And Pseries for steps other than 1 requires a bit more “mental” arithmetic because the 3rd argument for Pseries is the number of items emitted, not the endpoint of the series.

(9..16) do: { |x| (:3, 3+4 .. x).p.iter.all.postln }
//vs
(9..16) do: { |x| Pseries(3, 4, x - 3 div: 4 + 1).iter.all.postln }

Also, my (2nd) backtracking example needs to use a simpler order check; a full sub-array check is unnecessary. But that turns out to not help much; it seems there’s still a lot of overhead from all those streams being created. I guess I could write a Pforpa, which would take in a varargs pattern array and produce just one stream…
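
For reference, here’s a rough sketch of that single-stream idea as a plain environment function rather than a pattern class (hypothetical ~tupr, not the Pforpa above; it takes stream factories because, as noted earlier, duplicating a stream is a no-op):

(~tupr = { |makers, filter|
	r {
		var recurse;
		recurse = { |i, acc|
			// a fresh stream for this dimension on every (re)visit
			makers[i].value.do { |x|
				var acc2 = acc ++ [x];
				// optional per-step filter rejects partial tuples early
				if (filter.isNil or: { filter.(acc2) }) {
					if (i == (makers.size - 1))
						{ acc2.yield }
						{ recurse.(i + 1, acc2) }
				}
			}
		};
		recurse.(0, []);
	}
})

~tupr.([{(:1..2)}, {(:3..4)}]).all
// -> [ [ 1, 3 ], [ 1, 4 ], [ 2, 3 ], [ 2, 4 ] ]

The point of the filter argument is the same early rejection that ~backTupr gets via select at each Pbinop layer, but with only one Routine in play.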

({
a = all {: [u, v, w, x, y, z],
	u <- (1..7),
	v <- (1..u),
	w <- (1..v),
	x <- (1..w),
	y <- (1..x),
	z <- (1..y),
	(u + v + w + x + y + z).isPrime
}
}.bench)

// COLD: time to run: 0.0086135090095922 seconds.
//  HOT: time to run: 0.0096485679969192 seconds.
// HOT2: time to run: 0.0058308919833507 seconds.
({
b = 6.enum((1..7), { |x, i, col|
	(x <= col[i-1]) and: { (i == 5).if { (col[..4].sum + x).isPrime }{ true } }
})
}.bench)

// COLD: time to run: 0.020542001002468 seconds.
//  HOT: time to run: 0.014371441007825 seconds.
// HOT2: time to run: 0.013709213002585 seconds.
~isRevSorted2 = { |aa| (aa @@ -1) <= (aa @@ -2) } // only check the last pair (@@ is wrapAt)
q = ~backTupr.(~isRevSorted2, *{(:1..7)}.dup(6)).select { |aa| aa.sum.isPrime }
{q.reset; c = q.all}.bench

// COLD: time to run: 0.12865246800357 seconds.
//  HOT: time to run: 0.11591687399778 seconds.
// HOT2: time to run: 0.10469843598548 seconds.

c == b // true, just checking

Interestingly enough, Haskell (besides the usual and “parallel”, i.e. zipped/interleaved, comprehensions) has a third kind: “SQL-like” comprehensions for records… which would be an interesting idea to explore in SC, although they are probably fairly redundant given Pbind and the FilterPatterns. On the other hand, that Haskell feature would be an interesting basis for Haskell-based patterns for their interface to the SC server (which is quite basic at the moment). But those “SQL comprehensions” are geared toward processing an array of records, i.e. a database-like thing, rather than a stream of events/records. Although with Haskell’s lazy evaluation there’s probably not much of a difference over there… though trying to do some of the “SQL” ops on infinite lists would probably blow things up.

The most basic feature of that is a selector like

(name, dept, salary) <- employees

which extracts fields. A more SC-like example would be, e.g.

p = Pbind(\dur, 0.2, \degree, (:1..10).p)
// "future syntax" below, inspired from the Haskell one
q {:(dur: indur*2, degree: indeg div: 2), (dur: indur, degree: indeg) <- p.iter }.p

But there are some obvious usability issues with just this: you go from Pattern to Stream and then back… and at least for that basic processing there is

Pbind(\dur, 2 * Pkey(\dur) /*...*/)

or even a

Pfunc { |ev| ev[\dur] = 2 * ev[\dur]; /*...*/ ev }

Also, unlike in Haskell we couldn’t just write

q = {:(indur*2, indeg div: 2), (indur, indeg) <- p.iter }.p

because the IdentityDictionary that Event is based on doesn’t guarantee “column order”. So we’d need to have field tags, which makes this stuff verbose… or convert to and from tuples, e.g. with some kind of key-array order “on the side”, which would probably be tedious usability-wise and slow performance-wise (although Events do eventually get converted to bundle arrays anyway).

The more SQL-ish stuff that allows grouping etc. in the Haskell SQL-like comprehensions does not seem terribly useful for stream processing, but might be so if, e.g., you have “full notes” with duration and degree stored as events in an array and you want to process those in a more complicated way than one-by-one, though I don’t have a clear musical application in mind. (I think Panola has something like that with their transformers on arrays, which I saw in an example, but they store the notes as strings, which is a bit iffy from my perspective.)
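
For what it’s worth, the group-by part is already expressible with plain classlib methods (a sketch; separate splits between adjacent elements, so this assumes the array is pre-sorted by the grouping key):

e = [ (degree: 0, dur: 0.5), (degree: 0, dur: 0.25), (degree: 2, dur: 1) ];
g = e.separate { |a, b| a[\degree] != b[\degree] }; // group adjacent equal degrees
g.collect { |grp| (degree: grp[0][\degree], totalDur: grp.sum { |ev| ev[\dur] }) }
// -> two summary events: degree 0 / totalDur 0.75 and degree 2 / totalDur 1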

@dkmayer You might be interested in this little experiment. I’ve turned enum into a Routine as enumr, as a quick hack. The good news is that a single Routine layer adds some 1.5X overhead, but not the ~10X I was seeing with my nested routines/patterns (backTupr above).

`enumr` source
+Integer {
	enumr { |pool, function = true, evalAtZero = false, type = 0, order = true, maxNum = inf|
		// type 0: one array for all levels
		// type 1: array of pools (size must equal receiver)

		var /*allCols,*/ currentCol, currentIndex, indexCol,
			endOfEnum, currentPool, check, item, count;

		^r { // excuse the lack of indentation
		currentIndex = 0;
		endOfEnum = false;
		count = 0;
		order.not.if {
			pool = (type == 0).if { pool.scramble }{ pool.collect(_.scramble) };
		};
		indexCol = -1!this;
		currentCol = 0!this;
		while { endOfEnum.not }{ // old r { here had reset bug
			indexCol[currentIndex] = indexCol[currentIndex] + 1;
			currentPool = (type == 0).if { pool }{ pool[currentIndex] };
			(indexCol[currentIndex] >= (currentPool.size)).if {
				indexCol[currentIndex] = -1;
				currentIndex = currentIndex - 1;
			}{
				item = currentPool.at(indexCol[currentIndex]);

				((currentIndex == 0) && evalAtZero.not).if {
					true
				}{
					function.(item, currentIndex, currentCol, indexCol)
				}.if {
					currentCol[currentIndex] = item;
					(currentIndex == (this - 1)).if {
						//allCols = allCols.add(currentCol.deepCopy);
						yield(currentCol.copy);
						count = count + 1;
						(count == maxNum).if { endOfEnum = true };
					}{
						currentIndex = currentIndex + 1
					}
				}
			};
			(currentIndex == -1).if { endOfEnum = true };
		}};
		//^allCols;
	}
}
// standard enum first
(3 do: { bench { b = 6.enum((1..7), { |x, i, col|
	(x <= col[i-1]) and: { (i == 5).if { (col[..4].sum + x).isPrime }{ true } }
})}})
// time to run: 0.016463552994537 seconds.
// time to run: 0.010260178001772 seconds.
// time to run: 0.010137996003323 seconds.

// Routine version
(
r = 6.enumr((1..7), { |x, i, col|
	(x <= col[i-1]) and: { (i == 5).if { (col[..4].sum + x).isPrime }{ true } }
});

3 do: { bench {r.reset; c = r.all} }
)
// time to run: 0.02330367200193 seconds.
// time to run: 0.014323964998766 seconds.
// time to run: 0.013121689997206 seconds.

c == b // ok

That can be useful indeed. Just paraphrasing now, as I’m between some online courses and talks: I assume especially in combination with the ‘order’ flag set to false, plus resetting.
Enumerating all versions of whatever is, done directly in music, not super exciting (a composition teacher once said: completeness is an aesthetically irrelevant category; I tend to agree…). So snooping into certain parts of solution spaces might be a better strategy.
Actually, enum grew out of similar stuff I did within an artistic research project. I didn’t personally use it for musical applications. However there might be nice ones, e.g. the 2nd help file example, after an idea of Fabrice Mogini: finding all melodies of a certain melodic shape. This is an option space; there’s no need to take all of them.

Well, the boolean function that enumr calls could randomly return false to arbitrarily exclude some branches. And since it has access to the “depth level” (currentIndex), it could even do that in a way that e.g. increases the random rejection probability with depth, so as not to produce too “boring” results by rejecting an entire big subtree too easily early on.
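
For instance, with rejection probability growing with depth (a sketch assuming the enumr extension above; coin returns true with the given probability):

// acceptance shrinks with depth i, so big subtrees near the root
// are rarely thrown away wholesale
p = 4.enumr([1, 2, 3], { |x, i| (1 - (i * 0.15)).coin }).p;
p.iter.all;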

p = 3.enumr([1,2], {[true, false].choose}).p
p.iter.all
// -> [ [ 1, 1, 2 ], [ 1, 2, 1 ], [ 1, 2, 2 ], [ 2, 1, 1 ], [ 2, 2, 2 ] ]
p.iter.all
// -> [ [ 2, 2, 2 ] ]
p.iter.all
// -> nil
p.iter.all
// -> [ [ 2, 1, 1 ], [ 2, 1, 2 ], [ 2, 2, 1 ], [ 2, 2, 2 ] ]

p = 3.enumr([1,2], { |x,i,col| if (i < 2) {true} {[true, false].choose} }).p
p.iter.all
// -> [ [ 2, 1, 2 ], [ 2, 2, 2 ] ]
p.iter.all
// -> [ [ 1, 1, 1 ], [ 1, 1, 2 ], [ 1, 2, 2 ], [ 2, 2, 1 ] ]
p.iter.all
// -> [ [ 1, 1, 2 ], [ 2, 1, 1 ] ]
p.iter.all
// -> [ [ 1, 1, 1 ], [ 1, 1, 2 ], [ 1, 2, 2 ], [ 2, 1, 1 ], [ 2, 1, 2 ], [ 2, 2, 2 ] ]
p.iter.all
// -> [ [ 1, 2, 2 ] ]

The 2nd one is far less likely to give just nil.

(I also saw you’ve implemented some sieves in miSC; I’m not sure whether this simple idea isn’t entirely a subset of some sieve.)

Yes, I see now, I had different start orders in mind, but that’s also a possibility.

I haven’t thought about the relations between trees and sieves either. But as you brought up this idea of a lazy enumeration (enumr) variant: for sieves I did something like that via a twofold implementation, the Sieve class and the Psieve patterns, where you can dynamically change params. I thought that this could be interesting, but I didn’t experiment much with it.
Doing something similar with enumeration could definitely make sense (either via a Routine or a Pattern, which could basically do the same via embedInStream, as Psieve).