Pforp, Pforai (pattern) extensions on BinaryOpXStream; and a flatMap experiment

This isn’t a quark yet; it’s more about getting feedback on some (fairly concrete) ideas. For more background & motivation you could read the discussion about using the plain old BinaryOpXStream, which is what you get when you use the .x adverb on an operator on streams. That thing has some limitations:

1.) Only operators/methods are supported (duh), not arbitrary (binary) functions. Although you can of course define as many operators as you like in SC, even at runtime, e.g.

('<>': { "hi".postln }) <> () // <> not defined in Event by default

And as far as streams/patterns go, you can ‘x’-adverb any method taking one argument (i.e. treat it as a binary operator), or even a method that takes more than one argument, but for those you get the default values for the rest, as for blend below (blendFrac is 0.5 by default):

Pbinop(\blend, Pseries(0, 10), Pseq([10, 100]), 'x').iter.nextN(8)
// -> [ 5.0, 50.0, 10.0, 55.0, 15.0, 60.0, 20.0, 65.0 ]

But it gets a bit tedious to have to define operators (or even pseudo-methods in events) just so you can use that .x adverb streamified, even though it’s a fun hack, e.g.

(Pbinop(\bhack,
	Pbind(*[
		bhack: { |self, other| other + rrand(1, self.numbr) },
		numbr: Pseries(1, 2),
	]),
	Pseq([10, 100]),
	'x').iter.nextN(8, ()))
// (e.g.) -> [ 11, 101, 11, 103, 15, 102, 16, 107 ]

2.) It’s not easy to get “variable row length” support. Recall that you can make Pseq (unlike quite a few other patterns) “continue” past nils, e.g.

Pseq([1, 2, nil, 3, 4]).iter.all // -> [ 1, 2 ]
// but...
Pseq([1, 2, nil, 3, 4]).iter.nextN(6)
// -> [ 1, 2, nil, 3, 4, nil ]

So you may want/like to use that trick (useful with patterns ending in “p”, e.g. Psetp and friends) but alas that won’t work with BinaryOpXStream:

(Pseries(10, 10) +.x Pseq([1, 2, nil, 3, 4])).iter.nextN(6)
// -> [ 11, 12, 21, 22, 31, 32 ]

3.) The fact that e.g. +.x is a non-commutative operator even on arrays (and even more so on streams) can be confusing. (The outer loop is the left-hand side argument, by the way.) So maybe having (better-)named arguments, like a Pattern usually has, is a usability improvement too.
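
A quick illustration of that non-commutativity (nothing new here, just stock .x behavior):

// swapping the operands changes the interleaving, on arrays...
[10, 20] +.x [1, 2, 3]   // -> [ 11, 12, 13, 21, 22, 23 ]
[1, 2, 3] +.x [10, 20]   // -> [ 11, 21, 12, 22, 13, 23 ]
// ...and likewise on streams, where the left-hand side drives the outer loop
(Pseq([10, 20]) +.x Pseq([1, 2, 3])).iter.all  // -> [ 11, 12, 13, 21, 22, 23 ]
(Pseq([1, 2, 3]) +.x Pseq([10, 20])).iter.all  // -> [ 11, 21, 12, 22, 13, 23 ]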

Soo… I actually have 2.1 drafts for how to make a better mousetrap in this area.

Pforp

The first idea is a rather straightforward lifting of BinaryOpXStream to a Pattern class that just fixes the above shortcomings… but alas it turned out to have somewhat odd semantics in the more “advanced” use cases I came up with. Some usage examples first (the class code is included in a collapsible box below).

By default Pforp works like BinaryOpXStream

Pforp(Pseries(10, 10), Pseq([1, 2, nil, 3, 4]), (_+_)).iter.nextN(6)
// -> [ 11, 12, 21, 22, 31, 32 ]

except you pass a function instead of an operator (as the 3rd arg). This function receives an item from the first stream and needs to combine/merge it with an item from the 2nd (inner) stream. (This function is basically what the innermost code block in a pair of nested for-loops would be in charge of doing in a typical piece of imperative code, hence the Pforp name of the pattern.)
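
For comparison, here’s roughly the imperative shape being mimicked (just a plain-SC sketch, not part of the class):

(
var result = [];
[10, 20, 30].do { |outerVal|          // outer loop: one value per "row"
	[1, 2].do { |innerVal|            // inner loop over a row
		// this innermost line is what the merge function is in charge of
		result = result.add(outerVal + innerVal);
	};
};
result.postln;                        // -> [ 11, 12, 21, 22, 31, 32 ]
)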

A first minor improvement (over BinaryOpXStream), which is “on by default”, is that the input stream value is also passed as the 3rd argument to the merge/loop function, so you can write for example

(Pforp(Pseries(10, 10), Pseq([1, 2, nil, 3, 4]),
       {|...aa| aa.sum}).iter.nextN(6, 100))
// -> [ 111, 112, 121, 122, 131, 132 ]

The 4th arg to Pforp is where it gets more interesting; this is nilsToReset (i.e. how many nils from the inner stream it takes to reset it), and the default value is 1. However, if you change that to 2 it starts to “see” past the first nil, but every nil is still used to pull a (new) value from the outer stream/pattern…

// recall the 1st example
Pforp(Pseries(10, 10), Pseq([1, 2, nil, 3, 4]), (_+_)).iter.nextN(6)
// -> [ 11, 12, 21, 22, 31, 32 ]
// Now
Pforp(Pseries(10, 10), Pseq([1, 2, nil, 3, 4]), (_+_), 2).iter.nextN(6)
// -> [ 11, 12, 23, 24, 41, 42 ]

So the “middle” nil in the Pseq no longer resets the stream, so the final two values in the Pseq’s array are used for creating the “2nd row” with the 2nd item pulled from the outer stream (producing [23, 24] vs. [21, 22] in the default nilsToReset=1 example). These “rows” (sub-sequences) in the Pseq, separated by nils, can obviously be of different lengths.

The somewhat odd part, however, is that a finite Pseq acts as if it has a tail of infinitely many nils at the end, so the first two of those nils act to reset the inner stream… but the fact that two are taken into account also means that the outer stream gets advanced twice on the “array end” (now, i.e. with nilsToReset=2). That’s why there is no 30-something value in the output. It’s actually possible to work around this limitation in Pforp by using an infinite stream for the inner sequence too, but still with explicit nils in it to trigger “row advances” in the outer pattern:

// So instead of
Pforp(Pseries(10, 10), Pseq([1, 2, nil, 3, 4]), (_+_), 2).iter.nextN(12)
// -> [ 11, 12, 23, 24, 41, 42, 53, 54, 71, 72, 83, 84 ]
// We can do
Pforp(Pseries(10, 10), Pseq([1, 2, nil, 3, 4, nil], inf), (_+_), 2).iter.nextN(12)
// -> [ 11, 12, 23, 24, 31, 32, 43, 44, 51, 52, 63, 64 ]

There are no unintended skips of the outer pattern now, and we’re alternating between the (nil-delimited) sub-sequences of the inner pattern. This approach may be good enough for “intermediate” use cases.

But let’s say, as “advanced” usage, we also want to be able to skip “rows” (outer stream values) without resetting the inner pattern’s stream.

// this won't work as intended
Pforp(Pseries(10, 10), Pseq([1, 2, nil, nil, 3, 4, nil], inf), (_+_), 2).iter.nextN(12)
// -> [ 11, 12, 31, 32, 51, 52, 71, 72, 91, 92, 111, 112 ]

Now we’re “back to square one”, as two (explicit, this time) nils in the Pseq cause an inner stream reset (besides skipping a “row” from the outer stream). So, let’s set nilsToReset=3 and see what happens:

Pforp(Pseries(10, 10), Pseq([1, 2, nil, nil, 3, 4, nil], inf), (_+_), 3).iter.nextN(12)
// -> [ 11, 12, 33, 34, 41, 42, 63, 64, 71, 72, 93, 94 ]

We managed to skip the 20s as intended, and the 30s are now “paired” with the right values (3 and 4).

So Pforp is reasonably usable, but it can be a bit hairy to reason about, and having to use an infinite Pseq (with explicit nils) to get the intended results in the “skippy” use cases is a bit non-intuitive as well.

`Pforp` implementation/code
Pforp : Pattern {
	var <>outerPattern, <>innerPattern, <>mergeFunc, <>nilsToReset = 1;

	*new { arg outerPattern, innerPattern, itemCombineFunc, nilsToReset = 1;
		^super.newCopyArgs(outerPattern, innerPattern, itemCombineFunc, nilsToReset)
	}

	embedInStream {  arg inval;
		var outerStr = outerPattern.asStream, innerStr = innerPattern.asStream;
		// could treat mergeFunc as a pattern too, but it's not clear if that helps much
		// since functions can pull from streams "on their own" anyway...
		var outerVal = outerStr.next(inval), innerVal;
		var nilCount = 0, mustReset;

		if (outerVal.isNil) { ^nil; };
		loop {
			innerVal = innerStr.next(inval);
			if (innerVal.isNil) {
				while {
					nilCount = nilCount + 1;
					mustReset = (nilCount >= nilsToReset);
					// always advance outer on a nil
					outerVal = outerStr.next(inval);
					if (outerVal.isNil) { ^nil };
					if (mustReset) { innerStr.reset };
					innerVal = innerStr.next(inval);
					innerVal.isNil && (mustReset.not) // repeat condition
				};
				// PforEach(1, nil, {}) would hang without next check
				if (innerVal.isNil) { ^nil } { nilCount = 0 };
			};
			// Copies prevent the function from changing the args "too much".
			// Well, to some extent, it's not a deep copy.
			inval = yield(mergeFunc.value(outerVal.copy, innerVal.copy, inval.copy));
		};
	}
	// TODO: handle cleanups (for event streams)
	// todo: storeOn
}

As a minor improvement, we can make Pforp “steal a nil” for the purpose of letting us use a finite Pseq “most of the time”. (I’m changing the “running example” to something a bit simpler now.) Basically, we’d like to write the 2nd line below, but get results like for the first.

Pforp(Pseries(), Pseq([0, 10, nil, 100, nil], inf), (_+_), 2).iter.nextN(4)
// -> [ 0, 10, 101, 2 ]
// But:
Pforp(Pseries(), Pseq([0, 10, nil, 100]), (_+_), 2).iter.nextN(4)
// -> [ 0, 10, 101, 3 ]

There’s a one-line hack in Pforp that will give us this. Instead of unconditionally pulling an item from the outer stream on every nil from the inner stream, we do it only for nils that aren’t about to reset the inner stream (this is how we “steal” one), although for the base case, where a single nil would reset, we obviously can’t do that:

if(nilsToReset <= 1 || mustReset.not) { outerVal = outerStr.next(inval) };

For clarity I’m calling this modification Pforp2 below.

`Pforp2` implementation/code
Pforp2 : Pattern {
	var <>outerPattern, <>innerPattern, <>mergeFunc, <>nilsToReset = 1;

	*new { arg outerPattern, innerPattern, itemCombineFunc, nilsToReset = 1;
		^super.newCopyArgs(outerPattern, innerPattern, itemCombineFunc, nilsToReset)
	}

	embedInStream {  arg inval;
		var outerStr = outerPattern.asStream, innerStr = innerPattern.asStream;
		var outerVal = outerStr.next(inval), innerVal;
		var nilCount = 0, mustReset;

		if (outerVal.isNil) { ^nil; };
		loop {
			innerVal = innerStr.next(inval);
			if (innerVal.isNil) {
				while {
					nilCount = nilCount + 1;
					mustReset = (nilCount >= nilsToReset);
					// cond to pull: not on the "reset-triggering nil" unless
					// there's no other way to pull from outer
					if(nilsToReset <= 1 || mustReset.not) { outerVal = outerStr.next(inval) };
					if (outerVal.isNil) { ^nil };
					if (mustReset) { innerStr.reset };
					innerVal = innerStr.next(inval);
					innerVal.isNil && (mustReset.not)
				};
				// PforEach(1, nil, {}) would hang without next check
				if (innerVal.isNil) { ^nil } { nilCount = 0 };
			};
			inval = yield(mergeFunc.value(outerVal.copy, innerVal.copy, inval.copy));
		};
	}
	// TODO: handle cleanups (for event streams)
	// todo: storeOn
}

So what this buys us is

Pforp(Pseries(), Pseq([0, 10, nil, 100, nil], inf), (_+_), 2).iter.nextN(4)
// -> [ 0, 10, 101, 2 ]
// But:
Pforp(Pseries(), Pseq([0, 10, nil, 100]), (_+_), 2).iter.nextN(4)
// -> [ 0, 10, 101, 3 ]
Pforp2(Pseries(), Pseq([0, 10, nil, 100]), (_+_), 2).iter.nextN(4)
// -> [ 0, 10, 101, 2 ]

But alas the Pforp approach is still weird in some “more advanced” cases when we want to allow “skips in the middle”:

Pforp2(Pseries(), Pseq([0, 10, nil, nil, 100]), (_+_), 3).iter.nextN(9)
// -> [ 0, 10, 102, 4, 14, 106, 8, 18, 110 ]

The 1 value output from the Pseries is skipped on purpose here in the jump from 10 to 102 (the explicit “double nil” in the Pseq causes that), but we’re also “missing” the 3 output from the Pseries because the end of the Pseq gives 3 nils, and we’re only “eating” one.

As it turned out, things can be made much simpler by distinguishing between the signal that advances the outer stream and the signal that resets the inner stream. In fact, we can make everything simpler by using arrays for the inner stream. Reaching the array end is now the signal to pull another (full) array from the inner stream, and nils embedded in the array(s) still control the outer stream “pulls”. That gets us Pforai; see the next post.

Pforai

Pforai code/implementation
Pforai : Pattern {
	var <>outerPattern, <>arrayPattern, <>itemMergeFunc, <>arrayEndNils = 1;

	*new { arg outerPattern, arrayPattern, itemCombineFunc, arrayEndNils = 1;
		^super.newCopyArgs(outerPattern, arrayPattern, itemCombineFunc, arrayEndNils)
	}

	embedInStream {  arg inval;
		var outerStr = outerPattern.asStream, arrayStr = arrayPattern.asStream;
		var outerVal = outerStr.next(inval), innerVal;
		var nilCount = 0, arrayIdx = 0, array;

		if (outerVal.isNil) { ^nil; };
		loop {
			array = arrayStr.next(inval);
			// Nil test must come before we pad
			// because (nil ++ nil.dup(1)) == ([] ++ nil.dup(1))
			// and we want to treat nil array differently from an empty one here.
			if (array.isNil) { ^nil };
			array = array ++ nil.dup(arrayEndNils);

			array.do { |item, idx|
				if (item.isNil) {
					// always advance outerStr on a nil item in the array
					outerVal = outerStr.next(inval);
					if (outerVal.isNil) { ^nil };
				} { // todo: maybe have flag to pass nils to func; it's nasty for that to the default though
					inval = yield(itemMergeFunc.value(outerVal.copy, item.copy, inval.copy, idx, array.size));
				};
			};
		};
	}
	// TODO: handle cleanups (for event streams)
	// todo: storeOn
}

Basic usage is even simpler than for Pforp because you don’t even have to build a Pseq-style sequence:

Pforp(Pseries(), Pseq([0, 10, nil, 100, nil], inf), (_+_), 2).iter.nextN(9)
// -> [ 0, 10, 101, 2, 12, 103, 4, 14, 105 ]
// Now just:

Pforai(Pseries(), [0, 10, nil, 100], (_+_)).iter.nextN(9)
// -> [ 0, 10, 101, 2, 12, 103, 4, 14, 105 ]

But it accepts a pattern as the 2nd arg, not just a fixed array. Before we get to that kind of usage, some more comparison with the previous solutions. As you can probably tell from the above, the “array end” is (by default) equivalent to one nil, i.e. it will automatically pull (just one) value from the outer pattern (which, by the way, is still a plain pattern). Recall the nasty example we were trying to deal with at the end of the last post:

Pforp2(Pseries(), Pseq([0, 10, nil, nil, 100]), (_+_), 3).iter.nextN(9)
// -> [ 0, 10, 102, 4, 14, 106, 8, 18, 110 ]

// Now:
Pforai(Pseries(), [0, 10, nil, nil, 100], (_+_)).iter.nextN(9) // arr end just advances by 1 now
// -> [ 0, 10, 102, 3, 13, 105, 6, 16, 108 ]

There’s no weird jump at the end of the array now; the 3s (from the outer pattern) are no longer “mysteriously” missing.
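
To make the “array end counts as one nil” point fully explicit, here’s a minimal case with no explicit nils at all (just a small sketch):

Pforai(Pseries(), [0, 10], (_+_)).iter.nextN(6)
// -> [ 0, 10, 1, 11, 2, 12 ]
// each wrap-around of the array pulls exactly one new value from the outer Pseries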

Pforai also takes a numerical 4th argument, but unlike Pforp’s nilsToReset, this one says how many nils the array end translates to. So, for a (pretty extreme) example, you could skip 1000 items from the outer stream when the array ends:

Pforai(Pseries(), [0, 10, nil, nil, 100], (_+_), 1000).iter.nextN(9)
// -> [ 0, 10, 102, 1002, 1012, 1104, 2004, 2014, 2106 ]

And yes, you can set that 4th argument to zero if you don’t want any “row advances” (outer stream pulls) when the array “wraps around”.

Pforai(Pseries(), [0, 10, nil, 100], (_+_), 0).iter.nextN(9)
// -> [ 0, 10, 101, 1, 11, 102, 2, 12, 103 ]

That zero setting will generally make sense only if you’re using some internal nils in the array.

The 2nd argument can actually be a stream/pattern of arrays instead of a fixed one… The following examples are a bit gratuitous (as they could be done with a plain array with nils in the right places), but they show that it works to have Pseqs outputting arrays there. And someone might prefer “sub-arrays” to writing nil as the outer pattern “advance/next command”, but then these have to be in a Pseq.

Actually, the first of the following examples is not entirely gratuitous, because the entire array has to be/become nil for the inner pattern to terminate the Pforai:

Pforai(Pseries(0, 10), Pseq([[1, 2], [5, 6]]), (_+_)).iter.nextN(9)
// -> [ 1, 2, 15, 16, nil, nil, nil, nil, nil ]

Pforai(Pseries(0, 10), Pseq([[1, 2], [5, 6]], inf), (_+_)).iter.nextN(9)
// -> [ 1, 2, 15, 16, 21, 22, 35, 36, 41 ]

You can obviously vary the array size between the “iterations” output by the Pseq.

Pforai(Pseq([10, 100], inf), Pseq([[1, 2, 3], [66]], inf), (_+_)).iter.nextN(9)
// -> [ 11, 12, 13, 166, 11, 12, 13, 166, 11 ]

With a “dummy”/constant outer pattern, this is basically an array-swap-enabled Pseq-alike, plus a Pcollect-style map.

Pforai(100, [1, 2, 3], (_+_)).iter.nextN(9)
// -> [ 101, 102, 103, 101, 102, 103, 101, 102, 103 ]

If you dynamically swap the array, e.g. via a Pfunc, you get Pn+Plazy-style behavior: the previous array plays in full before the swap becomes effective:

~myarr = [1, 2];
r = Pforai(100, Pfunc {~myarr}, (_+_)).iter
r.nextN(5) // -> [ 101, 102, 101, 102, 101 ]
// swap array now;
~myarr = [5, 6];
r.nextN(5) // -> [ 102, 105, 106, 105, 106 ]

And a fairly minimalistic example of using Pforai with events as items:

// using with event patterns as items (Event.next does .copy.putAll via .composeEvents)
Pforai((dur: 0.2), [(degree: 1), (note: 10)], (_.next(_))).iter.nextN(3, ())
// -> [ ( 'degree': 1, 'dur': 0.2 ), ( 'note': 10, 'dur': 0.2 ), ( 'degree': 1, 'dur': 0.2 ) ]

// sound (blending) example; array used for the blendFrac with a nil "pulling" 
// the Rest() from the outer sequence, which is then "matched" with the 0-blendFrac.
(
~bf = {|self, other, blendFrac=0.5|
       self.putAll(self.blend(other, blendFrac, false)) };
Pforai(Pbind(\dur, Pseq([0.5, Rest(1.4)], 3), \degree, Pseries(1, 5)),
	0.1*(1..9) ++ [nil, 0],
	~bf.(_, (dur: 0.1, degree: 8), _)).trace.play;
)

N.B. the final two letters in the Pforai name stand for “array item”, not something pretentious. I was considering (also) doing a variant that passes the entire array to the merging function, but so far I’m happy enough with Pforai.

Here’s a bit of an expressiveness/conciseness shootout between various solutions for this “for-each-ing” problem. First, the original example of @dkmayer (from the other thread). This actually looks fairly easy in any approach:

// All four give the same output.
// The first two take a "finite" Pseq "on the right".
Pforp(Pseq((60..65), inf), Pseq([0, -3, 3]), _+_).iter.nextN(20)
(Pseq((60..65), inf) +.x Pseq([0, -3, 3])).iter.nextN(20)
// And these next two take an array.
Pforai(Pseq((60..65), inf), [0, -3, 3], _+_).iter.nextN(20)
(Pseq((60..65), inf) + [0, -3, 3]).flatten.iter.nextN(20)

The last solution (addition before flatten) is commutative for argument order, the other three are not. Also, .flatten translates to Pflatten there.

The next set of examples varies the row length. For the sake of keeping these examples reasonably compact, they’re not musical in terms of the sequence produced (you’d probably want the 2nd “row” to be 3-sized too, e.g. [0, -3, 3, nil, 0, -6, 6] or some such, but I’m using a single large negative value for the sake of easily spotting it in the output when comparing and debugging these examples).

// The "challenger" is now:
Pforai(Pseq((60..65), inf), [0, -3, 3, nil, -100], _+_).iter.nextN(16)
// -> [ 60, 57, 63, -39, 62, 59, 65, -37, 64, 61, 67, -35, 60, 57, 63, -39 ]

// Fairly easy to do with (P)flatten still:
(Pseq((60..65), inf) + Pseq([[0, -3, 3], [-100]], inf)).flatten.iter.nextN(16)

// Pforp solution(s)
Pforp(Pseq((60..65), inf), Pseq([0, -3, 3, nil, -100, nil], inf), _+_, 2).iter.nextN(16)
Pforp2(Pseq((60..65), inf), Pseq([0, -3, 3, nil, -100]), _+_, 2).iter.nextN(16)

// By far the most hairy to figure out is the +.x (on the pattern):
((Pseq((60..65), inf) +.x value {var i = -1; Plazy {
	[Pseq([0, -3, 3]), Pseq([-100])] @@ (i = i + 1)}}).iter.nextN(16))

For an explanation of how that last monster works, see this related thread/post. This is what I was complaining about in motivation item “2.)” in my first post in this thread.

Third shootout example, “full row skip” (or perhaps better called “empty rows”):

// Recall that
Pforai(Pseq((60..65), inf), [0, -3, 3], _+_).iter.nextN(20)
// -> [ 60, 57, 63, 61, 58, 64, 62, 59, 65, 63, 60, 66, 64, 61, 67, 65, 62, 68, 60, 57 ]
// whereas
Pforai(Pseq((60..65), inf), [0, -3, 3, nil], _+_).iter.nextN(20)
// -> [ 60, 57, 63, 62, 59, 65, 64, 61, 67, 60, 57, 63, 62, 59, 65, 64, 61, 67, 60, 57 ]
// Equivalent to the previous one
Pforai(Pseq((60..65), inf), [0, -3, 3], _+_, 2).iter.nextN(20)
// We want the 2nd/3rd ("skippy") one(s) done "the other ways" now

(Pseq((60..65), inf) + Pseq([[0, -3, 3], []], inf)).flatten.iter.nextN(20)
// ^^ Works because 61 + [] = [] 

Pforp(Pseq((60..65), inf), Pseq([0, -3, 3, nil, nil], inf), _+_, 3).iter.nextN(20)
Pforp2(Pseq((60..65), inf), Pseq([0, -3, 3]), _+_, 3).iter.nextN(20)

// I'm not bothering with the +.x anymore here

The “array-based” solutions are clearly easier to reason about, and not having to worry about tweaking another parameter (“nilsToReset”), which the Pforp variants do need changed, is a boon here. Just plugging in an empty array (in the Pflatten solution) or a nil into Pforai’s array feels a bit more intuitive to me.

Finally, the Pflatten solution can be used with functions rather than operators with a bit of (Pcollect) helper glue… which actually ends up looking quite a bit like Pforai when used (though it’s of course rather pointless to use it with an operator merely turned into a function):

~flatMapP2 = {|p1, p2, f| Ptuple([p1, p2], inf).collect(f.(*_)).flatten};

~flatMapP2.(Pseq((60..65), inf), [0, -3, 3], _+_).iter.nextN(20);
// same as
Pforai(Pseq((60..65), inf), [0, -3, 3], _+_).iter.nextN(20);

// But there are obviously syntax differences for "skippy stuff"
Pforai(Pseq((60..65), inf), [0, -3, 3, nil], _+_).iter.nextN(20)
// needs to be
~flatMapP2.(Pseq((60..65), inf), Pseq([[0, -3, 3], []], inf), _+_).iter.nextN(20);

And an n-pattern version is also straightforward (with varargs), although the function had better come first then:

~flatMap = {|f ...pats| Ptuple(pats, inf).collect(f.(*_)).flatten};
~flatMap.(_+_, Pseq((60..65), inf), [0, -3, 3]).iter.nextN(20);
~flatMap.({|...aa| aa.sum}, Pseq((60..65), inf), [0, -3, 3], -20).iter.nextN(20);

Using that ~flatMap for (entire) event processing turned out less straightforward than I thought it would be though.

Recall our earlier Pforai sound example

(
~bf = {|self, other, blendFrac=0.5|
       self.putAll(self.blend(other, blendFrac, false)) };
Pforai(Pbind(\dur, Pseq([0.5, Rest(1.4)], 3), \degree, Pseries(1, 5)),
	0.1*(1..9) ++ [nil, 0],
	~bf.(_, (dur: 0.1, degree: 8), _)).trace.play;
)

A first attempt to translate this to ~flatMap turned into “one big chord” (instead of the sequence) when played, though…

(~flatMap.(~bf, // ~bf same from above
	Pbind(\dur, Pseq([0.5, Rest(1.4)], 3), \degree, Pseries(1, 5)),
	(dur: 0.1, degree: 8),
	Pseq([0.1*(1..9), [0]], inf)).trace.play) // one big chord (ouch), and:

// ERROR: Primitive '_Event_Delta' failed.
// Wrong type. RECEIVER: ( 'instrument': default, 'dur': [ 0.46, 0.42, 0.38, 0.34, 0.3, 0.26, 0.22, 0.18, 0.14 ]', ....

On the plus side, the fact that you can pass as many streams/patterns as you want to flatMap (even though one is constant in the example here) makes it a bit more “natural” than having to stick one of the streams in the function, as I did in the Pforai example further above.

Since it looks somewhat promising, let’s try to “deChord” that with a helper function before playing it.

d = (foo: [1, 2], bar: [10, 20]);
d.asPairs.flop; // -> [ [ foo, 1, bar, 10 ], [ foo, 2, bar, 20 ] ]
d.asPairs.flop.collect(_.asEvent);
// -> [ ( 'bar': 10, 'foo': 1 ), ( 'bar': 20, 'foo': 2 ) ]. So...

~eventDechord = { |ev| ev.asPairs.flop.collect(_.asEvent) };

(p = ~flatMap.(~eventDechord <> ~bf,
	Pbind(\dur, Pseq([0.5, Rest(1.4)], 3), \degree, Pseries(1, 5)),
	(dur: 0.1, degree: 8),
	Pseq([0.1*(1..9), [0]], inf)));

Pfin(20, p.trace).play;

// plays "half of it", but barfs on the weird Rest:
//^^ The preceding error dump is for ERROR: Primitive '_Event_Delta' failed.
// Wrong type. RECEIVER: ( 'degree': 6, 'dur': Rest([ 1.4 ]), 'server': localhost )

To fix that too: the issue is that array-ed operations have some fairly strange effects on Rests: they turn into Rests holding array values, which SC doesn’t actually know how to delta on…

blend(Rest(1), 77, [0]) // -> Rest([ 1 ])

d = ~bf.((dur: Rest(1.4), degree: 1), (dur: 0.1, degree: 8), [0])
// -> ( 'degree': [ 1 ], 'dur': Rest([ 1.4 ]) )

d.asPairs.collect{|it| it.isKindOf(Rest).if {it.value.collect(Rest(_)) } {it} }
// -> [ degree, [ 1 ], dur, [ Rest(1.4) ] ]

But note that we need to do that “Rest flop” in the right spot in the sequence of transformations: after turning into pairs but before turning pairs into events. So the “right combo” is…

(~eventFlopRestsAndDechord = { |ev|
	var withFloppedRests = ev.asPairs.collect{|it|
		it.isKindOf(Rest).if {it.value.collect(Rest(_)) } {it} };
	withFloppedRests.flop.collect(_.asEvent)
});

(p = ~flatMap.(~eventFlopRestsAndDechord <> ~bf,
	Pbind(\dur, Pseq([0.5, Rest(1.4)], 3), \degree, Pseries(1, 5)),
	(dur: 0.1, degree: 8),
	Pseq([0.1*(1..9), [0]], inf)));

Pfin(20, p.trace).play;

Actually, that extra fix was a bit unnecessary/overkill here, since I could have just paired the Rests with 0 instead of [0]…

(p = ~flatMap.(~eventDechord <> ~bf,
	Pbind(\dur, Pseq([0.5, Rest(1.4)], 3), \degree, Pseries(1, 5)),
	(dur: 0.1, degree: 8),
	Pseq([0.1*(1..9), 0], inf))); // note 0 instead of [0]

Pfin(20, p.trace).play;

Unlike when using Pforai, here (with flatMap) the “row” doesn’t have to be an array, since the (merge/map) function processes entire rows in a single call, so a row can be something else too as long as the function handles it. (And my ~bf function was “auto-vectorized” from a scalar function anyway, with no code changes.)

The Pfin was needed because my quick take on flatMap uses an infinite-rep Ptuple. So just p.trace.play will “go on forever” looping back after the 3 somewhat different passages. To fix that:

// instead of
~flatMap = {|f ...pats| Ptuple(pats, inf).collect(f.(*_)).flatten};
// let's define/use
~flatMapr1 = {|f ...pats| Ptuple(pats).collect(f.(*_)).flatten};

(p = ~flatMapr1.(~eventDechord <> ~bf,
	Pbind(\dur, Pseq([0.5, Rest(1.4)], 3), \degree, Pseries(1, 5)),
	(dur: 0.1, degree: 8),
	Pseq([0.1*(1..9), 0], inf)));

p.trace.play;

That works ok here.

Alas it’s a conundrum to put a default value for the reps while still having a varargs array of patterns. But a mere top-level repeat is much easier to add with an extra Pn wrapper, so the repeats = 1 for Ptuple is a lot more useful in flatMap’s code.
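
For instance, a sketch of what that top-level Pn wrapper could look like (hypothetical usage, building on the ~flatMapr1 defined above):

// the whole flat-mapped sequence, repeated twice as a unit
Pn(~flatMapr1.(_+_, Pseq([60, 61]), [0, 10]), 2).iter.all
// -> [ 60, 70, 61, 71, 60, 70, 61, 71 ]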

The reason I had inf reps for Ptuple in flatMap initially is that I mistakenly wanted it to repeat finite Pseqs, but that’s the wrong mindset here. The right one is to have the Pseqs repeat themselves if needed:

~flatMap.(_+_, Pseq((60..65), inf), Pseq([[0, -3, 3], []])).iter.nextN(20);
// -> [ 60, 57, 63, 60, 57, 63, 60, 57, 63, 60, 57, 63, 60, 57, 63, 60, 57, 63, 60, 57 ]
// ^^ that's infinite, but differs from the right/intended output
// in having a too short period: note the 4th & 5th elements differ from
~flatMap.(_+_, Pseq((60..65), inf), Pseq([[0, -3, 3], []], inf)).iter.nextN(20);
// -> [ 60, 57, 63, 62, 59, 65, 64, 61, 67, 60, 57, 63, 62, 59, 65, 64, 61, 67, 60, 57 ]

// With ~flatMapr1 there's a much clearer distinction
~flatMapr1.(_+_, Pseq((60..65), inf), Pseq([[0, -3, 3], []])).iter.nextN(20);
// -> [ 60, 57, 63, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil ]
~flatMapr1.(_+_, Pseq((60..65), inf), Pseq([[0, -3, 3], []], inf)).iter.nextN(20);
// -> [ 60, 57, 63, 62, 59, 65, 64, 61, 67, 60, 57, 63, 62, 59, 65, 64, 61, 67, 60, 57 ]

Maybe I should go back and fix this (by replacing ~flatMap with ~flatMapr1) to remove a source of confusion in these (flatMap) examples… But maybe there’s something to be learned from this mistake too :)

I’ve now done a version of Pforp that has a \skip feature, which acts as a continue for the loop, i.e. doesn’t yield anything. I thought this would only be useful for syntactic conciseness, but not having to reject stuff afterwards helps performance a surprising amount, even when the amount of stuff rejected isn’t large.
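
To make the feature concrete, a small usage sketch (using Pseqs for readability; the output below is what I’d expect given the Pforpc source at the end of this post):

// the merge function returns \skip to drop a combination instead of yielding it
Pforpc(Pseq([3, 4, 5]), Pseq([4, 5, 6]), {|x, y| if(x != y) {[x, y]} {\skip}}).iter.all
// -> [ [ 3, 4 ], [ 3, 5 ], [ 3, 6 ], [ 4, 5 ], [ 4, 6 ], [ 5, 4 ], [ 5, 6 ] ]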

bench { 10000 do: { Pforpc((:3..5), (:4..6), {|x,y| if(x != y) {[x,y]} {\skip}}).iter.all }}
// time to run: 2.1056828770088 seconds.
// time to run: 2.1689775430132 seconds.

bench { 10000 do: { Pforp((:3..5), (:4..6), [_,_]).iter.reject({|a| a[0] == a[1]}).all }}
// time to run: 2.7107534459999 seconds.
// time to run: 2.8003267510001 seconds.

This isn’t too shabby even compared with comprehensions, which are of course a lot more optimized, but only for the common case of series. If you force comprehensions to use external iterators (as (: does below), they lose a lot of speed. In fact, seriesIter (the implementation of (: ) and/or the context switches to it seem to be the real bottleneck here.

bench { 10000 do: {{: [x,y], x<-(3..5), y<-(4..6), x !=y }.all} }
// time to run: 1.001463046 seconds.
// time to run: 1.0205968089999 seconds.

bench { 10000 do: {{: [x,y], x<-(:3..5), y<-(:4..6), x !=y }.all} }
// time to run: 2.726683586 seconds.
// A 2nd run of that interestingly crashed the interpreter!

The token \skip is configurable in Pforpc's constructor, by the way.
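
And a tiny (hypothetical) illustration of swapping in a different token via that 5th constructor argument:

Pforpc(Pseq([1, 2]), Pseq([1, 2]), {|x, y| if(x == y) {\drop} {[x, y]}}, 1, \drop).iter.all
// -> [ [ 1, 2 ], [ 2, 1 ] ]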

Pforpc (i.e. with skip/continue) source
Pforpc : Pattern {
	var <>outerPattern, <>innerPattern, <>mergeFunc, <>nilsToReset = 1, <>skipToken = \skip;

	*new { arg outerPattern, innerPattern, itemCombineFunc, nilsToReset = 1, skipToken = \skip;
		^super.newCopyArgs(outerPattern, innerPattern, itemCombineFunc, nilsToReset, skipToken)
	}

	embedInStream {  arg inval;
		var outerStr = outerPattern.asStream, innerStr = innerPattern.asStream;
		// could treat mergeFunc as a pattern too, but it's not clear if that helps much
		// since functions can pull from streams "on their own" anyway...
		var outerVal = outerStr.next(inval), innerVal, mergedVal;
		var nilCount = 0, mustReset;

		if (outerVal.isNil) { ^nil; };
		loop {
			innerVal = innerStr.next(inval);
			if (innerVal.isNil) {
				while {
					nilCount = nilCount + 1;
					mustReset = (nilCount >= nilsToReset);
					// always advance outer on a nil (sub-sequence?)
					outerVal = outerStr.next(inval);
					if (outerVal.isNil) { ^nil };
					if (mustReset) { innerStr.reset; /* "innerStr.reset".postln; */};
					innerVal = innerStr.next(inval);
					innerVal.isNil && (mustReset.not)// repeat while-loop if true
				};
				// PforEach(1, nil, {}) would hang without next check
				if (innerVal.isNil) { ^nil } { nilCount = 0 };
			};
			mergedVal = mergeFunc.value(outerVal.copy, innerVal.copy, inval.copy);
			if (mergedVal != skipToken) { inval = yield(mergedVal) };
		};
	}
	// TODO: handle cleanups (for event streams)
	// todo: storeOn
}