Short octaves are an old idea. I don’t think octave specificity has been embraced in 2-D keyboards yet, though. So I’ve had some thoughts about it.
Let’s start with just intonation. You can choose a 1/1, and take every pitch with an interval within a Tenney limit relative to it. That is, every ratio m/n where m*n is below a given cutoff.
The human hearing range is around 1000:1 (16 Hz to 16 kHz). Call it 1024:1 (10 octaves) because that makes the arithmetic easier. The extremes will be 1/32 and 32/1 relative to a 1/1 of around 512 Hz (treble clef). This gives a Tenney limit of 32. That’s not very big. It’s roughly 5-limit within the octave. 5/4 and 6/5 get in, but 7/6 and 8/5 are out.
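A quick sketch of the enumeration, assuming the cutoff works as described (every reduced m/n with m*n no greater than the limit), restricted here to ratios within one octave:

```python
from math import gcd

def tenney_limited(cutoff):
    """Reduced ratios m/n in [1, 2] with m*n <= cutoff."""
    out = []
    for n in range(1, cutoff + 1):
        for m in range(n, 2 * n + 1):        # keep 1 <= m/n <= 2
            if m * n <= cutoff and gcd(m, n) == 1:
                out.append((m, n))
    return sorted(out, key=lambda r: r[0] / r[1])
```

With a cutoff of 32, 5/4 (product 20) and 6/5 (product 30) make it in, while 7/6 (42) and 8/5 (40) are out, as claimed above.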
A Farey limit gives more variety in the middle. You take every m:n where both m and n are no greater than 32. This gives

16/1 17/1 18/1 19/1 20/1 21/1 22/1 23/1 24/1 25/1 26/1 27/1 28/1 29/1 30/1 31/1 32/1

as the top octave. In the middle, you get the full 31-limit. Maybe that’s too much. Drop it to 8 octaves (same as a piano) and it becomes the 16-limit. Top octave:

8/1 9/1 10/1 11/1 12/1 13/1 14/1 15/1 16/1

First octave up:
1/1 16/15 15/14 14/13 13/12 12/11 11/10 10/9 9/8 8/7 15/13 7/6 13/11 6/5 11/9 16/13 5/4 14/11 9/7 13/10 4/3 15/11 11/8 7/5 10/7 13/9 16/11 3/2 14/9 11/7 8/5 13/8 5/3 12/7 7/4 16/9 9/5 11/6 13/7 15/8 2/1
I count 40 pitches compared to 8 at the top. There’s a noticeable bulge in the middle.
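The counting can be checked mechanically. A minimal sketch, counting each pitch in its octave but not the octave’s upper endpoint (so 2/1 belongs to the next octave up, matching the counts above):

```python
from math import gcd

def farey_pitches(limit, lo, hi):
    """Reduced ratios m/n with m, n <= limit and lo <= m/n < hi."""
    return sorted(
        ((m, n)
         for n in range(1, limit + 1)
         for m in range(1, limit + 1)
         if gcd(m, n) == 1 and lo <= m / n < hi),
        key=lambda r: r[0] / r[1],
    )

print(len(farey_pitches(16, 1, 2)))   # first octave up
print(len(farey_pitches(16, 8, 16)))  # top octave
```

The 16-limit gives 40 pitches in the first octave against 8 in the top one.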
Tenney-Euclidean cutoffs are interesting. Write the ratio as a product of prime powers; the complexity is the sum of the squares of each exponent times its prime’s log. So, working in octaves, the complexity of 32:1 is simply (5*log2(2))**2 = 25 square octaves. 31:1 is log2(31)**2, which is about 24.5 square octaves. 35:1, however, is log2(5)**2 + log2(7)**2, which comes out as 13.3 square octaves. So a cutoff that excluded 35:1 would also remove everything between it and 30:1, but still let 60:1 through (2*2*3*5 works out to about 11.9 square octaves).
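The numbers above can be reproduced with a small function, assuming the formula just given (sum over primes of the squared weighted exponents, with m/n in lowest terms):

```python
from math import log2

def te_complexity(m, n=1):
    """Tenney-Euclidean complexity of m:n in square octaves:
    sum over primes p of (e_p * log2(p))**2, with e_p the exponent
    of p in m/n. Assumes m/n is already reduced, so factoring m*n
    gives the right absolute exponents (signs vanish on squaring)."""
    total = 0.0
    x = m * n
    p = 2
    while x > 1:
        e = 0
        while x % p == 0:
            x //= p
            e += 1
        if e:
            total += (e * log2(p)) ** 2
        p += 1
    return total

print(te_complexity(32))   # 25 square octaves
print(te_complexity(35))   # about 13.3
```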
The Euclidean behavior looks bizarre, but it has a rationale. What it’s doing is favoring ratios with the maximal number of distinct prime factors. These are more likely to form simple intervals with other pitches in the system than ratios of one or two primes. Prime powers like 16 are unduly punished, however.
It happens that the metrics for the RMS error over a Tenney limit give more weight to simple primes than Tenney-Euclidean weighting does, and so do a more sensible job of penalizing high primes -- doubly so, because they’re both large and less likely to occur alongside lots of other primes.
The best Euclidean metrics or inner products might be those for the RMS error over a Farey limit. They’ll favour composites, and small intervals, and small primes. An inner product on JI can be applied to tempered intervals according to Gene Smith’s interval complexity.
Say G is the metric for JI and [M> is the unweighted mapping. The inverse of <M]G[M> (with <M] as the transpose of [M>) is the metric for interval complexity in the temperament class defined by [M>.
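A sketch of this construction in matrix terms, where <M] is a matrix whose rows are vals and [M> its transpose. The meantone mapping and the diagonal Tenney-Euclidean choice of G are my example assumptions, not fixed by the text:

```python
import numpy as np

# Example temperament: meantone over primes 2, 3, 5, with vals
# <1 1 0] and <0 1 4], so a monzo v maps to M @ v in
# (period, generator) coordinates.
M = np.array([[1, 1, 0],
              [0, 1, 4]], dtype=float)

# An assumed JI metric G: diagonal log2(p)**2 weighting, matching
# the square-octave units used earlier.
G = np.diag(np.log2([2.0, 3.0, 5.0]) ** 2)

# <M]G[M> is M @ G @ M.T; its inverse is the interval-complexity
# metric on tempered coordinates.
metric = np.linalg.inv(M @ G @ M.T)

def tempered_complexity(t):
    """Complexity of a tempered interval t, e.g. the fifth (0, 1)."""
    t = np.asarray(t, dtype=float)
    return float(np.sqrt(t @ metric @ t))
```

The metric comes out symmetric and positive definite, as a complexity measure should.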
All these metrics are dualistic in that they consider otonalities and utonalities equally. In practice, it might be better to favor otonalities. Then, for meantone, flats would tend to be at the bottom of the keyboard and sharps at the top. I’m not sure how to systematize this, though.
Note: the pseudoinverse of [M> might be Inv(<M][M>)<M]. Then [M>+ [M> = Inv(<M][M>)<M][M> = I, the identity, and [M> [M>+ is [M>Inv(<M][M>)<M]. This is a metric for interval complexity in JI intervals, as the [M> and <M] transform to the temperament. The thing is, replace [M> with the weighted mapping W[M> and you get a metric of W[M>Inv(<M]WW[M>)<M]W. This is equivalent to the interval complexity metric above where G=WW, but it applies to weighted JI intervals. I'm not sure if Gene defined it like that. It looks like the straight inverse definition is simpler than the pseudoinverse one when you take account of the weighting.
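These pseudoinverse identities are easy to check numerically. A sketch, again using meantone as the assumed example, with K standing in for [M> (the transpose of the val matrix):

```python
import numpy as np

M = np.array([[1, 1, 0],
              [0, 1, 4]], dtype=float)
K = M.T                                   # [M>, a tall full-rank matrix

# [M>+ = Inv(<M][M>)<M]
K_plus = np.linalg.inv(K.T @ K) @ K.T

# [M>+ [M> reduces to the identity ...
assert np.allclose(K_plus @ K, np.eye(2))

# ... while [M> [M>+ acts as a metric/projector on JI intervals.
P = K @ K_plus
assert np.allclose(P, P @ P)              # idempotent, as a projector is

# This formula agrees with the standard Moore-Penrose pseudoinverse.
assert np.allclose(K_plus, np.linalg.pinv(K))
```

The weighted version is the same computation with W @ K in place of K.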