Py = for the Python programming language
Ly = for the LilyPond music notation package
Mento = for Partimento (that's a painting of Naples inside PyCharm :)
So we've got humans that play chess, and humans that play music.
But we also have machines that play chess. So.... I wonder if we can help a machine improvise (and have a human improvise along with it?)
I've made my way through enough of the Python language & LilyPond syntax to get started -- now it's time to dig into some programming, and later machine learning.... baby steps :)
"Small moves, Ellie (Sparks). Small moves." -- from Contact
Here's a start: Rule Of The Octave (RO) in Major key dyads in the CSV format ('rule_of_the_octave.csv')
Using pandas (not quite sure what that means yet) we're looking through the attributes (or methods?) of our variable ro (which reads in the contents of 'rule_of_the_octave.csv') in a Python Jupyter Notebook.
ro.values displays RO as 2-dimensional array (finally, an array that I actually care about!)
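A minimal sketch of that pandas step -- the column names and dyads below are made-up stand-ins for the real 'rule_of_the_octave.csv', just enough to show the `read_csv` / `.values` calls:

```python
import io

import pandas as pd

# Tiny stand-in for 'rule_of_the_octave.csv' -- the real column names
# and dyads differ; this just demonstrates the pandas calls.
csv_text = """scale_degree,dyad
1,1-8
2,2-6
3,3-8
"""
ro = pd.read_csv(io.StringIO(csv_text))

print(ro.values)        # the whole table as a 2-D NumPy array
print(ro.values.shape)  # (3, 2) for this sample
```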
Also, is it a Python Jupyter Notebook.... or a Jupyter Notebook using Python?... hmmmm...
Switched from Jupyter Notebooks to PyCharm:
Have a CSV for a few bass motions (root layer) to start, based on percentages:
1-2, 2-1
1-7, 7-1
#5-6 (from 1), 6-#5
(Figuring these percentages out as I work through Furno #1).
Also have CSVs for the 3rds (second layer) and 5ths/6ths (third layer) just to make sure I could route them to the Arturia synths through MIDI channels 1-3 (in Python 0-2) with mido.
Keys: For now I'm hard coding the Keys (C Major/A Minor; will have transposition tables for those later)
Meter: Dividing the number of generated notes by 2, 3, or 5 to get some kind of meter setting displayed (a little wonky still)
Repeats: hard coding a variable to set the number of repeats (probably won't make that random anyway) of the short form; root notes through each repeat can either be the same or generate a new set ('cause why not!)
Form: Just a single repeated section so far; next is figuring out how to transition to the next section that is repeated (have to go back through my Form & Analysis music texts)
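My reading of the meter step above, as a rough sketch -- the "first divisor that divides evenly wins" rule is a guess at the intent, not the actual code:

```python
# Guess a displayed meter from the number of generated notes by trying
# divisors 2, 3, then 5 and taking the first that divides evenly.
def guess_meter(note_count):
    for beats in (2, 3, 5):
        if note_count % beats == 0:
            return f"{beats}/4"
    return "free"  # nothing divides evenly -- still wonky, as noted

print(guess_meter(15))  # 3/4 (3 is the first divisor that fits)
```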
Bass Motions:
I've started improvising along with the generated bass line for the ear training challenge and because it's fun :)
For now the first root note is always 1, the last root is always 1, and the penultimate root is 5 (in Major)
Each note is currently 3 seconds/3000ms, with a random float generator between 2.8 and 3.1 for more of a greasy time pulse than a strict metric pulse
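Those constraints can be sketched as a weighted generator -- the weights below are made-up placeholders, not the actual Furno-derived percentages:

```python
import random

# Middle roots drawn from percentage weights; first/last pinned to 1,
# penultimate pinned to 5, per the Major-key rule above.
# (These weights are illustrative only.)
MOTION_WEIGHTS = {1: 30, 2: 20, 5: 25, 6: 15, 7: 10}

def generate_roots(length=8):
    middle = random.choices(list(MOTION_WEIGHTS),
                            weights=list(MOTION_WEIGHTS.values()),
                            k=length - 3)
    return [1] + middle + [5, 1]

roots = generate_roots()
duration = random.uniform(2.8, 3.1)  # the "greasy" per-note length
print(roots, round(duration, 2))
```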
I'm applying what I'm figuring out while adapting RO for guitar in the Partimento-ish section.
Also, logged into GitHub in PyCharm to check versions in and out; have to do a tutorial on how that all works!
Started with 1, 2, 6 (with #5 as 59), 7.
Added two possibilities (two octaves) for 5; removed #5 (was 59) to simplify for now.
Expanding to two octaves to see how that works out, so have to reconfigure the percentages.
Keeping a two octave spread for the bass notes, and getting closer on percentages.
A little wonky, but seeing what the machine does as it is "loosely" guided by RO & Partimento bass motions, as I follow along playing layer 2 (3rds) & layer 3 (5ths or 6ths).
Needed to change the CSV labels to account for enough digits for all available octaves (and make room for sharps & flats.)
Playing through Arturia's Piano V2.
Was going to read in another CSV for converting the current bass note's number (110, 120, etc.) to display the pitch/octave (C2, D3, E4, etc.) -- but then realized the key/value pairs of a Python dictionary are perfect for that.
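That dictionary idea, sketched with a few illustrative entries (not the full table):

```python
# Bass-note number -> pitch/octave label for display; 110, 120, etc.
# follow the movable-DO numbering (entries here are examples only).
number_to_pitch = {
    110: "C2",
    120: "D2",
    130: "E2",
    140: "F2",
}

print(number_to_pitch[120])  # D2
```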
So far, Furno #1 (Major) bass motions make up the percentages/choices. Furno #2 is in Minor, so that'll have to be a new CSV.
Creating Python dictionaries as transposition devices, after the user/human inputs the key (which is then stored in 'label_to_midi'.)
The numbers 110, 120, 130, 140, etc. are being used to represent 'Movable DO' in Solfège (as opposed to 'Fixed DO'.)
(Going to combine these dictionaries so each key will have multiple values in an array like: 110: [ 'C2', 36] --- just have to figure out how to implement that correctly!)
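One way the combined lookup could look, using the 110: [ 'C2', 36] shape from above (MIDI numbers assume the C4 = 60 convention, under which C2 = 36; entries are illustrative):

```python
# Movable-DO number -> [pitch label, MIDI note] in a single lookup
# (C2 = MIDI 36 under the C4 = 60 convention).
label_to_midi = {
    110: ["C2", 36],
    120: ["D2", 38],
    130: ["E2", 40],
}

pitch, midi_note = label_to_midi[110]
print(pitch, midi_note)  # C2 36
```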
Each note is now 2 seconds/2000ms, with a random float generator between 1.9 and 2.1 to vary a strict metric pulse; then that duration is divided in half to create an in-between beat (subdivision) to better feel the coming downbeat.
It's difficult enough to improvise over a bass line generated from percentage weights, so wanted to alleviate the tempo guessing as much as possible.
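The beat/subdivision timing boils down to something like this sketch:

```python
import random

# ~2 s per note with slight jitter; half of that is the in-between
# subdivision that telegraphs the coming downbeat.
beat = random.uniform(1.9, 2.1)
subdivision = beat / 2

print(round(beat, 3), round(subdivision, 3))
# In the play loop, each note then gets two waits of `subdivision`
# (e.g. time.sleep(subdivision)), with the click on the second one.
```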
Using a metronome click I recorded + simpleaudio for WAV playback:
import simpleaudio as sa
My brother showed me how to simplify some previous code.
Ahhhhh, that's better!
Along the same lines as above, need to figure out how to simplify this code.
Probably something along the lines of:
import time

click = sa.WaveObject.from_wave_file('click.wav')  # (placeholder name for the recorded click file)
clicks_left = 3
while clicks_left > 0:
    click.play()             # play current click
    time.sleep(subdivision)  # wait the subdivision length
    clicks_left -= 1
(out of PreClick and on to PlayTune....)
Although, simplifying these parts of the code is what tends to confuse us non-programmers!
Trying an R.O. CSV to see whether moving to TensorFlow/Keras graphs (still learning what that is) makes more sense than the CSV percentages.
Also, trying the 'Happy Birthday' rhythmic motive variations below to avoid the downbeat, since I have to wait for the machine to give each successive bass note in the form/sequence (some of them are guessable, some not.)