Canned Music
I used to skip school when there was a job. I was a wedding photographer. A boy with a film SLR who read his father’s books with a dictionary in hand, trying to understand how depth of field works with aperture. I loved finding the sweet spot among film speed, shutter speed, and aperture. The darkroom afterward, chemicals, timing, the image emerging slowly under red light.
When cameras went digital, I lost interest in photography. My father, though, loved digital. He thought it expanded the possibilities. But I quit. Digital made photography accessible, but the photographer’s precise control was lost. Or so I thought.
For me, the constraint was the craft. When the constraint left, so did I. The guy who shoots birds with a DSLR doesn’t care what the camera is. His resistance comes from meeting the right species in the right place at the right time. An ornithologist needs a DSLR for the work. Everyone chooses their resistance.
Someone before me probably quit SLRs because they wanted to open and close the lens by hand. Someone before them wanted to mix their own chemicals. Each generation finds their resistance somewhere.
In 1877, Thomas Edison recorded “Mary had a little lamb” on a tinfoil cylinder. Technically, that was the first record ever produced. He actually envisioned dictation, talking clocks, audio letters, office assistance. Music was not on his list. At least it was not the priority. But by 1889, coin-operated phonographs appeared in San Francisco. Five cents a song. Within a decade, the recording industry existed.
Before Edison, music required a human being in the room. If you wanted music at a dance, you hired musicians. If you wanted your child to fall asleep to a melody, you opened your mouth and sang.
The phonograph did not only remove the human from the room. It did something else too: “phonograph fright.” The recording horn had no audience to draw energy from, no room for stage presence or charisma. It captured every microscopic accident, a finger touching two strings when it should have touched one. Violins sounded ghostly. High voices turned shrill. Jazz bands had to replace drums with cowbells, double bass with tuba. The machine dictated what music could be. Including the length of the song. Edison’s cylinders could hold at most four minutes of sound. You had to edit your composition to fit the room the cylinder provided.
The musicians felt it before anyone realized it. Something in the relationship between performer and music had shifted. You were no longer playing for the people in the room. You were playing for a metal horn. But it could carry your performance to places you would never go. The ephemeral became permanent. The negotiation between musician and moment was replaced by the recording.
John Philip Sousa, the “March King” — the most famous musician in America, one of the world’s first recording stars, over four hundred titles by 1897. And yet in 1906, he sat before Congress and said the phonograph would destroy music.
He told the committee that when he was a boy, you could find young people singing on their porches every summer evening. Now all you heard were the machines, going night and day. He recalled boating on the river as a young man, when voices filled the air. The previous summer, at one of the biggest yacht harbors in the world, he hadn’t heard a single human voice all season. Every boat had a gramophone. The irony was that the gramophone played his music.
He predicted children would grow up hearing only phonographs and become “simply human phonographs, without soul or expression.” He saw the country bands with their local pride doomed. And then he arrived at the image that haunts: a mother who would no longer sing her child to sleep but would put the infant to sleep by machinery.
The displacement didn’t stop. It accelerated.
Synchronized sound entered movie theaters in the late 1920s, and thousands of theater musicians lost their jobs to a strip of celluloid. The American Federation of Musicians formed the Music Defense League and spent half a million dollars fighting what they called the “Evil Robot,” depicted in newspaper advertisements as a mechanical monster feeding instruments into a grinder. In 1942, union president James Petrillo ordered a total recording ban. The longest strike in entertainment history. The musicians won their demand for royalties. But the strike killed the big bands. Vocalists, who belonged to a different union, kept recording. Sinatra, Crosby, Perry Como sang backed by vocal groups instead of orchestras. When the ban ended, the audience had changed. The singer was the star. The ensemble was dispensable.
Then magnetic tape. Editing. Splicing. Multi-tracking. A single musician could layer part upon part and become an entire ensemble alone. An assembly of fragments.
Then sampling. A real orchestra recorded once could be sliced into individual notes, mapped across a keyboard, and triggered by a single person in a bedroom. Then sample libraries grew sophisticated enough that a composer could produce a full orchestral score without a single living player. The samples sounded almost real. Close enough. Close enough is all displacement has ever needed.
Now, AI. The samples themselves become unnecessary. You don’t trigger individual notes anymore. You describe what you want: “a melancholic string passage in D minor, building to a crescendo” — and the machine produces it. Not from recordings of real musicians. From patterns learned across millions of recordings of real musicians, abstracted into a model that has never held a bow or felt a string beneath its fingers.
Each step removed a layer of friction. Edison removed the need for the musician to be in the room. Tape removed the need for the performance to be continuous. Sampling removed the need for the musician to be present at all. AI removed the need for the musician to have ever existed.
And at each step, the thing that was lost was not the sound. The sound got better — or at least more convenient. The conversation between the player and the instrument, between the ensemble and the room, between the imperfect human voice and the silence it was trying to fill — all gone.
Mrinank Sharma’s paper on AI disempowerment studied 1.5 million conversations and found patterns we explored in the first essay of this series. Humans outsourcing judgment, delegating decisions, letting the machine do the thinking they used to do themselves. The disempowerment was real and measurable. But as we argued then, it wasn’t new. The woman asking AI whether to leave her husband was doing what humans have always done. She was asking the sister, the priest, the therapist, the self-help book. AI didn’t invent the outsourcing.
The experts now say AI will kill jobs and will create new jobs. They’re probably right about that too. Production always adapts. New roles emerge. Old ones shift. The system absorbs shocks. It is clumsy and slow, and people get hurt in the transition. But it self-corrects.
But here’s the question they don’t answer: is that the recovery?
Because jobs are a production problem, and production recovers. But the metis, the embodied knowledge that lived in the singer’s imperfect voice, in the darkroom’s red light, in the hands of the cropper? That doesn’t self-correct. Once the conversation between the human and the medium ends, no new job title brings it back.
The recovery, in the economic sense, is not in question. The question is what you recover. What resistance you choose. What conversation you refuse to let the machine have without you.
The tool doesn’t define the artist. The relationship with resistance does. You can use AI and not be displaced, maybe. You can use a manual SLR and be fully displaced if you’re just following the recipe, your heart not in it.
The heart is where resistance is.
Both sides are right. They always are.
But the question is, what recovery can we contribute that could empower the sister who asks the machine, “Is he a narcissist?”