David Cope didn't set out to drive anyone crazy. In 1980, the composer envisioned a tool to help cure his creative block: a machine that could keep track of all the sounds and loose threads running through his head, find the similarities among them, and generate fully formed musical ideas. So he built it.

The product of more than six years of experimentation, his songwriting computer program was called EMI (pronounced "Emmy"), or Experiments in Musical Intelligence. In short, EMI worked by pattern matching: breaking pieces of music into small chunks, analyzing them, and figuring out what sounds similar to what and where it ought to go. Cope meant to apply this level of analysis to his own body of work, to deduce what his musical style was, but he realized it worked just as well with other composers. Feed enough of another composer's work, say, Johann Sebastian Bach's, into EMI, and it would identify what makes Bach sound like Bach and spit out imitations of Bach so good that the average listener might not know how to tell them apart from the real thing.
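EMI's real analysis went far deeper, but the core idea, learning which fragments tend to follow which in a corpus and then recombining them, can be sketched as a simple Markov chain. Everything below (the toy corpus, the order-1 chain over single notes) is an illustrative assumption, not Cope's actual algorithm:

```python
import random
from collections import defaultdict

# A toy "corpus": melodies as lists of note names. EMI's corpus was many
# complete works, analyzed far more deeply than single-note transitions.
corpus = [
    ["C", "D", "E", "F", "G", "F", "E", "D", "C"],
    ["C", "E", "G", "E", "C", "D", "E", "D", "C"],
    ["G", "F", "E", "D", "C", "D", "E", "C"],
]

# Learn which notes tend to follow which (an order-1 Markov chain).
transitions = defaultdict(list)
for melody in corpus:
    for current, nxt in zip(melody, melody[1:]):
        transitions[current].append(nxt)

def generate(start="C", length=8):
    """Recombine the learned patterns into a new melody in the same 'style'."""
    melody = [start]
    while len(melody) < length:
        melody.append(random.choice(transitions[melody[-1]]))
    return melody

print(generate())  # e.g. ['C', 'D', 'E', 'D', 'C', 'E', 'G', 'F']
```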

At a 1997 lecture at Stanford University, attendees listened as University of Oregon professor Winifred Kerner played three pieces on the piano: one by Bach, one by EMI in the style of Bach, and one by her husband, Steve Larson, a fellow professor. Asked to identify which was which, the audience mistook Larson's composition for the computer's and EMI's piece for genuine Bach. Larson was devastated, telling the New York Times at the time: "Bach is absolutely one of my favorite composers... That people could be fooled by a computer program was very disturbing."


He wasn't the only one: Cope told Gizmodo that listeners didn't like being asked to guess which piece was which, especially if they guessed wrong. Critics, meanwhile, said that EMI's compositions didn't sound like they had any "soul."

"我不知道什么是灵魂," 他在加州大学圣克鲁斯校区附近的家庭办公室的电话中告诉我, 在10年前他退休后, 他一直是音乐教授。"你可以查字典, 但他们都说: 它的东西, 我们不知道它是什么, 但我们已经得到了它, 我们可以告诉它在那里。这对我来说不是很有用。

Cope is considered the godfather of music-making AI, and he maintains that the future is bright: the right algorithms will help unlock new kinds of expression in songwriting that humans couldn't otherwise access. Until recent years, teaching AI to write songs like humans was the work of academics, focused mostly on classical music. Today, researchers at tech companies like Sony and Google are asking: What if AI could write pop songs? How do we train it, and will the end product be as good as what's on the radio? Could it be better? Their efforts leave us wondering: Is AI the latest "soul"-crushing technology to edge musicians out of their craft, or is it a new kind of instrument, one that lives in your computer, might know what you want better than you do, and could ultimately improve musicians' odds of creating something truly great?


Efforts to remove human decision-making from the process of composing music are centuries old. In 1787, Mozart published a guide to a "musical dice game" in which players rolled a die several times and strung together pre-written measures of music associated with each of the die's six faces. The end result was a complete, if randomly assembled, piece of music: songs written by the numbers.
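The mechanics of the dice game are easy to simulate: each roll just indexes into a table of pre-written measures. Below is a hypothetical miniature; the real Würfelspiel tables were larger (the Mozart-attributed one used two dice over a 16-measure piece), and the measure labels here are placeholders:

```python
import random

# Hypothetical miniature dice game: the piece is 4 measures long, and for
# each position a single die roll picks one of 6 pre-written options.
measure_table = [
    [f"measure {pos}-{face}" for face in range(1, 7)]
    for pos in range(1, 5)
]

def roll_a_piece():
    """Roll one die per position and string the chosen measures together."""
    return [options[random.randint(0, 5)] for options in measure_table]

print(roll_a_piece())  # e.g. ['measure 1-4', 'measure 2-2', ...]
```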


In 1957, two professors at the University of Illinois, Lejaren Hiller and Leonard Isaacson, programmed a musical score on the school's room-sized ILLIAC computer. They reasoned that music must follow strict rules to sound pleasing to our ears, and that if a computer could learn those rules, perhaps it could compose music by generating random sequences of notes that obeyed them.

In one experiment, they programmed the ILLIAC to write melodies that satisfied certain requirements: the range couldn't exceed one octave, the piece had to begin and end on a C note, and so on. The computer generated one note at a time, but if it produced a note that broke the rules, the program rejected it and tried again.
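That generate-and-test loop translates almost directly into code. A minimal sketch: the two rules below (stay within one octave, begin and end on C) come straight from the description above, while the MIDI-number encoding and everything else is an assumption for illustration.

```python
import random

LOW_C, HIGH_C = 60, 72  # one octave of MIDI note numbers, C4 to C5

def violates_rules(melody, note, length):
    """Reject any note that breaks the constraints described above."""
    if not LOW_C <= note <= HIGH_C:                  # range within one octave
        return True
    if len(melody) == 0 and note != LOW_C:           # must begin on C
        return True
    if len(melody) == length - 1 and note != LOW_C:  # must end on C
        return True
    return False

def compose(length=8):
    """Generate one note at a time, retrying whenever a rule is broken."""
    melody = []
    while len(melody) < length:
        candidate = random.randint(LOW_C, HIGH_C)
        if not violates_rules(melody, candidate, length):
            melody.append(candidate)
    return melody

print(compose())  # e.g. [60, 65, 63, 70, 62, 67, 64, 60]
```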

Their final work, the Illiac Suite, broke new ground, shattering the idea that music must always be a melodic expression of intense experience or feeling. Hiller and Isaacson acknowledged that the public might be wary of it. "When the subject of our work comes up, the question gets asked: 'What will happen to the composer?'" they wrote in their book, Experimental Music: Composition with an Electronic Computer. But they offered this: computers don't know whether they're right or wrong. Computers just follow instructions. Even if a program can write songs quickly, a human ultimately needs to weigh in on whether they sound right, or good.

An excerpt from Lejaren Hiller and Leonard Isaacson's Illiac Suite.

"这就是为什么一些作家谈到计算机程序员与电脑交谈" 的原因。他给它提供了某些信息, 并告诉计算机如何处理这些信息。计算机执行这些指令, 然后程序员检查结果, "他们写道。


Cope would pioneer this model, the human composer working alongside a computer counterpart, in the mid-'90s.

He invented a program he called Emily Howell, named for EMI and for Cope's father, which could compose in an entirely new style of music rather than simply imitating other composers. Each time Emily Howell came up with new music, Cope could tell the program whether or not he liked it. "The program changes ever so slightly based on my preferences, but there's still some randomness in the program," he told Gizmodo. In other words, it would get better at producing things he'd like, but something he'd given a thumbs-down before might still "creep" back in.
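One way to picture that feedback loop is as a set of preference weights, nudged up or down with each thumbs-up or thumbs-down but never allowed to hit zero, so that rejected ideas can still resurface. This is purely an illustrative sketch under those assumptions; Emily Howell's actual internals were far more elaborate.

```python
import random

# Hypothetical stylistic options with preference weights the program
# adjusts after each piece of feedback. Thumbed-down options keep a
# small weight, so rejected ideas can still "creep" back in.
weights = {"stepwise": 1.0, "leaping": 1.0, "chromatic": 1.0}

def propose():
    """Pick a style, favoring (but not guaranteeing) well-liked options."""
    options, w = zip(*weights.items())
    return random.choices(options, weights=w)[0]

def feedback(style, liked):
    """Nudge the weight up or down, never letting it reach zero."""
    weights[style] = max(0.1, weights[style] * (1.3 if liked else 0.7))

choice = propose()
feedback(choice, liked=False)  # thumbs down: less likely, not impossible
print(choice, weights)
```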

"在一周的时间里, 你会对这个计划非常熟悉," 他说。"我经常觉得在这个过程中, 我和一个人说话, 这听起来很奇怪。


In September 2017, a research team at Sony Computer Science Laboratories in Paris, with help from French musician Benoît Carré, released two songs written with the help of AI: "Daddy's Car," a song in the style of the Beatles, and "The Ballad of Mr Shadow," in the style of American songwriters like Duke Ellington and George Gershwin. To do it, the team used Flow Machines, a tool designed to help guide songwriters and push them to be more creative, not to do all the work for them.

François Pachet, who led the development of Flow Machines, demonstrates how the tool can map one musical style onto another melody to create an entirely new song.

“My goal has always been to put some audaciousness, some boldness back into songwriting,” François Pachet, who spearheaded the development of Flow Machines at Sony CSL, told Gizmodo over a video call in January. “I have the impression that in the 1960s, ‘70s, maybe ‘80s, things were more interesting in terms of rhythm, harmony, melody, and so on,” he said, although he admitted that might make him a dinosaur. (“People can say I’m outdated. Maybe, I don’t know.”)


Pachet, who now leads the AI research arm at Spotify, oversaw the development of Flow Machines for years, bringing interested musicians into the studio to experiment with adding it to their songwriting process. (Flow Machines also received funding from the European Research Council.) His work laid the foundation for an album that he and Carré started (and Carré would finish, after Pachet started at Spotify): a multi-artist album titled Hello World, featuring various pop, jazz, and electronic musicians. The songs all include some element (the melody, the harmony, or what have you) that was generated by AI and finessed by the artists, just as Hiller and Isaacson suggested 60 years ago.

SKYGGE feat. Kiesza, “Hello Shadow” (Music Video) composed, in part, by AI for the album “Hello World.”

To record a song with Flow Machines, artists start by bringing in something that’s inspiring them: a guitar track, a vocal track (their voice or someone else’s), lead sheets, or MIDI files containing data about a given melody or harmony. Flow Machines analyzes these things along with tens of thousands of other lead sheets it has in its database and “generates, on the fly, a score,” Pachet says. Like with Emily Howell, if the artist doesn’t like it, they can reject it and Flow Machines will come up with something else—but if they do like it, or a part of it, they can begin to play around with the music, editing specific notes or recording their own instrumentals or lyrics over it. Or you can bring in a track—let’s say a guitar track from a musician you really admire—and ask Flow Machines to map it onto a melody that you’re working on, or map it onto a melody and mix in a Frank Ocean vocal track. The result is meant to sound like getting the three of you in a room together—a guitarist performing in your style, Frank Ocean singing along over it—albeit, when Pachet demonstrates this function at a TEDx Talk (video above), the result is a bit choppy.
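At its core, the workflow Pachet describes is an interactive accept/reject loop wrapped around a generator. A minimal sketch of that loop, with a random stand-in for the generator since Flow Machines' internals aren't public:

```python
import random

def generate_score(inspiration):
    """Stand-in generator. The real Flow Machines analyzes the artist's
    input against tens of thousands of lead sheets; here, random notes."""
    return [random.choice("CDEFGAB") for _ in range(8)]

def session(inspiration, artist_likes, max_rounds=50):
    """Keep generating until the artist accepts something to build on."""
    for _ in range(max_rounds):
        score = generate_score(inspiration)
        if artist_likes(score):
            return score  # the artist now edits notes or records over it
    return None

# Toy "artist" who only accepts scores that end on the tonic, C.
print(session("guitar track", artist_likes=lambda s: s[-1] == "C"))
```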


This process is fairly quick, usually taking between a few hours and a few days. The idea is to make composing music as painless as it is rewarding. “You have an interface where you can have an interactive dialogue, where Flow Machines generates stuff, and you stop if you think it’s really good, if you think it’s great. If you don’t think it’s best, then you continue,” said Pachet.

Generation of Lead Sheets with FlowComposer, a demonstration on YouTube.

He added: “That was the goal, to bring in artists, allow them to use the machine in any possible way, with the only constraint at the end being that the artists should like the results. They should be able to say, ‘OK, I endorse it, I put my name on that.’ That is a very, very demanding constraint.”


It’s great if artists like it, but what about listeners? It’s not clear that audiences are huge fans of music written with the help of AI yet—although there’s also nothing, necessarily, stopping them from becoming fans, either. One of the most famous names on the album is Kiesza, who sings the title track; as of writing, her song has amassed over 1.8 million plays on Spotify. (When it was released on December 1, it appeared on Spotify’s New Music Friday, per a Reddit thread cataloguing the playlist’s additions from that week.) For (an extreme) comparison, Cardi B’s “Bodak Yellow” has over 10 million plays on Spotify—but still, getting over a million streams is encouraging.

When trying to predict the future of music written with AI, it may help to look to non-U.S. markets. In February, the London-based company Jukedeck—whose AI-powered online tool creates short, original music aimed primarily at video-makers and video game designers—collaborated with the Korean music company Enterarts to put on a concert in Seoul. The music was performed by K-pop stars—like Bohyoung Kim from the group SPICA and the girl group High Teen—but the basis for the songs came from music composed by Jukedeck’s AI system. According to Jukedeck’s founder, it was attended by 200-300 people, almost all of them members of the media. The company is planning on releasing three more “mini-albums” this year. If they do, they have their work cut out for them: The first mini-album has less than 1,000 streams on Spotify.


In an interview with the Guardian eight years ago, David Cope said that AI-generated music would be “a mainstay of our lives” in our lifetime. That hasn’t happened quite yet; the aforementioned songs aren’t landing on the Top 40 so much as they are generating a lot of buzz and fear-mongering headlines.

SKYGGE, “Magic Man” (Lyrics Video) composed, in part, by AI for the album “Hello World.”

When I ask Pachet whether he thinks young people will care about whether a computer helped write a song, he agrees with Cope. “Millennials don’t listen to music the same way we did 20 or 30 years ago. It’s definitely not easy to characterize, but things have changed and you can see it by looking at how people listen to music,” he says. Pachet goes on: “There is so much more music available now than before. People tend to skip a lot, they listen for 10 seconds and then very quickly [decide if they like it or not]. That’s a new behavior that did not exist before the internet.”


If young people are listening to music in a kind of ruthless, speed-dating way—trying to eliminate the songs that aren’t bops from the songs that are as quickly as possible, as if to better curate and maximize their own listening experience—then maybe songs written with the help of AI can sneak right on in there.

One way to smooth the emergence of AI as a songwriting companion into the market is to frame AI instead as just another musical instrument, like the piano or the synthesizer. It’s a handy bit of rhetoric for any bullish AI enthusiast: No one is arguing that drummers have been put out of business by the widespread use of drum machines in popular music. Casting AI in a similar light may help reduce anxiety about job-snatching robo-songwriters. Some are already arguing this; Pachet tells me, when I ask how AI might appear in song credits, that “Flow Machines was just a tool. You never credit the tool, right? Otherwise, many songs would be credited with guitar or vocoder or trumpet or piano or something. So you really need to see this as a tool.”


For what it’s worth, tech companies like Google do seem to be focusing on the “tool” part; earlier this month, the Google Magenta team (which researches how AI can help increase human creativity) showed off something they’re calling NSynth Super, a touchpad that generates completely new sounds based on two different ones. (Imagine hearing something that sounds halfway between a trumpet and a harmonica.) When I spoke to Jesse Engel, a research engineer at Magenta, he too compared what AI can do for songwriters to what instruments have historically done. He talks about how guitar amps were originally just meant to amplify the sound of guitars, and how using them to add distortion to guitar-playing was the happy result of people messing around with them. One of the current goals of Magenta, he said, is “to have models rapidly adapt to the whims of creative mis-users.”
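Under the hood, NSynth encodes each sound into a latent vector with a neural autoencoder and decodes a blend of the two; the blending step itself is plain linear interpolation. Here is a sketch of just that step, using made-up three-dimensional vectors in place of NSynth's real embeddings:

```python
import numpy as np

# Made-up latent embeddings standing in for what NSynth's encoder would
# produce from real trumpet and harmonica recordings.
trumpet = np.array([0.8, 0.1, 0.3])
harmonica = np.array([0.2, 0.9, 0.5])

def interpolate(a, b, t):
    """Blend two latent vectors; t=0 is all a, t=1 is all b."""
    return (1 - t) * a + t * b

# Halfway between the two. NSynth's decoder would turn this vector into
# a genuinely new sound, not a simple audio crossfade of the recordings.
print(interpolate(trumpet, harmonica, 0.5))  # [0.5 0.5 0.4]
```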

If they can get their tools into the hands of enough people, they might succeed.