AI Composed Music

02/04/20 • 48 min

Creative Next: AI Automation at Work

Musical composition is one of the earliest examples of human art and creativity. Today, new and original music is increasingly being composed by AI. Drew Silverstein, Co-Founder and CEO of Amper Music, joins the show.

Automation of sound and music, in the form of licensing stock and pre-existing recordings, is a decades-old trend that became ubiquitous with the rise of the internet. Now, thanks to machine learning and artificial intelligence, the creation of original music is increasingly being automated. Drew Silverstein, one of the pioneers at the forefront of this trend, joins the Creative Next team to explore these technologies and their impact on work in general and on musicians in particular.

Memorable Quotes

“As our technology evolves, we see AI dramatically decreasing the cost of accomplishing certain tasks and dramatically decreasing the amount of information that any one person needs to know to be successful at that task. And whether it's music, whether it's farming, whether it's creating a script, or whether it's just doing more rote business tasks, and I think what we are going to arrive at in the not too distant future, is a world in which the ability to complete a task is fully democratized and anyone can do nearly anything with the assistance of an AI.”

“The value, then, of our human input is gonna be on the creative input and the creative direction, so that, as people, how can we direct the workforce and the work effort of these machines to do something that's meaningful to us.”

“All you need to know to create unique and professional music tailored to your content are three things: the style of music you want to create, the mood you want to convey and the length of your piece of music and that's all you know. In a matter of seconds, you'll make something brand new.”

“We think our job is just a matter of tasks in a sequence that accomplish something specific that they get a goal done, whereas our career is all about helping others achieve their goals, and the manner in which we do that will change. We used to communicate via written letters only, and then it became telegraphs, and then it became phone calls, and then it became email. Now it's texts. We're still communicating. We're still conveying messages, but how we do that will change. And in the same way, in music, the jobs of the music world that exist today will certainly evolve and be very different in a matter of years, in the same way that they're very different now than they were 10 years ago.”

“So what I would say to those, both coming up in music, and those who are already successful and experienced, is to understand that technologies evolve. The way we do things will change. Be accepting of that. Be on the forefront of the adoption of those new technologies and their tools, but also be mindful at the core value in music. It's not because of the process by which it's made. It's because we're making art and people value art.”

“Whether or not we exist as a company, this is happening. AI music is here.”

“And then we said to ourselves, ‘As composers, we are experts at translating music into emotion and emotion into music.’ And so we suggested, ‘What if we could create a creative AI that gives you the same collaborative experience of working with us, but within the time and economic framework that you need?’”

“And with each evolution in technology, the barriers to expressing oneself creatively through music were dramatically decreased, the time it takes to learn to express oneself and the cost of purchasing the tools to do so. And in that manner, we see AI music and Amper as the next step in this centuries long, if not longer, progression of technological innovation democratizing creative abilities.”

Who You'll Hear

Dirk Knemeyer, Social Futurist and Producer of Creative Next (@dknemeyer)

Jonathan Follett, Writer, Electronic Musician, Emerging Tech Researcher and Producer of Creative Next (@jonfollett)

Drew Silverstein, Co-Founder & CEO, Amper Music

Join The Conversation

Website & Newsletter: www.creativenext.org

Twitter: @GoCreativeNext

Facebook: /GoCreativeNext

Instagram: @GoCreativeNext

Sponsors

GoInvo, A design practice dedicated to innovation in healthcare whose clients are as varied as AstraZeneca, 3M Health Information Services, and the U.S. Department of Health and Human Services. www.goinvo.com

Design Museum Foundation, A new kind of museum, they believe design can change the world. They’re online, nomadic, and focused on making design accessible to everyone. Their mission: bring the transformative power of design everywhere. You can learn about their exhibitions, events, magazine, and more. www.designmuseumfoundation.org

BIF, As a purpose-driven firm, BIF is committed to bringing design strategy where it is needed most - health care, education, and public service - to create value for our most vulnerable populations. www.bif.is

Previous Episode

AI & Audio Engineering

AI is driving innovation in the field of audio production. Jonathan Bailey, the Chief Technology Officer of iZotope, a company pioneering advances with these technologies, talks about the state-of-the-art in audio software.

As recently as 50 years ago, audio production required physical tools such as a soldering iron. With the rise of the personal computer, these technical requirements have disappeared, replaced by software which handles all of the work with bits and bytes. From mixing to sound repair to post-production, machine learning-powered software like that offered by iZotope continues to automate an audio engineer’s workflow and even puts professional audio production within the reach of amateurs.

Memorable Quotes

“There's a macro trend which is actually bigger than sort of machine learning or AI, which is for the professional working in audio the last 50, 60, 70 plus years has been a transition away from the technical problem domain to the creative problem domain.”

“You have a person that has a point of view that is guiding and steering the neural network. Now, there are new network architectures where a neural network can train another neural network, and those are pretty interesting, but there's still someone behind that, right? So they're, currently and for the foreseeable future, there's going to be kind of a guiding hand who's steering and curating what these things are capable of.”

“There's a lot of buzz in the world of technology overall and I think probably a lot of snake oil and misunderstanding of what machine learning really is capable of, but on the other hand, it is a pretty spectacularly powerful technology and set of techniques that we can use in the world of music and audio.”

“By being able to encode some of the best practices and some of the learning that only an audio engineer would have, and it's like your virtual audio engineer buddy, now people can create recordings that will sound good enough that they could be uploaded directly to Spotify or SoundCloud.”

“As a team that works with ML all day long, we are just scratching the surface of what is even possible to do in terms of personalizing the experience to a specific user, in terms of continuing to enhance our algorithms in response to real-world data.”

“We can really see a future where the audio engineer sits down, they've made a recording, it's de-noised, it's cleaned up. Everything works well together, and they can start getting creative, just painting with colors rather than having to fix a bunch of problems in the content that they produce. That's the world that we're trying to push those people towards.”

“The sort of next horizon for both the world at large but definitely for audio is how can we use neural networks to generate content?”

“We have a stream of audio coming into the product and a stream of audio leaving the product, and our job is to process that audio to make it sound better or make it sound more like the user wants us to.”

“We can almost treat that representation like an image, and at each portion of that spectral representation, we can attempt to make a decision, for example, is this voice or not-voice?”

“So we've trained a neural network to be able to make point-to-point decisions, both in time and in frequency.”

“We had an idea that it might be possible to use machine learning to solve this problem.”
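
The per-bin classification Bailey describes above (treating the spectral representation like an image and deciding voice versus not-voice at each point in time and frequency) can be sketched in a few lines of Python. This is a hypothetical illustration only: the file name, toy model architecture, and parameters are assumptions made for this write-up, not iZotope's implementation, and the network shown is untrained.

    # Illustrative sketch: per-bin voice/not-voice classification on a spectrogram.
    # Assumptions (not from the episode): librosa for the STFT, PyTorch for the
    # model, an input file named "mixture.wav", and an invented toy architecture.
    import numpy as np
    import librosa
    import torch
    import torch.nn as nn

    class VoiceMaskNet(nn.Module):
        """Tiny fully convolutional net: one voice probability per time-frequency bin."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, kernel_size=1),  # per-bin logit
            )

        def forward(self, spec):                  # spec: (batch, 1, freq, time)
            return torch.sigmoid(self.net(spec))  # per-bin probability of "voice"

    # Build a log-magnitude spectrogram, i.e. the image-like representation.
    audio, sr = librosa.load("mixture.wav", sr=44100, mono=True)
    magnitude = np.abs(librosa.stft(audio, n_fft=2048, hop_length=512))
    log_mag = np.log1p(magnitude).astype(np.float32)

    # Run the (untrained) model; a real system would train it on labeled examples.
    model = VoiceMaskNet()
    with torch.no_grad():
        x = torch.from_numpy(log_mag)[None, None]  # shape (1, 1, freq_bins, frames)
        voice_prob = model(x)[0, 0].numpy()        # same shape as the spectrogram

    # voice_prob can then act as a soft mask for separating or repairing the audio.

The point here is the shape of the problem rather than the particular layers: every time-frequency cell gets its own decision, which is what lets software isolate, de-noise, or repair a voice inside a mixed recording.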

Who You'll Hear

Dirk Knemeyer, Social Futurist and Producer of Creative Next (@dknemeyer)

Jonathan Follett, Writer, Electronic Musician, Emerging Tech Researcher and Producer of Creative Next (@jonfollett)

Jonathan Bailey, CTO, iZotope

Join The Conversation

Website & Newsletter: www.creativenext.org

Twitter: @GoCreativeNext

Facebook: /GoCreativeNext

Instagram: @GoCreativeNext

Sponsors

GoInvo, A design practice dedicated to innovation in healthcare whose clients are as varied as AstraZeneca, 3M Health Information Services, and the U.S. Department of Health and Human Services. www.goinvo.com

Design Museum Foundation, A new kind of museum, they believe design can change the world. They’re online, nomadic, and focused on making design accessible to everyone. Their mission: bring the transformative power of design everywhere. You can learn about their exhibitions, events, magazine, and more. www.designmuseumfoundation.org

BIF, As a purpose-driven firm, BIF is committed to bringing design strategy where it is needed most - health care, education, and public service - to create value for our most vulnerable populations. www.bif.is

Next Episode

AI & The Art of Music

AI is being used by music groups such as YACHT, whose member Claire Evans is our guest this episode. The band's latest album, Chain Tripping, leveraged machine learning for the music, lyrics, and more.

Artists are making the most of machine learning, using the technology both in the creation of their art and as a cultural touchpoint for expression, exploration, and commentary. While the Internet and more modern emerging technologies have long had a negative impact on musicians and others who create using audio, Claire Evans and her band YACHT - Young Americans Challenging High Technology - are at the vanguard of discovering how these technologies will impact art and music in the future.

Memorable Quotes

“Since we only really learn by doing things and making things, we figured that the most efficient way for us to get a sort of bodily understanding of what the hell AI is and what it's doing and what it means for artists and for all of us was to try to make something with it.”

“I think when we first started this project, we naively thought we could just kind of hand our back catalog to some algorithm, and the algorithm would analyze that and spit out new songs that would be new YACHT songs. And the project, the art, would be about committing to that, whatever it was. As soon as we started working on this, we realized that we're not there yet, thank God. Algorithms can't just spit out pop songs. If they could, the airwaves would be full of them.”

“If you listen to the record it sounds like an interesting experimental rock or pop record. It doesn't sound like generative, you know, plausible nonsense. It sounds like songs, and that's because there was very much a human in the loop. We used the machine learning model to facilitate the process of generating source material, and then from that source material we built songs the way that we would always build songs as humans in a studio playing music.”

“I was projecting my own meaning onto words that I didn't write. And trying to sort of cobble together some kind of meaning to the songs that made it possible for me to sort of perform and convey them with my voice. And so, it's oddly democratizing, because now the fans, the listeners, and the band, are all trying to figure out what it all means at the same time. And we were going to have as many interpretations of what it means as there are people to listen to it.”

“It also has no consideration of the body, right. It doesn't ‘know’ what it feels like to play any of these melodies on the guitar or on the keyboard. If it's physically challenging to do. All it knows is the MIDI data that it's been fed in the training process. So, a lot of these melodies sounded odd, but simple enough to play. But then when we sat down to actually play them, we found that they were extremely challenging, because they forced us to acknowledge the embodied habits that we bring with us as players into the studio.”

“I like to think of some of these machine learning models being like a camera of their individual disciplines. I mean, a text-generating model that's able to make perfect texts. Maybe that just becomes the camera of writing. And we have to completely step outside of our comfort zone to reinvent what writing means in the 21st century. And what an exciting proposition that is for an artist.”

“There's also something really interesting about the reflective quality of AI as it works today. I mean, you build a machine learning model by feeding it lots of information, training data. And in the context of music that information is historic. It's the history of music. It's a corpus of millions of notes, or a corpus of millions of words, of song lyrics from musicians and artists that we love. Or ourselves. So this idea that we could use an emerging technology not only to learn to understand it, but also maybe learn something about ourselves in the process.”

“Maybe in ten years we won't even be making music for people anymore. Maybe we'll just be making music for other AI's to listen to.”

“Probably we'll get to a place, where machine learning models in some combination are able to generate any song that sounds like a song a human wrote. Or a novel that reads like a novel a human wrote.”

“In two or three years, who knows exactly when, we will be at a place where text-generating models are able to generate text that is effectively indistinguishable from human-written text. Arguably we're there already.”

“I think we're in a really interesting moment right now, where some of these tools are just now becoming kind of artist-friendly enough to even be useful or usable to people who don't have hardcore technical backgrounds. And, I think we're going to see an efflorescence of really interesting creative material emerge out of that. And the more sort of democratic these tools get, the more unpredictable it will be.”

“The future doesn't ...
