

Australia wins first AI ‘Eurovision Song Contest’ by sampling koalas and kookaburras


A team of programmers and songwriters from Australia has won the inaugural (and unofficial) AI ‘Eurovision Song Contest’, using a neural network trained on noises made by koalas, kookaburras, and Tasmanian devils to help score its winning entry.

The group, named Uncanny Valley, said their song was a response to the bushfire season that began ravaging Australia in June 2019. Scientists estimated that around a billion animals were killed by the fires (a figure that excludes insects, fish, frogs, and bats, but includes reptiles, birds, and mammals — including those sampled for the song).

The track, titled “Beautiful the World,” falls into the grand tradition of saccharine and zany Eurovision pop that so often shows up alongside raging death metal and techno remixes of European folk ballads. It includes the lyrics “Flying in fear but love keeps on coming (flying, flying) / Dreams still live on the wings of happiness (dreams still)” and recasts the devastating bushfires as “vivid candles of hope.”

The real Eurovision Song Contest has been held every year since 1956, and Australia has participated since 2015 (partly because Australians love it so much). The AI version, though, was cooked up by Dutch broadcaster VPRO after the 2020 edition was canceled due to the coronavirus pandemic. Thirteen teams entered songs, which were live-streamed earlier this week.

The winners of the AI Eurovision Song Contest, based on a combination of audience votes and judges’ scores.
Image: VPRO

Although it was dubbed an “AI song contest,” computers weren’t always calling the shots. As is often the case with AI music, machine learning was used to generate some elements of the songs, but it was usually up to humans to arrange and perform the final tracks.

In the case of “Beautiful the World,” it seems that AI was mainly used to write the melody and lyrics, with the samples from Australian fauna used to craft a synth instrument. The final performance, though, was firmly down to humans. (You can read more about the technical aspects of the song in this blog post by team member Sandra Uitdenbogerd.)
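Her post covers the team’s actual tools; purely as a hypothetical sketch of what turning animal calls into a playable synth instrument can look like, the snippet below loads a single recording and re-pitches it so it can be used like a note on a keyboard. The file name, melody, and choice of libraries are assumptions for illustration, not details from Uncanny Valley’s pipeline.

```python
# Hypothetical sketch: building a crude sampler instrument from an animal call.
# File names, the melody, and the libraries are assumptions, not the team's pipeline.
import numpy as np
import librosa
import soundfile as sf

SR = 22050  # working sample rate

def make_instrument(sample_path: str):
    """Load one animal call and return a function that plays it at any pitch."""
    call, _ = librosa.load(sample_path, sr=SR)

    def play(semitones: float) -> np.ndarray:
        # Re-pitch the call so it behaves like a note on a keyboard.
        return librosa.effects.pitch_shift(call, sr=SR, n_steps=semitones)

    return play

if __name__ == "__main__":
    koala = make_instrument("koala_call.wav")   # hypothetical recording
    melody = [0, 4, 7, 12, 7, 4, 0]             # toy melody as semitone offsets
    track = np.concatenate([koala(step) for step in melody])
    sf.write("koala_synth_demo.wav", track, SR)
```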

The AI Eurovision audience wasn’t just in the mood for poptimism, though. Second place in the contest went to Germany’s entry, an eerie song titled “I’ll Marry You, Punk Come” by Team Dadabots x Portrait XO.

For the lyrics, the team used an AI trained on 1950s a cappellas to generate a stream of babble, which they then combed for recognizable words. The music was generated using a collection of neural networks trained on everything from pop choruses to baroque harmonies. The resulting track is a melange of different styles, with one team member comparing the curation process to “hunting and gathering.”

If the results of the contest show anything, though, it’s that artificial intelligence is best used as a partner in music-making, not as the lead. The last-place entry was “Painfulwords” by Team New Piano from Switzerland, which let computers take charge.

“Faced with the choice between making an accessible song with quite a few human interventions, or experimenting with as much AI as possible and then delivering a worse-sounding song, we chose the latter,” said the two data scientists responsible for the track. The results speak for themselves.


Algorithm that determines school exam results risks ‘baking in inequality’


An algorithm used to calculate exam results in England risks unfairly punishing poorer pupils, politicians have warned.

The system was introduced when school exams were cancelled due to the COVID-19 pandemic. Teachers were instead asked to hand over their predicted grades for each student to exam regulators. The algorithm then adjusts those estimates by comparing them to the school’s past results.

The approach aims to moderate teacher predictions that are overly generous. But critics fear that basing results on a school’s past performance rather than a student’s work will unfairly penalise children from disadvantaged backgrounds.
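The regulator’s actual standardisation model is more involved than the one-line description above, but the critics’ worry can be made concrete with a toy sketch of the general idea: rank pupils by their teacher-predicted grade, then re-grade them so the cohort matches the school’s historical distribution. Every name, signature, and number below is invented for illustration; this is not Ofqual’s model.

```python
# Toy illustration of grade moderation by historical distribution.
# Invented for illustration; not the regulator's actual model.
import numpy as np

def moderate_grades(predicted: list[int], historical: list[int]) -> list[int]:
    """Rank this year's pupils by teacher-predicted grade, then re-grade them so
    the cohort's distribution matches the school's past results."""
    order = np.argsort(predicted)[::-1]                 # best-predicted pupils first
    historic_sorted = sorted(historical, reverse=True)  # best past grades first

    # Sample the historical distribution at each pupil's rank position.
    positions = np.linspace(0, len(historic_sorted) - 1, len(predicted)).astype(int)
    moderated = [0] * len(predicted)
    for rank, pupil in enumerate(order):
        moderated[pupil] = historic_sorted[positions[rank]]
    return moderated

# A school whose past cohorts rarely reached top grades pulls this year's
# predictions down, regardless of any individual pupil's work:
print(moderate_grades(predicted=[9, 8, 8, 7, 6], historical=[7, 6, 6, 5, 4, 4]))
# -> [7, 6, 6, 5, 4]  (every estimate is lowered)
```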

In Scotland, a similar system downgraded the exam results of pupils in deprived areas by more than twice the rate of students in the country’s richest regions.


“A shameful attainment gap exists in Scotland, and the Scottish government chose to add that to the algorithm rather than address it,” said Ian Murray, the shadow Scottish secretary, in a blog post yesterday.

In total, the system reduced around 125,000 estimated grades — a quarter of all results — while only about 9,000 were pushed upwards. Scottish Labour’s education spokesperson Iain Gray accused the exam authority of treating teachers’ judgement “with contempt.”

South of the border

Labour fears that similar issues will arise when English exam results are released next week. Kate Green, the party’s shadow education secretary, has asked the government for assurances that the system won’t “exacerbate existing inequalities.”

“Young people deserve to have their hard work assessed on merit, but the system risks baking in inequality and doing most harm to students from disadvantaged backgrounds, those from ethnic minority groups, and those with special educational needs and disabilities,” Green said.

England’s exam watchdog today announced that schools will be allowed to appeal the results. But the regulator expects few of them to succeed.

For students, the system is exacerbating the stress of waiting for exam results that can have a big impact on their life chances.

Published August 7, 2020 — 17:33 UTC

Thomas Macaulay


Algorithm reveals some of us have DNA from a mystery ancestor


Interbreeding gets a bad rap these days, but early humans loved a bit of interspecies action. Genes from fossils show our ancestors had entanglements with both Neanderthals and an ancient group called Denisovans. But new research suggests they also got it on with another mystery relative, whose DNA still exists in people today.

Scientists from Cornell University and Cold Spring Harbor Laboratory made the discovery by developing an algorithm that analyzes genomes. The software can identify segments of DNA that came from other species — even if it’s from an unknown source.

The researchers applied the algorithm to genomes from two Neanderthals, a Denisovan, and two African humans. It found that 3% of the Neanderthal genome came from ancient humans, through interbreeding that occurred between 200,000 and 300,000 years ago.

More intriguingly, it also revealed that 1% of the Denisovan genome likely came from another unknown species. About 15% of that material may have been passed down to humans who are still alive today.
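The team’s actual method, published in PLOS Genetics, reconstructs full ancestral recombination graphs, which is far beyond a short snippet. Purely to illustrate the intuition behind spotting DNA from an unknown source, here is a hypothetical toy: scan a genome in fixed windows and flag stretches that are unusually divergent from every known reference. The function names and thresholds are invented.

```python
# Toy illustration only; the Cornell/CSHL algorithm works on ancestral
# recombination graphs, not simple window scans like this.
def divergence(a: str, b: str) -> float:
    """Fraction of mismatching sites between two aligned sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def flag_unknown_segments(sample: str, references: dict[str, str],
                          window: int = 50, factor: float = 2.0):
    """Return windows whose divergence from *every* reference exceeds `factor`
    times the genome-wide divergence from that reference, hinting that the
    stretch came from a source not in the reference panel."""
    baseline = {name: divergence(sample, ref) for name, ref in references.items()}
    hits = []
    for start in range(0, len(sample) - window + 1, window):
        s = slice(start, start + window)
        if all(divergence(sample[s], ref[s]) > factor * baseline[name]
               for name, ref in references.items()):
            hits.append((start, start + window))
    return hits
```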


The researchers still aren’t sure who our mystery relative is. But they suspect it may be Homo erectus, an extinct species that roamed the earth millions of years ago.

Study co-author Adam Siepel says the algorithm can reach further back in time than any computational method he’s ever seen. He believes it could be used to study gene flow in other species that have interbred, such as wolves and dogs.

“What I think is exciting about this work is that it demonstrates what you can learn about deep human history by jointly reconstructing the full evolutionary history of a collection of sequences from both modern humans and archaic hominins,” Siepel said in a statement.

You can check the research out for yourself in the PLOS Genetics journal.

Published August 7, 2020 — 11:57 UTC

Thomas Macaulay


Why ‘human-like’ is a low bar for most AI projects


Show me a human-like machine and I’ll show you a faulty piece of tech. The AI market is expected to eclipse $300 billion by 2025. And the vast majority of the companies trying to cash in on that bonanza are marketing some form of “human-like” AI. Maybe it’s time to reconsider that approach.

The big idea is that human-like AI is an upgrade. Computers compute, but AI can learn. Unfortunately, humans aren’t very good at the kinds of tasks computers excel at, and AI isn’t very good at the kinds of tasks that humans are. That’s why researchers are moving away from development paradigms that focus on imitating human cognition.

A pair of NYU researchers recently took a deep dive into how humans and AI process words and word meaning. Through the study of “psychological semantics,” the duo hoped to explain the shortcomings of machine learning systems in the natural language processing (NLP) domain. According to a study they published on arXiv:

Many AI researchers do not dwell on whether their models are human-like. If someone could develop a highly accurate machine translation system, few would complain that it doesn’t do things the way human translators do.

In the field of translation, humans have various techniques for keeping multiple languages in their heads and fluidly interfacing between them. Machines, on the other hand, don’t need to understand what a word means in order to assign the appropriate translation to it.

This gets tricky when you get closer to human-level accuracy. Translating one, two, and three into Spanish is relatively simple. The machine learns that they are exactly equivalent to uno, dos, and tres, and is likely to get those right 100 percent of the time. But when you add complex concepts, words with more than one meaning, and slang or colloquial speech, things get complicated.
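To make that distinction concrete, here is a deliberately naive lookup-table “translator,” a hypothetical toy that bears no resemblance to how modern neural translation systems actually work. Fixed mappings like one → uno are trivial; a word with several senses leaves the table with no way to choose.

```python
# Toy lookup-table "translator": trivial for fixed mappings, helpless with ambiguity.
# The word list is invented for illustration.
EN_TO_ES = {
    "one": "uno",
    "two": "dos",
    "three": "tres",
    # "bank" could be "banco" (finance) or "orilla" (riverbank); a lookup
    # table has no context with which to choose.
    "bank": ["banco", "orilla"],
}

def translate(word: str) -> str:
    entry = EN_TO_ES.get(word.lower())
    if entry is None:
        return f"<no translation for {word!r}>"
    if isinstance(entry, list):
        return f"<ambiguous: {' / '.join(entry)}>"
    return entry

print([translate(w) for w in ["one", "two", "three", "bank", "moist"]])
# ['uno', 'dos', 'tres', '<ambiguous: banco / orilla>', "<no translation for 'moist'>"]
```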

We start getting into AI’s uncanny valley when developers try to create translation algorithms that can handle anything and everything. Much like taking a few Spanish classes won’t teach a human all the slang they might encounter in Mexico City, AI struggles to keep up with an ever-changing human lexicon.

NLP simply isn’t capable of human-like cognition yet and making it exhibit human-like behavior would be ludicrous – imagine if Google Translate balked at a request because it found the word “moist” distasteful, for example.

This line of thinking isn’t just reserved for NLP. Making AI appear more human-like is merely a design decision for most machine learning projects. As the NYU researchers put it in their study:

One way to think about such progress is merely in terms of engineering: There is a job to be done, and if the system does it well enough, it is successful. Engineering is important, and it can result in better and faster performance and relieve humans of dull labor such as keying in answers or making airline itineraries or buying socks.

From a pure engineering point of view, most human jobs can be broken down into individual tasks that would be better suited for automation than AI, and in cases where neural networks would be necessary – directing traffic in a shipping port, for example – it’s hard to imagine a use-case where a general AI would outperform several narrow, task-specific systems.

Consider self-driving cars. It makes more sense to build a vehicle made up of several systems that work together instead of designing a humanoid robot that can walk up to, unlock, enter, start, and drive a traditional automobile.

Most of the time, when developers claim they’ve created a “human-like” AI, what they mean is that they’ve automated a task that humans are often employed for. Facial recognition software, for example, can replace a human gate guard but it cannot tell you how good the pizza is at the local restaurant down the road.

That means the bar is pretty low for AI when it comes to being “human-like.” Alexa and Siri do a fairly good human imitation. They have names and voices and have been programmed to seem helpful, funny, friendly, and polite.

But there’s no function a smart speaker performs that couldn’t be better handled by a button. If you had infinite space and an infinite attention span, you could use buttons for anything and everything a smart speaker could do. One might say “Play Mariah Carey,” while another says “Tell me a joke.” The point is, Alexa’s about as human-like as a giant remote control.
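Taken literally, the “giant remote control” point is just a mapping from a fixed label to a fixed action. A hypothetical sketch, with made-up commands and responses:

```python
# Toy "remote control": each spoken command is just a button wired to one action.
# Commands and responses are made up for illustration.
BUTTONS = {
    "play mariah carey": lambda: print("Now playing: Mariah Carey"),
    "tell me a joke":    lambda: print("Why did the robot cross the road?"),
}

def press(command: str) -> None:
    action = BUTTONS.get(command.lower())
    action() if action else print("No button for that request.")

press("Play Mariah Carey")
press("Tell me a joke")
```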

AI isn’t like humans. We may be decades or more away from a general AI that can intuit and function at a human level in any domain. Robot butlers are a long way off. For now, the best AI developers can do is imitate human effort, and that’s seldom as useful as simplifying a process to something easily automated.

Published August 6, 2020 — 22:35 UTC
