Google’s new AI upgrade lets scientists use sound to better protect endangered species

(Image: a row of animals that Google's Perch AI helps conserve | Image credit: Google)

What you need to know

  • Perch 2.0 is an upgraded open-source model from DeepMind that listens to nature and helps scientists track wildlife through sound.
  • It can sift through endless audio recordings—birds, frogs, whales, even background human noise—and make sense of it faster than humans ever could.
  • The first version already uncovered hidden bird populations and sped up conservation work in places like Hawaiʻi.
  • The new version handles messy, overlapping sounds better and can even estimate numbers of animals calling, not just identify them.

DeepMind has introduced Perch 2.0, an open-source AI tool that helps researchers track wildlife by studying sounds from nature.

For years, conservationists have relied on audio recorders to capture the calls and songs of wildlife in forests and oceans, building huge sound libraries that reveal which species are present and how ecosystems are holding up.

The challenge, however, has always been the sheer volume of recordings. Sorting through millions of hours of audio by hand is not only tedious but nearly impossible at scale. That’s where Perch comes in.

(Video: Can AI help to save endangered birds? - YouTube)

The original Perch model, launched in 2023, quickly found its way into tools like Cornell's BirdNET Analyzer and has already been used by groups such as BirdLife Australia and the Australian Acoustic Observatory. It has helped track endangered species, including uncovering a hidden population of the rarely seen Plains Wanderer, and sped up conservation work in Hawaiʻi by recognizing honeycreeper calls in minutes instead of hours, Google's DeepMind team said in a blog post.

With more than a quarter of a million downloads, the tool proved there’s real demand for AI that can listen.

Building on that work, Perch 2.0 expands its reach. It’s trained on a larger dataset and can now identify sounds from mammals, amphibians, and even human-made noise.

Eavesdropping on ecosystems

The updated model works well in all kinds of environments, from jungles to underwater habitats, and does a better job sorting through messy, overlapping sounds. Beyond simply identifying species, the AI can help estimate how many animals are calling, or even track trends such as birth rates over time.

Another upgrade is what DeepMind calls “agile modeling.” Instead of requiring massive labeled datasets, scientists can now feed the system just one example of a rare call — say, a juvenile bird or a specific frog species — and Perch will search through its database for similar sounds. This allows researchers to build accurate new classifiers in less than an hour, a process that previously took weeks.
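The blog post doesn't show Perch's actual API, but the core idea behind agile modeling is embedding similarity search: every audio clip is turned into a vector, and one labeled example is enough to rank the whole library by closeness to it. The sketch below illustrates that idea with toy vectors and cosine similarity; the function name `find_similar_calls` and all of the data are hypothetical, not part of Perch.

```python
import numpy as np

def find_similar_calls(query_embedding, library_embeddings, top_k=5):
    """Rank library clips by cosine similarity to a single query call."""
    q = query_embedding / np.linalg.norm(query_embedding)
    lib = library_embeddings / np.linalg.norm(
        library_embeddings, axis=1, keepdims=True
    )
    scores = lib @ q                      # cosine similarity per clip
    top = np.argsort(scores)[::-1][:top_k]
    return top, scores[top]

# Toy data: one "query" call and a small library of clip embeddings.
rng = np.random.default_rng(0)
query = rng.normal(size=8)
library = rng.normal(size=(20, 8))
library[7] = query + 0.01 * rng.normal(size=8)  # plant a near-duplicate call

indices, scores = find_similar_calls(query, library, top_k=3)
print(indices[0])  # the planted near-duplicate ranks first
```

In practice, the researcher reviews the top-ranked clips, confirms the true matches, and uses those confirmations as training labels for a new classifier, which is why a job that once took weeks of manual labeling can now finish in under an hour.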

For conservation projects, these changes mean researchers can spend more time focusing on action rather than slogging through recordings. By putting powerful listening tools directly into the hands of scientists, the model could become an essential part of protecting the planet’s most vulnerable wildlife.

Jay Bonggolto
News Writer & Reviewer

Jay Bonggolto always has a nose for news. He has been writing about consumer tech and apps for as long as he can remember, and he has used a variety of Android phones since falling in love with Jelly Bean. Send him a direct message via Twitter or LinkedIn.
