On a recent trip to the local botanical gardens, I spotted a tall, striking purple flower I’d never noticed before. I tried to Google it, but I didn’t know quite what to ask. “Purple flower” brought me pictures of narcissus and freesia, orchids and primrose, gladiolus and morning glory. None of them were the flower I’d seen.
But thanks to artificial intelligence, curious amateur naturalists like me now have better ways to identify the nature around us. Several new sites and apps use AI technology to put names to photographs.
iNaturalist.org is one of these sites. Founded in 2008, it has until now been purely a crowdsourcing site: users post a picture of a plant or animal, and a community of scientists and naturalists identifies it. Its mission is to connect experts with amateur “citizen scientists,” getting people excited about plants and wildlife while gathering data that could help professional scientists monitor changes in biodiversity or even discover new species.
This month, iNaturalist plans to launch an app that uses AI to identify plants and animals down to the species level. The app takes advantage of so-called “deep learning,” using artificial neural networks that allow computers to learn as humans do, so their capabilities can advance over time.
“We’re hopeful this will engage a whole new group of citizen scientists,” says Scott Loarie, co-director of iNaturalist.
The app is trained by being fed labeled images from iNaturalist’s massive database of “research grade” observations—observations that have been verified by the site’s community of experts. Once the model has been trained on enough labeled images, it can begin to identify unlabeled ones. Currently, iNaturalist is able to add a new species to the model every 1.7 hours. The more images uploaded by users and identified by experts, the better.
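The pipeline described above can be sketched in a few lines: only community-verified observations become labeled training examples. This is a toy illustration; the field names, record format, and `build_training_set` helper are assumptions for demonstration, not iNaturalist’s actual schema.

```python
# Toy sketch: turn verified ("research grade") observations into
# labeled training examples. Record fields here are illustrative
# assumptions, not iNaturalist's real data model.

def build_training_set(observations):
    """Keep verified observations and map each species to an integer label."""
    verified = [o for o in observations if o["quality"] == "research_grade"]
    species = sorted({o["species"] for o in verified})
    label_of = {name: i for i, name in enumerate(species)}
    # Each training example pairs an image reference with a numeric class label.
    return [(o["image"], label_of[o["species"]]) for o in verified], label_of

obs = [
    {"image": "img1.jpg", "species": "Fratercula arctica", "quality": "research_grade"},
    {"image": "img2.jpg", "species": "Xenopus laevis", "quality": "needs_id"},
    {"image": "img3.jpg", "species": "Fratercula arctica", "quality": "research_grade"},
]
examples, label_of = build_training_set(obs)
print(len(examples))  # 2: the unverified observation is excluded
```

As more observations are verified by experts, they flow into the training set, which is why the model improves as the community grows.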
“The more stuff we get, the more trained up the model will be,” Loarie says.
The iNaturalist team wants the model to always be accurate, even if that means sacrificing some precision. Right now the model tries to give a confident response about the animal’s genus, then a more cautious response about the species, offering the top 10 possibilities. It is currently correct about the genus 86 percent of the time, and includes the correct species in its top 10 results 77 percent of the time. These numbers should improve as the model continues to be trained.
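One common way to implement this “confident about the genus, cautious about the species” behavior is to sum the model’s species-level probabilities within each genus and only commit to the genus when that total clears a threshold. The sketch below assumes this approach; the probabilities, species names, and 0.9 threshold are illustrative, not iNaturalist’s actual values.

```python
# Sketch of a confident-genus / cautious-species output strategy.
# Threshold and probabilities are illustrative assumptions.

from collections import defaultdict

def summarize(species_probs, genus_threshold=0.9, top_k=10):
    """species_probs maps (genus, species) -> model probability."""
    # Aggregate species probabilities up to the genus level.
    genus_probs = defaultdict(float)
    for (genus, _), p in species_probs.items():
        genus_probs[genus] += p

    best_genus, genus_conf = max(genus_probs.items(), key=lambda kv: kv[1])

    # Commit to the genus only when confident; always hedge at the
    # species level by returning a ranked top-k list instead.
    top_species = sorted(species_probs, key=species_probs.get, reverse=True)[:top_k]
    return (best_genus if genus_conf >= genus_threshold else None), top_species

probs = {
    ("Fratercula", "arctica"): 0.55,   # Atlantic puffin
    ("Fratercula", "cirrhata"): 0.40,  # tufted puffin
    ("Alca", "torda"): 0.05,           # razorbill
}
genus, ranked = summarize(probs)
print(genus)      # "Fratercula": the genus total (0.95) clears the threshold
print(ranked[0])  # ("Fratercula", "arctica") tops the species list
```

This explains the behavior in the demos below: the model can be “pretty sure” of a genus even when no single species dominates its predictions.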
Playing around with a demo version, I entered a picture of a puffin perched on a rock. “We’re pretty sure this is in the genus Puffins,” it said, giving the correct species—Atlantic puffin—as the top suggested result. Then I entered a picture of an African clawed frog. “We’re pretty sure this is in the genus Western spadefoot toads,” it told me, offering African clawed frog as among its top 10 results.
The AI was “not confident enough to make a recommendation” about a picture of my son, but suggested he might be a northern leopard frog, a garden snail or a gopher snake, among other, non-human creatures. As all of these are spotted, I realized the computer vision was seeing the polka-dot background of my son’s highchair and misidentifying it as part of the specimen. So I cropped the picture until only his face was visible and pressed “classify.” “We’re pretty sure this is in the suborder Lizards,” the AI responded. Either my baby looks like a lizard or—the real answer, I presume—this shows that the model only recognizes what it’s been fed. And no one is feeding it pictures of humans, for obvious reasons.
iNaturalist hopes the app will take pressure off its community of experts, and allow a larger community of observers, such as groups of schoolchildren, to participate. It could also allow for “camera trapping”—sending in streams of images from a camera trap, which takes a picture when it’s triggered by motion. iNaturalist has discouraged camera trapping, as it floods the site with huge numbers of images that may or may not actually need expert identification (some images will be empty, while others will catch common animals like squirrels that the camera’s owner could easily identify). But with the AI, that wouldn’t be a problem. iNaturalist also hopes the new technology will engage a new community of users, including people who might have an interest in nature but wouldn’t be willing to wait several days for an identification under the crowdsourced model.
Quick species identification could also be useful in other situations, such as law enforcement.
“Let’s say TSA workers open a suitcase and someone’s got geckos,” says Loarie. “They need to know whether to arrest someone or not.”
In this case, the AI could tell the TSA agents what type of gecko they were looking at, which could aid in an investigation.
iNaturalist is not the only site taking advantage of computer vision to engage citizen scientists. Cornell’s Merlin Bird ID app uses AI to identify more than 750 North American birds, once you answer a few simple questions about the size and color of the bird you saw. Pl@ntNet does the same for plants, after you tell it which part of the plant it’s looking at (flower, fruit, etc.).
This is all part of a larger wave of interest in using AI to identify images. There are AI programs that can identify objects from drawings (even bad ones). AIs can look at paintings and identify artists and genres. Many experts think computer vision will play a huge role in healthcare, making it easier to identify, for example, skin cancers. Car manufacturers use computer vision to teach cars to identify and avoid hitting pedestrians. A recent episode of the comedy Silicon Valley featured a computer vision app for identifying food. But since its creator only trained it on hot dogs—since training a neural network requires countless hours of human labor—it could only distinguish between hot dogs and “not hot dogs.”
This question of human labor is important. Massive databases of correctly labeled images are crucial to training AIs, and can be hard to come by. iNaturalist, as a longtime crowdsourced site, already has exactly this kind of database, which is why its model has been advancing so quickly, Loarie says. Other sites and apps have to find their data elsewhere, often from academic images.
“It’s still early days, but I guarantee in the next year you’re going to see a proliferation of these kinds of apps,” Loarie says.