AI offers a glimpse into a dog's mind


A "mind reading" algorithm reveals dogs to be action-oriented as opposed to their object-obsessed human best friends.

A new study from Emory University shows that dogs would be pretty bad at whodunits: their brains focus on the action they see rather than on who or what is performing it. In this, their minds differ considerably from those of humans, whose brains register objects as readily as actions.

"We humans are very object-oriented," Gregory Berns, professor of psychology at Emory and corresponding author of the study, said in the university's news release on the paper published by the Journal of Visualized Experiments.

Berns said that "there are ten times as many nouns as there are verbs in the English language because we have a particular obsession with naming objects," but dogs had other things to worry about.

"Animals have to be very concerned with things happening in their environment to avoid being eaten or to monitor animals they might want to hunt. Action and movement are paramount," Berns said, noting that it made "perfect sense" that dogs' brains were going to be highly attuned to actions first.

Significant differences between canine and human visual systems also reflect this. While dogs can only see in shades of blue and yellow, they have a slightly higher density of vision receptors designed to detect motion.

"Historically, there hasn't been much overlap in computer science and ecology," said Erin Phillips, lead author of the paper, who worked in Bern's Canine Cognitive Neuroscience Lab. "But machine learning is a growing field that is starting to find broader applications, including in ecology."

"Superstar" dogs

Researchers say that only two out of all the dogs trained for the study had the attention span and temperament for the experiments. Both mixed breeds, Daisy and Bhubo, were dubbed "superstar" dogs for their exceptional concentration.

"They didn't even need treats," Phillips said. The dogs had to lie still for the fMRI scan and watch half an hour-long video without a break, three sessions each. It is unclear whether two humans who underwent the same experiment needed the treats.

In any case, researchers recorded the fMRI neural data for all the subjects as they watched videos filmed from a dog's perspective. These included a lot of sniffing and playing, cars and bikes driving past, a human offering a ball, or a cat walking to a house – scenes interesting enough to keep the dog's attention for an extended period.

The video data was then segmented by time stamps into classifiers: objects such as dogs, cars, humans, or cats, and actions such as sniffing, playing, or eating. The researchers then used a machine-learning algorithm called Ivis to analyze the patterns in the neural data.
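To make the shape of that pipeline concrete, here is a minimal sketch of a decoding setup of this kind. It is not the study's code: the data, array sizes, and labels are invented for illustration, and an off-the-shelf scikit-learn logistic-regression decoder stands in for the Ivis algorithm. The point is only to show how per-time-point labels taken from the video and the corresponding brain responses feed a classifier whose accuracy can then be compared for object versus action labels.

```python
# Minimal sketch of a decoding pipeline of the kind described above (not the
# study's code). The data, sizes, and labels are invented for illustration,
# and a scikit-learn logistic-regression decoder stands in for Ivis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_timepoints, n_voxels = 600, 200                  # hypothetical dimensions
bold = rng.normal(size=(n_timepoints, n_voxels))   # stand-in for fMRI responses

# Hypothetical per-time-point labels, segmented from the video by timestamp.
object_labels = rng.choice(["dog", "car", "human", "cat"], size=n_timepoints)
action_labels = rng.choice(["sniffing", "playing", "eating"], size=n_timepoints)

decoder = LogisticRegression(max_iter=1000)

# Cross-validated decoding accuracy, one model per label type.
object_acc = cross_val_score(decoder, bold, object_labels, cv=5).mean()
action_acc = cross_val_score(decoder, bold, action_labels, cv=5).mean()

print(f"object-decoding accuracy: {object_acc:.2f}")
print(f"action-decoding accuracy: {action_acc:.2f}")
```

With real recordings in place of the random arrays, the object and action accuracies would diverge, and that divergence is the comparison the study reports.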

When applied to the data from the human participants, the model was 99% accurate for both the object and action classifiers. For the canine subjects, it did not work for the object classifiers but was 75% to 88% accurate at decoding the action classifications.

Researchers say that the study offers a "first look" at how the canine mind reconstructs what it sees – even if to a limited degree. "The fact that we can do that is remarkable," Berns said.