Why people of color probably shouldn’t get the new Amazon Halo


Amazon's new Halo fitness tracker comes with the Tone feature, which records your voice throughout the day and analyzes it so you can learn how you sound to others. But Amazon doesn't have the best data protection record, its AI fares poorly with people of color, and the company has a history of handing this kind of data to police -- all strong reasons why people of color probably shouldn't buy the Halo.

In the Canadian sci-fi TV show Continuum, an evil corporation releases a fitness tracker called Halo. The device is ostensibly meant to give people their “entire health and well-being team” – a family doctor, therapist, personal trainer, nutritionist and even physiotherapist – in the form of a consumer-friendly, wearable device.


Unfortunately, the Halo tracker, which was "always watching, always listening, always with you," didn’t really deliver on its promises. In reality, it fed endless data to the tech megacorporation SadTech, which ended up helping to establish a corporation-controlled government in North America.

So it may come as a surprise that Amazon, the actual, real-world tech megacorporation, is bringing out its own new fitness tracker, which is also called Halo.

Now, while this Halo will likely not cause as much doom as Continuum’s Halo, there are a few reasons why you should probably avoid buying it. And, unfortunately, given tech’s pretty horrible track record with people of color and Amazon’s too-close-for-comfort relationship with law enforcement agencies, it’s an especially bad idea for people of color to get Amazon’s new Halo.

"Amazon’s entire existence is a red flag...Amazon is creating increasingly encroaching, ubiquitous technologies, constantly accessing and collecting more data."

Ali Alkhatib, Research Fellow at the Center for Applied Data Ethics

I spoke with Ali Alkhatib, Research Fellow at the Center for Applied Data Ethics, about Halo and Amazon in general. For Alkhatib, Amazon’s entire existence is a red flag. He told CyberNews: “The relationship that Amazon has with society is profoundly broken, and they don’t interrogate that at all. Amazon is creating increasingly encroaching, ubiquitous technologies, constantly accessing and collecting more data. It’s frustrating that they don’t worry about leveraging their customer’s data, when they sell these devices to them, and then giving that data to law enforcement.”

“This,” Alkhatib says, “presents a bigger threat for POC than for the average white male.” But, he would like to emphasize, “the harm is still profound for any person. The white person is still becoming part of the system that undermines their privacy.”

Abusing and misusing Halo’s Tone analysis


There’s a lot to be wary of when it comes to Amazon in general, but one of the bigger concerns comes from the eye-catching Tone feature: according to its press release, Tone is a voice analysis tool that uses machine learning “to analyze energy and positivity in a customer’s voice so they can better understand how they may sound to others, helping improve their communication and relationships.”

Essentially, Tone is a mood-analysis tool: it analyzes recordings of your voice to help you understand which situations cause you the most stress, or to give you a general overview of your emotional well-being.

Once you set up your Tone voice profile, it will “run passively and intermittently in the background so you don’t have to think about it. Throughout the day, it will take short samples of your speech and analyze the acoustic characteristics that represent how you sound to the people you interact with.” 

This naturally raises a few concerns. The main takeaway is that, once you’ve turned the Tone feature on, it will be recording your voice pretty much continuously. And it will record not only your voice, but also the voices of the person or people you are speaking to, “throughout the day.”
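Amazon hasn't published how Tone actually works under the hood, but the description above maps onto a fairly standard pattern in speech analysis: take a short sample, reduce it to acoustic features like energy and pitch, and hand those to a classifier. Here is a minimal, purely illustrative sketch in Python using the open-source librosa library (the feature choices and the synthetic "speech sample" are assumptions for illustration, not Amazon's actual pipeline):

```python
# Illustrative sketch only: Amazon has not published how Tone actually works.
# It shows the general shape of acoustic "tone" analysis: take a short
# speech sample and reduce it to a handful of numeric features.
import numpy as np
import librosa

def analyze_sample(y: np.ndarray, sr: int) -> dict:
    """Reduce a short speech clip to rough energy and pitch features."""
    # Loudness proxy: root-mean-square energy per frame
    rms = librosa.feature.rms(y=y)[0]

    # Pitch proxy: fundamental frequency estimated with pYIN
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )

    return {
        "mean_energy": float(np.mean(rms)),
        "energy_variability": float(np.std(rms)),
        "mean_pitch_hz": float(np.nanmean(f0)),    # NaN frames are unvoiced
        "pitch_variability": float(np.nanstd(f0)),
        "fraction_voiced": float(np.mean(voiced_flag)),
    }

# In place of a real recording, a few seconds of synthetic audio:
sr = 16000
y = librosa.tone(220, sr=sr, duration=3.0)  # stand-in for a speech sample
print(analyze_sample(y, sr))

# A classifier would then map features like these to labels such as
# "energetic" or "low positivity" -- at which point the audio itself
# is no longer needed.
```

The key point is the output: once a speech sample has been boiled down to a handful of numbers or labels, the original recording isn't needed anymore.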

Amazon’s privacy page about Tone promises that these recordings are essentially safe:

“Your recordings from Tone speech samples only exist on your band or smartphone temporarily before being processed on your phone and automatically deleted. They never get sent to the cloud, and can’t even be played back or downloaded, so no one ever hears them.”

That is most likely true. The recordings aren’t being sent to anyone, and no one will listen to them. But let’s recall how Amazon processes Echo voice commands: it automatically transcribes conversations and keeps those transcripts on its servers. Upon request, users can have the recordings deleted, but the transcripts remain.

Given the current wording, it may very well be true that Amazon deletes the audio samples of your voice, while at the same time transcribing them and keeping those transcripts for machine learning, product improvement, or its growing ad business.
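To make concrete what that wording would permit, here is a hypothetical sketch. Every function and field in it is invented for illustration and is not a description of Amazon's actual systems; it simply shows how "the recording is deleted" and "everything derived from it is kept" can both be true at once:

```python
# Hypothetical sketch: every name and field here is invented for illustration,
# not taken from Amazon's systems. It shows how an audio file can be deleted
# while the text and scores derived from it are kept.
import os
import tempfile
import time

def transcribe(audio_path: str) -> str:
    """Placeholder for an on-device speech-to-text step."""
    return "transcript of what was said"

def score_tone(audio_path: str) -> dict:
    """Placeholder for the kind of acoustic analysis sketched earlier."""
    return {"energy": 0.42, "positivity": 0.17}

retained_records = []  # what survives after the audio itself is gone

def process_sample(audio_path: str) -> None:
    record = {
        "captured_at": time.time(),
        "transcript": transcribe(audio_path),  # derived text
        "tone": score_tone(audio_path),        # derived scores
    }
    os.remove(audio_path)            # the recording really is deleted...
    retained_records.append(record)  # ...but everything derived from it remains

# Demo with a throwaway "recording"
sample = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
sample.close()
process_sample(sample.name)
print(retained_records)  # transcript and tone scores persist; the .wav does not
```

Whether Amazon actually does anything like this with Tone is unknown; the point is that its privacy wording doesn't rule it out.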

Alkhatib told CyberNews that this kind of feature-set is “intentionally abstracted by the designers of these systems. It’s valuable for them to say that for Halo, and there's nothing that prevents them from selling this data to anyone they want to.” 

Amazon’s data privacy promises also say nothing about the Tone analysis itself: your recordings don’t get sent anywhere, but what about the analysis of those recordings, and the context and metadata? Your happiness, sadness, stress, anxiety – perhaps aggressiveness? Aggressiveness at a particular time, on a particular day, around a particularly controversial event?

Ali Alkhatib tweet screenshot

Amazon and the police

We don’t need to speculate much to understand the worst of what Amazon can do with its Halo data.

Let’s simply look at the mess of Ring, which Amazon bought in early 2018, and whose data Amazon then made available to various law enforcement agencies through cozy partnerships. This inevitably introduced a whole host of problems: issues of transparency, city money subsidizing private products, increased surveillance, and many more.

This is one point that really frustrates Alkhatib. He told CyberNews, “Using data with absolutely no checks on what they do, and no laws, for how they use our data... this is screwed up. This entire system shouldn’t be legal, and whatever we need to write into the law to codify it, we just need to do it. Police accessing this data is the major problem, and technically, according to them, this isn’t my data, it’s Amazon’s.”

Now, there are two general points to emphasize here. First, in case you aren’t aware, police and people of color don’t exactly have a relationship built on trust, mostly because the police can’t seem to stop disproportionately targeting people of color.

Secondly, AI and people of color don’t have much of a trusting relationship either. AI is 5-10 times worse at recognizing black people than their white counterparts. Even Google had its own slip-up when its AI misidentified two African Americans as gorillas. Its workaround was not to fix the AI, but to simply stop it from recognizing gorillas altogether.

That’s not even to mention the problem of the pseudo-science known as “predictive policing,” which hopes to use modeling to identify future crime hotspots. But how can you discuss policing, especially in America, without discussing the history and the context – the sociological and ethical aspects – that lead to crime in the first place? 

As one Twitter user opined: “let's find out who we've been criminalizing, and then we can send more cop cars over there.”

Abeba Birhane tweet screenshot

And, by the way, speaking of the hot AI of the year, facial recognition: Detroit’s police department has admitted that its facial recognition software misidentifies suspects 96% of the time.


AI’s failure at recognizing marginalized groups

I asked Alkhatib why AI was so bad at working with non-white people. One reason seems to be a technical failure that presents a significant challenge at the moment: an inability to recognize nuance.

AI systems are created in training environments, where they are trained and tested and the environments are refined. But that raises a technical question: why do AI systems fare so terribly in novel, untrained situations?

“Essentially,” Alkhatib mentions, “people can recognize a new situation, whereas algorithms don’t have that ability. This is an inherent issue with AI – the algorithm can’t reflect like humans can, and they can’t grow or develop as needed.”

But the other reason reflects the creation of those training environments themselves: the bias and lack of diversity of the creators. 

“Race representation is terrible [in these fields],” Alkhatib tells me. “It is extremely white, there are so few people of color. And that’s a reflection of how deeply racist this field is.” But it goes even beyond that. 

It’s not only the creation of those training environments that is affected by this lack of diversity, or abject whiteness. It’s also about the ethics of creating such programs – race analysis, or crime or gender prediction – and feeding them to prejudiced police forces, even to this day:

Vinay Prabhu tweet screenshot

“It’s difficult, but the effect is that a system is created where people are not really questioning the systems and why these systems are being built in the first place,” Alkhatib told me. “And that could be due to a lack of inclusion.”

Putting all the pieces together


As a fellow brother of color, speaking to other people of color, I hope we can come to some sort of shared understanding based on the points raised above:

  1. We don’t have a good history with law enforcement, which in actuality means that they, in their attitudes and practices, don’t have a good history with us.
  2. We don’t have a good history with the AI that powers devices like Amazon’s new Halo, which provide data to law enforcement agencies that, in turn, disproportionately target us.
  3. The effect of these biased algorithms, unchecked data collection, and police cooperation is a general race-based distress with far-reaching implications: wrongful arrests, intimidation, depression, and even premature aging.

Look, for people of color, it’s dire – but in part this is a voluntary direness. And it is perhaps a direct or indirect path along which capitalism pushes us.

To steal a quote from Indian activist Arundhati Roy: “If we were sleepwalking into a surveillance state, now we are running toward it because of the fear that is being cultivated.” While she was talking about responses to the coronavirus, we can expand that to the capitalist societies that use fear – something as general as fear of crime, or as minuscule as FOMO – to get people to capitulate to a surveillance society.

But really – why feed into it? Why feed it? Until the Halo, or any newfangled device that relies wholly or in part on AI for recognition or analysis, has been carefully studied and itself analyzed, I’d recommend that people of color simply pass on it.