How Bumble Is Taking on Misogyny in A.I. and Synthetic Media

By Payton Iheme, Vice President, Head of Global Public Policy

You may have noticed that artificial intelligence (A.I.) and synthetic media have dominated headlines for the past year or so, in industry and consumer press alike. User-friendly tools like chatbots have democratized a space that, until quite recently, seemed more like sci-fi than technology designed to make our daily lives easier. 

For those new to this latest iteration of A.I., your first introduction may well have been a deluge of augmented profile pictures on your friends’ social media accounts—idealized but photo-realistic likenesses. (Synthetic images of women in particular often have exaggerated, unrealistic proportions, which should serve as a red flag as we consider possible uses, and abuses, of this technology.) 

As the internet evolves, so will synthetic media, the catch-all term for artificially created images, text, music, and more. Bumble, as a tech company, is thrilled at the prospect of innovation in this space. At the same time, we’re aware that A.I. is already being used nonconsensually, in deepfake pornography, for example. If women and folks from underrepresented groups don’t have a seat at the table at the genesis of new technologies, then, as the adage goes, we’re on the menu. We must have a voice in the very creation of this emerging media, not just in the conversation surrounding its evolution.

This is why we at Bumble have been working behind the scenes with the nonprofit Partnership on A.I. (PAI), a coalition committed to the responsible use of these technologies. We’re proud to join industry peers including Adobe, BBC, CBC/Radio Canada, D-ID, TikTok, OpenAI, Respeecher, Synthesia, and WITNESS as launch partners in PAI’s Responsible Practices for Synthetic Media: A Framework for Collective Action. We plan to use this framework to help guide our continued efforts to fight nonconsensual intimate image abuse (NCII), in turn contributing to a safer and more equitable internet, which has been part of Bumble’s mission from day one.

Bumble has a track record of combating misogyny, harassment, and toxicity online. We’ve rolled out safety features within the Bumble app itself, like Private Detector, an A.I. tool that helps shield our community from unwanted lewd images. We’ve also worked closely with legislators, successfully backing bills in both the U.S. and U.K. that create a penalty for sending these sorts of images or videos, a practice we call cyberflashing. We’re just as committed to helping make the A.I. and synthetic media spaces safer for women and those from underrepresented groups. And we’re open to collaboration. Have an idea that’ll help support these goals? Let us know at policy@team.bumble.com

To learn more about PAI and this coalition, see here.