As she was finishing her undergraduate degree at Princeton University, Rebecca Portnoff was trying to figure out what was next: more school, or going to work for a tech company?
She studied computer science, doing her dissertation on natural language processing. This was over a decade ago, when artificial intelligence wasn't the buzzword it is today, but it still held plenty of promise and excitement for those working with it.
Around the same time, she picked up a copy of Half the Sky, a book by Nicholas Kristof and Sheryl WuDunn about human rights abuses against women around the world. The book, recommended by her sister, ended up leading her to the groundbreaking path she's on today at the nonprofit Thorn.
"I decided that I wanted to make an impact in this space, but didn't really know what that looked like as someone with a machine learning and computer science background, and figured I'd have a better chance of answering that question as a graduate student than working full-time at a tech company," Portnoff tells Fast Company.
Portnoff completed her PhD at U.C. Berkeley, and spent her time learning about the impacts of child sexual abuse and what efforts are in place to combat it.
Fast forward to today: Portnoff is the vice president of data science at Thorn, a nonprofit cofounded by Demi Moore and Ashton Kutcher that uses tech to fight child exploitation. Her team works to identify victims, stop revictimization, and prevent abuse from occurring in the first place using machine learning and artificial intelligence.
For all the ways tech can fight child sexual abuse, it can also amplify it. For example, bad actors can use generative AI to create realistic child sexual abuse material. Portnoff is leading an initiative with the nonprofit All Tech Is Human that works with tech giants to put new safety measures in place to prevent certain misuse cases. She also led Thorn and All Tech Is Human's Safety by Design initiative last year, which encourages tech companies to develop their AI with the intent to combat child sexual abuse from the start, rather than retrofit the tech later once issues arise.
Amazon, Anthropic, Google, Meta, OpenAI, Microsoft, and a handful of other companies have pledged to adopt Safety by Design principles as part of the project. For example, OpenAI integrated part of the tech into its DALL-E 2 generative AI web app.
"As far as where things need to go, or where things will be headed with the Safety by Design work and stopping the misuse of some of this, I know that there are days where I feel really hopeful with how the ecosystem has moved to try to mitigate this," Portnoff says. "And there are also days where I feel like we haven't moved fast enough. At the end of the day there are going to be companies and developers that work to prevent this misuse, and there will be those that don't, and so there's going to have to be legislation that comes into play when it comes to bringing along that full ecosystem."
This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.