
Speech MP3 + Bilingual Transcript: Can Artificial Intelligence Be Biased Too?

Course: TED Audio


July 10, 2022

https://online2.tingclass.net/lesson/shi0529/10000/10387/tedyp174.mp3

The TED Audio column on 聽力課堂 (Tingclass) provides MP3 audio of TED talks together with bilingual English-Chinese transcripts for English learners. This article presents the speech MP3 and bilingual transcript of "Can Artificial Intelligence Be Biased Too?". We hope you enjoy it!

[Speaker] Kriti Sharma

AI scientist Kriti Sharma creates AI technology to help address some of the most serious social challenges of our time, from domestic violence to sexual health and inequality.

[Topic] How to keep human bias out of AI

[Bilingual transcript]

Translated by psjmz mz; proofread by Jin Ge

00:13

How many decisions have been made about you today, or this week or this year, by artificial intelligence? I build AI for a living so, full disclosure, I'm kind of a nerd. And because I'm kind of a nerd, whenever a new news story comes out about artificial intelligence stealing all our jobs, or robots getting citizenship of an actual country, I'm the person my friends and followers message, freaking out about the future.

你今天,這周,或今年有多少決定是人工智能(AI)做出的?我靠創(chuàng)建AI為生,所以,坦白說,我是個技術(shù)狂。因為我算是個技術(shù)狂,每當有關(guān)于人工智能要搶走我們的工作這樣的新聞報道出來,或者機器人獲得了一個國家的公民身份時,我就成了對未來感到擔憂的朋友和關(guān)注者發(fā)消息的對象。

00:46

We see this everywhere. This media panic that our robot overlords are taking over. We could blame Hollywood for that. But in reality, that's not the problem we should be focusing on. There is a more pressing danger, a bigger risk with AI, that we need to fix first. So we are back to this question: How many decisions have been made about you today by AI? And how many of these were based on your gender, your race or your background?

這種事情隨處可見。媒體擔心機器人正在接管人類的統(tǒng)治。我們可以為此譴責好萊�塢。但現(xiàn)實中,這不是我們應(yīng)該關(guān)注的問題。人工智能還有一個更緊迫的危機,一個更大的風險,需要我們首先應(yīng)對。所以我們再回到這個問題:你今天有多少決定是由人工智能做出的?其中有多少決定是基于你的性別,種族或者背景?

01:25

Algorithms are being used all the time to make decisions about who we are and what we want. Some of the women in this room will know what I'm talking about if you've been made to sit through those pregnancy test adverts on YouTube like 1,000 times. Or you've scrolled past adverts of fertility clinics on your Facebook feed. Or in my case, Indian marriage bureaus.

算法一直在被用來判斷我們是誰,我們想要什么。在座的有些女性會知道我在說什么:如果你有上千次被迫看完YouTube上那些驗孕測試廣告,或者在臉書的信息流中刷到過生育診所的廣告。或者像我遇到的情況,是印度婚姻介紹所。

01:50

(Laughter)

(笑聲)

01:52

But AI isn't just being used to make decisions about what products we want to buy or which show we want to binge watch next. I wonder how you'd feel about someone who thought things like this: "A black or Latino person is less likely than a white person to pay off their loan on time." "A person called John makes a better programmer than a person called Mary." "A black man is more likely to be a repeat offender than a white man." You're probably thinking, "Wow, that sounds like a pretty sexist, racist person," right? These are some real decisions that AI has made very recently, based on the biases it has learned from us, from the humans. AI is being used to help decide whether or not you get that job interview; how much you pay for your car insurance; how good your credit score is; and even what rating you get in your annual performance review. But these decisions are all being filtered through its assumptions about our identity, our race, our gender, our age. How is that happening?

但人工智能不僅被用來決定我們想要買什么產(chǎn)品,或者我們接下來想刷哪部劇。我想知道你會怎么看這樣想的人:“黑人或拉丁美洲人比白人更不可能按時還貸?!薄懊屑s翰的人編程能力要比叫瑪麗的人好?!薄昂谌吮劝兹烁锌赡艹蔀閼T犯?!蹦憧赡茉谙?,“哇,這聽起來像是一個有嚴重性別歧視和種族歧視的人。”對吧? 這些都是人工智能 近期做出的真實決定,基于它從我們?nèi)祟惿砩蠈W(xué)習到的偏見。人工智能被用來幫助決定你是否能夠得到面試機會;你應(yīng)該為車險支付多少費用;你的信用分數(shù)有多好;甚至你在年度績效評估中應(yīng)該得到怎樣的評分。但這些決定都是通過它對我們的身份、種族、性別和年齡的假設(shè)過濾出來的。為什么會這樣?

03:11

Now, imagine an AI is helping a hiring manager find the next tech leader in the company. So far, the manager has been hiring mostly men. So the AI learns men are more likely to be programmers than women. And it's a very short leap from there to: men make better programmers than women. We have reinforced our own bias into the AI. And now, it's screening out female candidates. Hang on, if a human hiring manager did that, we'd be outraged, we wouldn't allow it. This kind of gender discrimination is not OK. And yet somehow, AI has become above the law, because a machine made the decision. That's not it.

想象一下,人工智能正在幫助一位招聘主管尋找公司的下一位技術(shù)領(lǐng)袖。目前為止,主管雇傭的大部分是男性。于是人工智能學(xué)到:男人比女人更有可能成為程序員,并由此輕易得出:男人比女人更擅長編程。我們把自己的偏見強化進了人工智能?,F(xiàn)在,它正在篩選掉女性候選人。等等,如果人類招聘主管這樣做,我們會很憤怒,不允許這樣的事情發(fā)生。這種性別歧視讓人難以接受。然而不知何故,人工智能卻凌駕于法律之上,因為是機器做的決定。這還沒完。
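The hiring loop described above can be sketched in a few lines of Python. This is purely an illustrative toy (hypothetical data and scoring rule, not any real hiring system): a naive model that learns only from the manager's past decisions simply replays the historical hire rate for each group, which is exactly how the bias gets reinforced.

```python
# Toy hiring history (hypothetical data): the manager mostly hired men.
history = [
    ("male", True), ("male", True), ("male", True), ("male", True),
    ("male", False),
    ("female", False), ("female", False), ("female", True),
    ("female", False), ("female", False),
]

def hire_rate(records, gender):
    """Fraction of past candidates of `gender` who were hired."""
    group = [hired for g, hired in records if g == gender]
    return sum(group) / len(group)

def screen(candidate_gender):
    """A naive 'model' that just replays historical rates as a score."""
    return hire_rate(history, candidate_gender)

print(screen("male"))    # 0.8 -- men look like "better" hires
print(screen("female"))  # 0.2 -- women get screened out
```

Nothing in the data says women are worse programmers; the score only encodes who the manager happened to hire, which is the "very short leap" the talk warns about.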

04:00

We are also reinforcing our bias in how we interact with AI. How often do you use a voice assistant like Siri, Alexa or even Cortana? They all have two things in common: one, they can never get my name right, and second, they are all female. They are designed to be our obedient servants, turning your lights on and off, ordering your shopping. You get male AIs too, but they tend to be more high-powered, like IBM Watson, making business decisions, Salesforce Einstein or ROSS, the robot lawyer. So poor robots, even they suffer from sexism in the workplace.

我們也在強化我們與人工智能互動的偏見。你們使用Siri,Alexa或者Cortana 這樣的語音助手有多頻繁?它們有兩點是相同的:第一點,它們總是搞錯我的名字,第二點,它們都有女性特征。它們都被設(shè)計成順從我們的仆人,開燈關(guān)燈,下單購買商品。也有男性的人工智能,但他們傾向于擁有更高的權(quán)力,比如IBM的Watson可以做出商業(yè)決定,還有Salesforce的Einstein 或者ROSS, 是機器人律師。所以即便是機器人也沒能逃脫工作中的性別歧視。

04:43

(Laughter)

(笑聲)

04:44

Think about how these two things combine and affect a kid growing up in today's world around AI. So they're doing some research for a school project and they Google images of CEO. The algorithm shows them results of mostly men. And now, they Google personal assistant. As you can guess, it shows them mostly females. And then they want to put on some music, and maybe order some food, and now, they are barking orders at an obedient female voice assistant. Some of our brightest minds are creating this technology today. Technology that they could have created in any way they wanted. And yet, they have chosen to create it in the style of a 1950s "Mad Men" secretary. Yay!

想想這兩者如何結(jié)合在一起,又會影響一個在當今人工智能世界中長大的孩子。比如他們正在為學(xué)校的一個項目做一些研究,他們在谷歌上搜索了CEO的照片。算法向他們展示的大部分是男性。他們又搜索了個人助手。你可以猜到,它顯示的大部分是女性。然后他們想放點音樂,也許想點些吃的,而現(xiàn)在,他們正對著一位順從的女聲助手發(fā)號施令。當今一些最聰明的人正在創(chuàng)造這項技術(shù)。他們本可以用任何他們想要的方式創(chuàng)造這項技術(shù)。然而,他們卻選擇了上世紀50年代《廣告狂人》式的秘書風格。好極了!

05:36

But OK, don't worry, this is not going to end with me telling you that we are all heading towards sexist, racist machines running the world. The good news about AI is that it is entirely within our control. We get to teach the right values, the right ethics to AI. So there are three things we can do. One, we can be aware of our own biases and the bias in machines around us. Two, we can make sure that diverse teams are building this technology. And three, we have to give it diverse experiences to learn from. I can talk about the first two from personal experience. When you work in technology and you don't look like a Mark Zuckerberg or Elon Musk, your life is a little bit difficult, your ability gets questioned.

不過別擔心,我不會以“我們正走向一個由性別歧視、種族主義機器統(tǒng)治的世界”來結(jié)束這場演講。關(guān)于人工智能的好消息是,一切都完全在我們的掌控之中。我們可以教給人工智能正確的價值觀和道德觀。所以有三件事我們可以做。第一,我們能夠意識到自己的偏見和我們身邊機器的偏見。第二,我們可以確保打造這項技術(shù)的是背景多樣的團隊。第三,我們必須讓它從多樣的經(jīng)驗中學(xué)習。我可以從我個人的經(jīng)驗來說明前兩點。當你在科技行業(yè)工作,而你看起來又不像馬克·扎克伯格或埃隆·馬斯克時,你的生活會有點困難,你的能力會受到質(zhì)疑。
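The first remedy, becoming aware of the bias in the machines around us, can be made concrete with a simple audit. The sketch below (hypothetical data and metric choice, not from the talk) compares a screening model's selection rate across groups, a measure often called the demographic parity difference:

```python
# Minimal bias-audit sketch: compare a model's positive-outcome rate
# across groups. All data here is hypothetical, for illustration only.

def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Largest difference in selection rate between any two groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical interview-screening decisions (1 = invited to interview).
outcomes = {
    "men":   [1, 1, 1, 0, 1, 1, 0, 1],   # rate 0.75
    "women": [1, 0, 0, 1, 0, 0, 1, 0],   # rate 0.375
}

print("parity gap:", parity_gap(outcomes))  # 0.375 -- worth investigating
```

A gap near zero is no guarantee of fairness, but a large gap like this one is exactly the kind of signal that tells a team their model has absorbed a bias it should not repeat.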

06:27

Here's just one example. Like most developers, I often join online tech forums and share my knowledge to help others. And I've found, when I log on as myself, with my own photo, my own name, I tend to get questions or comments like this: "What makes you think you're qualified to talk about AI?" "What makes you think you know about machine learning?" So, as you do, I made a new profile, and this time, instead of my own picture, I chose a cat with a jet pack on it. And I chose a name that did not reveal my gender. You can probably guess where this is going, right? So, this time, I didn't get any of those patronizing comments about my ability and I was able to actually get some work done. And it sucks, guys. I've been building robots since I was 15, I have a few degrees in computer science, and yet, I had to hide my gender in order for my work to be taken seriously.

這只是一個例子。跟大部分開發(fā)者一樣,我經(jīng)常參加在線科技論壇,分享我的知識幫助別人。我發(fā)現(xiàn),當我用自己的照片,自己的名字登錄時,我常常會收到這樣的問題或評論:“你為什么覺得自己有資格談?wù)撊斯ぶ悄??”“你為什么覺得你了解機器學(xué)習?”所以,我創(chuàng)建了新的資料頁,這次,我沒有選擇自己的照片,而是選擇了一只帶著噴氣背包的貓,并選擇了一個無法體現(xiàn)我性別的名字。你大概能猜到接下來會怎樣,對吧?于是這次,我不再收到任何居高臨下的評論,我能夠?qū)P陌压ぷ髯鐾辍_@感覺太糟糕了,伙計們。我從15歲起就在構(gòu)建機器人,我有計算機科學(xué)領(lǐng)域的幾個學(xué)位,然而,我不得不隱藏我的性別,才能讓我的工作被嚴肅對待。

07:31

So, what's going on here? Are men just better at technology than women? Another study found that when women coders on one platform hid their gender, like myself, their code was accepted four percent more than men. So this is not about the talent. This is about an elitism in AI that says a programmer needs to look like a certain person. What we really need to do to make AI better is bring people from all kinds of backgrounds. We need people who can write and tell stories to help us create personalities of AI. We need people who can solve problems. We need people who face different challenges and we need people who can tell us what are the real issues that need fixing and help us find ways that technology can actually fix it. Because, when people from diverse backgrounds come together, when we build things in the right way, the possibilities are limitless.

這是怎么回事呢?男性在科技領(lǐng)域就是強于女性嗎?另一項研究發(fā)現(xiàn),當女性程序員像我這樣在平臺上隱藏性別時,她們的代碼被接受的比例比男性高4%。所以這跟能力無關(guān)。這是人工智能領(lǐng)域的精英主義,即程序員看起來得像具備某種特征的人。要讓人工智能變得更好,我們真正需要做的是把來自不同背景的人聚集到一起。我們需要能夠?qū)懽骱椭v故事的人來幫助我們塑造人工智能的個性。我們需要能夠解決問題的人。我們需要能應(yīng)對不同挑戰(zhàn)的人,我們需要有人告訴我們什么是真正需要解決的問題,幫助我們找到用技術(shù)切實解決問題的方法。因為,當不同背景的人走到一起,當我們以正確的方式做事情時,就有無限的可能。

08:38

And that's what I want to end by talking to you about. Less racist robots, less machines that are going to take our jobs -- and more about what technology can actually achieve. So, yes, some of the energy in the world of AI, in the world of technology is going to be about what ads you see on your stream. But a lot of it is going towards making the world so much better. Think about a pregnant woman in the Democratic Republic of Congo, who has to walk 17 hours to her nearest rural prenatal clinic to get a checkup. What if she could get a diagnosis on her phone, instead? Or think about what AI could do for those one in three women in South Africa who face domestic violence. If it wasn't safe to talk out loud, they could get an AI service to raise the alarm, get financial and legal advice. These are all real examples of projects that people, including myself, are working on right now, using AI.

這就是我最后想和你們討論的:少一些種族歧視的機器人,少一些奪走我們工作的機器,多關(guān)注技術(shù)究竟能實現(xiàn)什么。是的,人工智能世界、科技世界中的一部分精力,會用在你在信息流中看到什么廣告上。但更多的精力正朝著讓世界更美好的方向前進。想想剛果民主共和國的一位孕婦,她需要走17個小時才能到最近的農(nóng)村產(chǎn)前診所做產(chǎn)檢。如果她在手機上就能得到診斷,會怎樣呢?或者想想,南非每三名女性中就有一名面臨家庭暴力,人工智能能為她們做什么。如果大聲求助并不安全,她們可以通過一項人工智能服務(wù)發(fā)出警報,獲得財務(wù)和法律咨詢。這些都是包括我在內(nèi)的人們正在利用人工智能推進的真實項目案例。

09:45

So, I'm sure in the next couple of days there will be yet another news story about the existential risk, robots taking over and coming for your jobs.

我確信在未來幾天里,又會有一個新聞故事出現(xiàn),講述生存風險,講機器人將接管一切,奪走你們的工作。

09:54

(Laughter)

(笑聲)

09:55

And when something like that happens, I know I'll get the same messages worrying about the future. But I feel incredibly positive about this technology. This is our chance to remake the world into a much more equal place. But to do that, we need to build it the right way from the get-go. We need people of different genders, races, sexualities and backgrounds. We need women to be the makers and not just the machines who do the makers' bidding. We need to think very carefully what we teach machines, what data we give them, so they don't just repeat our own past mistakes. So I hope I leave you thinking about two things. First, I hope you leave thinking about bias today. And that the next time you scroll past an advert that assumes you are interested in fertility clinics or online betting websites, that you think and remember that the same technology is assuming that a black man will reoffend. Or that a woman is more likely to be a personal assistant than a CEO. And I hope that reminds you that we need to do something about it.

當這樣的事情發(fā)生時,我知道我會收到同樣的對未來表示擔憂的消息。但我對這項技術(shù)極為樂觀。這是我們把世界重塑得更加平等的機會。但要做到這一點,我們需要從一開始就以正確的方式構(gòu)建它。我們需要不同性別、種族、性取向和背景的人。我們需要女性成為創(chuàng)造者,而不僅僅是聽從創(chuàng)造者命令的機器。我們需要仔細思考我們教給機器什么,給它們什么數(shù)據(jù),這樣它們就不會只是重復(fù)我們過去的錯誤。所以我希望留給你們兩點思考。第一,我希望你們思考當今的偏見。下次當你刷到一條認定你對生育診所或網(wǎng)上投注網(wǎng)站感興趣的廣告時,希望你能想一想并記?。和瑯拥募夹g(shù)也在假定黑人會再次犯罪,或者女性更可能成為個人助理而非CEO。我希望那會提醒你,我們需要對此有所行動。

11:20

And second, I hope you think about the fact that you don't need to look a certain way or have a certain background in engineering or technology to create AI, which is going to be a phenomenal force for our future. You don't need to look like a Mark Zuckerberg, you can look like me. And it is up to all of us in this room to convince the governments and the corporations to build AI technology for everyone, including the edge cases. And for us all to get education about this phenomenal technology in the future. Because if we do that, then we've only just scratched the surface of what we can achieve with AI.

第二,我希望你們思考這樣一個事實:你不需要長成特定的樣子,也不需要有特定的工程或技術(shù)背景,就能創(chuàng)造人工智能,而人工智能將成為我們未來的一股非凡力量。你不需要看起來像馬克·扎克伯格,你可以看起來像我。我們這個房間里的所有人都有責任去說服政府和公司為每個人構(gòu)建人工智能技術(shù),包括那些邊緣情況,并讓我們所有人都能在未來接受有關(guān)這項非凡技術(shù)的教育。因為即便我們做到了這些,也只是剛剛觸及人工智能所能成就的冰山一角。

12:05

Thank you.

謝謝。

12:06

(Applause)

(鼓掌)
