
Tech English: Human bias is spreading into AI


May 20, 2024


Artificial intelligence built on mountains of potentially biased information has created a real risk of automating discrimination, but is there any way to re-educate the machines?

The question for some is extremely urgent. In this ChatGPT era, AI will generate more and more decisions for health care providers, bank lenders or lawyers, using whatever was scoured from the internet as source material.

AI's underlying intelligence, therefore, is only as good as the world it came from, as likely to be filled with wit, wisdom and usefulness as with hatred, prejudice and rants.

"It's dangerous because people are embracing and adopting AI software and really depending on it," said Joshua Weaver, Director of Texas Opportunity & Justice Incubator, a legal consultancy.

"We can get into this feedback loop where the bias in our own selves and culture informs bias in the AI and becomes a sort of reinforcing loop," he said.

Making sure technology more accurately reflects human diversity is not just a political choice.

ChatGPT-style generative AI, which can create a semblance of human-level reasoning in just seconds, opens up new opportunities to get things wrong, experts worry.

The AI giants are well aware of the problem, afraid that their models can descend into bad behavior, or overly reflect a western society when their user base is global.

The huge models on which ChatGPT is built "can't reason about what is biased or what isn't so they can't do anything about it," cautioned Jayden Ziegler, head of product at Alembic Technologies.

For now at least, it is up to humans to ensure that the AI generates whatever is appropriate or meets their expectations.