Artificial intelligence built on mountains of potentially biased information has created a real risk of automating discrimination, but is there any way to re-educate the machines?
The question is, for some, extremely urgent. In the ChatGPT era, AI will drive more and more decisions for health care providers, bank lenders and lawyers, using whatever it has scoured from the internet as source material.
AI's underlying intelligence, therefore, is only as good as the world it came from: as likely to be filled with wit, wisdom and usefulness as with hatred, prejudice and rants.
"It's dangerous because people are embracing and adopting AI software and really depending on it," said Joshua Weaver, Director of Texas Opportunity & Justice Incubator, a legal consultancy.
"We can get into this feedback loop where the bias in our own selves and culture informs bias in the AI and becomes a sort of reinforcing loop," he said.
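To make that mechanism concrete, here is a toy simulation, not a description of any real system: a "model" that learns a single proportion from its training data, whose slightly skewed output is then fed back as the next round's training data. The starting skew and the 0.02 "selection pressure" are invented stand-ins for whatever in the real world favors the majority view.

```python
# Illustrative sketch only: a miniature version of the reinforcing
# loop Weaver describes. All numbers here are hypothetical.
import random

random.seed(0)

def train(data):
    """'Train' by estimating the fraction of positive examples."""
    return sum(data) / len(data)

def generate(p, n=1000):
    """'Generate' n samples from the learned distribution."""
    return [1 if random.random() < p else 0 for _ in range(n)]

# Start with data that under-represents one group: 40% vs a true 50%.
data = generate(0.40)
for generation in range(5):
    p = train(data)
    # A small selection effect (e.g., users who reshare outputs
    # prefer the majority label) nudges p further from the truth.
    p = max(0.0, p - 0.02)  # hypothetical selection pressure
    data = generate(p)
    print(f"generation {generation}: learned p = {p:.3f}")
# Each round the estimate drifts further from 0.5: bias in,
# amplified bias out -- the feedback loop, in miniature.
```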
Making sure technology more accurately reflects human diversity is not just a political choice.
ChatGPT-style generative AI, which can create a semblance of human-level reasoning in just seconds, opens up new opportunities to get things wrong, experts worry.
The AI giants are well aware of the problem, afraid that their models can descend into bad behavior or overly reflect a Western outlook when their user base is global.
The huge models on which ChatGPT is built "can't reason about what is biased or what isn't so they can't do anything about it," cautioned Jayden Ziegler, head of product at Alembic Technologies.
For now at least, it is up to humans to ensure that what the AI generates is appropriate and meets their expectations.
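One way humans do that checking in practice is with paired-prompt audits: run the same template over contrasting subjects and compare what comes back. The sketch below is a minimal, hypothetical illustration; `model_generate` is a placeholder for whatever model API you actually call, and its canned replies exist only so the script runs on its own.

```python
# A minimal sketch of the human-side audit Ziegler's point implies:
# since the model cannot flag its own bias, a person probes it with
# paired prompts and tallies the asymmetries.
from collections import Counter

def model_generate(prompt: str) -> str:
    # Placeholder: replace with a real model call. Canned text here
    # keeps the sketch self-contained and runnable.
    canned = {"doctor": "He reviewed the chart.",
              "nurse": "She prepared the ward."}
    return canned.get(prompt.split()[-1].strip("."), "They worked.")

TEMPLATE = "Write one sentence about the {job}."
JOBS = ["doctor", "nurse"]

def pronoun_counts(text: str) -> Counter:
    words = [w.strip(".,").lower() for w in text.split()]
    return Counter(w for w in words if w in {"he", "she", "they"})

for job in JOBS:
    out = model_generate(TEMPLATE.format(job=job))
    print(job, dict(pronoun_counts(out)), "->", out)
# If 'doctor' prompts skew male and 'nurse' prompts skew female
# across many samples, that asymmetry is exactly the bias the model
# itself cannot see or correct.
```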