In an age when computers are becoming ever more autonomous, how do we ensure that they act in accordance with human wishes?
That may sound like an abstract philosophical question, but it is also an urgent practical challenge, according to Stuart Russell, professor of computer science at the University of California, Berkeley, and one of the world’s leading thinkers on artificial intelligence.
It is all too easy to imagine scenarios in which increasingly powerful autonomous computer systems cause terrible real-world damage, either through thoughtless misuse or deliberate abuse, he says. Suppose, for example, in the not-too-distant future that a care robot is looking after your children. You are running late and ask the robot to prepare a meal. The robot opens the fridge, finds no food, calculates the nutritional value of your cat and serves up a feline fricassee.
Or take a more horrifying example of abuse that is technologically possible today. A terrorist group launches a swarm of bomb-carrying drones in a city and uses image recognition technology to kill everyone in a police uniform.
As Prof Russell argues in his latest book, Human Compatible, we need better ways of controlling what computers do to prevent them acting in anti-human ways, by default or by design. Although it may be many years, if not decades, away, we must also start thinking seriously about what happens if we ever achieve superhuman AI.
Getting that ultimate control problem right could usher in a golden age of abundance. Getting it wrong could result in humanity’s extinction. Prof Russell fears it may take a Chernobyl-scale tragedy in AI to alert us to the vital importance of ensuring control.
For the moment, the professor is something of an outlier in the AI community in sounding such alarms. Although he co-wrote a previous textbook on AI that is used by most universities around the world, Prof Russell is critical of what he calls the standard model of AI and the “denialism” of many in the industry.