
FT editorial: Guarding against the downside of artificial intelligence


18 March 2018

The latest report on the potentially malicious uses of artificial intelligence reads like a pitch for the next series of the dystopian TV show Black Mirror.

Drones using facial recognition technology to hunt down and kill victims. Information being manipulated to distort the social media feeds of targeted individuals. Cleaning robots being hacked to bomb VIPs. The potentially harmful uses of AI are as vast as the human imagination.

One of the big questions of our age is: how can we maximise the undoubted benefits of AI while limiting its downsides? It is a tough challenge. All technologies are dual-use, and AI especially so, given that it can significantly increase the scale and potency of malicious acts while lowering their costs.

The report, written by 26 researchers from several organisations including OpenAI, Oxford and Cambridge universities, and the Electronic Frontier Foundation, performs a valuable, if scary, service in flagging the threats posed by the abuse of powerful technology by rogue states, criminals and terrorists. It is less compelling when it comes to proposing solutions.

Much of the public concern about AI focuses on the threat of an emergent superintelligence and the mass extinction of our species. There is no doubt that how to “control” what is known as artificial general intelligence is a fascinating and worthwhile debate. But in the words of one AI expert, it is probably “a second half of the 21st century problem”.

The latest report highlights how we should already be worrying today about the abuse of relatively narrow AI. Human evil, incompetence and poor design will remain a bigger threat for the foreseeable future than some omnipotent and omniscient Terminator-style Skynet.

AI academics have led a commendable campaign to highlight the dangers of so-called lethal autonomous weapons systems. The United Nations is now trying to turn that initiative into workable international protocols.

Some interested philanthropists, including Elon Musk and Sam Altman, have also sunk money into research institutes focusing on AI safety, including one that co-wrote the report. Normally, researchers who call for more money to be spent on research should be treated with some scepticism. But there are estimated to be just 100 researchers in the western world grappling with the issue. That seems far too few, given the scale of the challenge.

Governments need to deepen their understanding of this area. In the US, the creation of a federal robotics commission to develop relevant governmental expertise would be a good idea. The British government is sensibly expanding the remit of the Alan Turing Institute to encompass AI.

Some tech companies have already engaged the public on ethical issues concerning AI, and the rest should be encouraged to do so. Arguably, they should also be held liable for the misuse of their AI-enabled products in the same way that pharmaceutical firms are responsible for the harmful side-effects of their drugs.

Companies should be deterred from rushing AI-enabled products to market before they have been adequately tested. Just as the potential flaws of cyber security systems are sometimes explored by co-operative hackers, so AI services should be stress-tested by other expert users before their release.

Ultimately, we should be realistic that only so much can ever be done to limit the abuse of AI. Rogue regimes will inevitably use it for bad ends. We cannot uninvent scientific discovery. But we should, at least, do everything possible to restrain its most immediate and obvious downsides.
