
Meeting the challenges of AI with human intelligence

27 April 2017

A lot of big claims are made about the transformative power of artificial intelligence. But it is worth listening to some of the big warnings too. Last month, Kate Crawford, principal researcher at Microsoft Research, warned that the increasing power of AI could result in a “fascist’s dream” if the technology were misused by authoritarian regimes.

關(guān)于人工智能的變革威力,人們提出了很多大膽的設(shè)想。但我們也有必要聽聽一些嚴重警告。上月,微軟研究院(Microsoft Research)首席研究員凱特•克勞福德(Kate Crawford)警告稱,如果被威權(quán)政府濫用,威力與日俱增的人工智能可能會釀成一場“法西斯夢”。

“Just as we are seeing a step function increase in the speed of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” Ms Crawford told the SXSW tech conference.

The creation of vast data registries, the targeting of population groups, the abuse of predictive policing and the manipulation of political beliefs could all be enabled by AI, she said.

Ms Crawford is not alone in expressing concern about the misapplication of powerful new technologies, sometimes in unintentional ways. Sir Mark Walport, the British government’s chief scientific adviser, warned that the unthinking use of AI in areas such as medicine and the law, which involve nuanced human judgment, could produce damaging results and erode public trust in the technology.

克勞福德并不是唯一對強大的新技術(shù)被錯誤使用(有時以意想不到的方式)感到擔憂的人。英國政府首席科學(xué)顧問馬克•沃爾波特(Mark Walport)警告稱,在醫(yī)學(xué)和法律等涉及細膩人類判斷的領(lǐng)域不假思索地使用人工智能,可能帶來破壞性結(jié)果,并侵蝕公眾對這項技術(shù)的信任。

Although AI had the potential to enhance human judgment, it also risked baking in harmful prejudices and giving them a spurious sense of objectivity. “Machine learning could internalise all the implicit biases contained within the history of sentencing or medical treatment — and externalise these through their algorithms,” he wrote in an article in Wired.

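The mechanism Sir Mark describes is easy to demonstrate. Below is a toy sketch in Python, using synthetic data: feature names such as `severity` and `postcode` are purely illustrative, not drawn from any real system. A model trained on historically biased decisions reproduces the bias through a correlated proxy, even when the sensitive attribute itself is withheld from the inputs.

```python
# Illustrative only: a model trained on biased historical decisions
# "internalises" the bias via a proxy feature, then "externalises" it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)               # sensitive attribute (0 or 1)
severity = rng.normal(0, 1, n)              # legitimate feature
postcode = group + rng.normal(0, 0.3, n)    # proxy correlated with group

# Historical outcomes encode a bias against group 1, on top of severity.
past_decision = (severity + 1.5 * group + rng.normal(0, 1, n) > 1).astype(int)

# Train WITHOUT the sensitive attribute -- only severity and the proxy.
X = np.column_stack([severity, postcode])
model = LogisticRegression().fit(X, past_decision)

rates = [model.predict(X[group == g]).mean() for g in (0, 1)]
print(f"predicted adverse-outcome rate: group 0 = {rates[0]:.2f}, "
      f"group 1 = {rates[1]:.2f}")  # group 1 is flagged far more often
```

Dropping the sensitive column is not enough: the proxy carries the historical prejudice into the model’s predictions, with a spurious veneer of objectivity.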

As ever, the dangers are a lot easier to identify than they are to fix. Unscrupulous regimes are never going to observe regulations constraining the use of AI. But even in functioning law-based democracies it will be tricky to frame an appropriate response. Maximising the positive contributions that AI can make while minimising its harmful consequences will be one of the toughest public policy challenges of our times.

For starters, the technology is difficult to understand and its use is often surreptitious. It is also becoming increasingly hard to find independent experts who have not been captured by the industry or otherwise conflicted.

Driven by something approaching a commercial arms race in the field, the big tech companies have been snapping up many of the smartest academic experts in AI. Much cutting-edge research is therefore in the private rather than public domain.

To their credit, some leading tech companies have acknowledged the need for transparency, albeit belatedly. There has been a flurry of initiatives to encourage more policy research and public debate about AI.

Elon Musk, founder of Tesla Motors, has helped set up OpenAI, a non-profit research company pursuing safe ways to develop AI.

Amazon, Facebook, Google DeepMind, IBM, Microsoft and Apple have also come together in the Partnership on AI to initiate more public discussion about the real-world applications of the technology.

Mustafa Suleyman, co-founder of Google DeepMind and a co-chair of the Partnership, says AI can play a transformative role in addressing some of the biggest challenges of our age. But he accepts that the rate of progress in AI is outstripping our collective ability to understand and control these systems. Leading AI companies must therefore become far more innovative and proactive in holding themselves to account. To that end, the London-based company is experimenting with verifiable data audits and will soon announce the composition of an ethics board to scrutinise all the company’s activities.

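DeepMind has not published the design referred to here, but the general idea behind a verifiable data audit can be sketched as a tamper-evident, append-only log, in which each entry’s hash chains to the previous one. A minimal illustration, assuming nothing about the company’s actual system:

```python
# Generic sketch of a tamper-evident, append-only audit log.
# Illustrates the idea only; it is not DeepMind's actual design.
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, record: dict) -> None:
    prev = log[-1]["hash"] if log else "genesis"
    log.append({"record": record, "hash": entry_hash(prev, record)})

def verify(log: list) -> bool:
    prev = "genesis"
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["record"]):
            return False  # any retroactive edit breaks the chain here
        prev = entry["hash"]
    return True

log = []
append(log, {"who": "service-a", "what": "read", "data": "record-123"})
append(log, {"who": "service-b", "what": "read", "data": "record-456"})
print(verify(log))                   # True
log[0]["record"]["what"] = "delete"  # tamper with history
print(verify(log))                   # False -- the trail exposes the change
```

Because each hash depends on everything before it, an auditor can detect any retroactive edit to the record of who accessed which data, which is what makes the audit “verifiable” rather than merely declared.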

But Mr Suleyman suggests our societies will also have to devise better frameworks for directing these technologies for the collective good. “We have to be able to control these systems so they do what we want when we want and they don’t run ahead of us,” he says in an interview for the FT Tech Tonic podcast.

Some observers say the best way to achieve that is to adapt our legal regimes to ensure that AI systems are “explainable” to the public. That sounds simple in principle, but may prove fiendishly complex in practice.

Mireille Hildebrandt, professor of law and technology at the Free University of Brussels, says one of the dangers of AI is that we become overly reliant on “mindless minds” that we do not fully comprehend. She argues that the purpose and effect of these algorithms must therefore be testable and contestable in a courtroom. “If you cannot meaningfully explain your system’s decisions then you cannot make them,” she says.

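As a minimal illustration of what “explainable” might mean in practice, a linear model’s decision can at least be decomposed into per-feature contributions that a court could inspect and challenge. A toy sketch with synthetic data and hypothetical feature names:

```python
# Minimal notion of an "explainable" decision: decompose a linear model's
# output into per-feature contributions. Deliberately simple; real systems
# and non-linear models need far more sophisticated techniques.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X @ np.array([2.0, -1.0, 0.0]) + rng.normal(0, 0.5, 500) > 0).astype(int)

features = ["prior_incidents", "years_since_last", "irrelevant_noise"]
model = LogisticRegression().fit(X, y)

x = X[0]                            # one individual decision
contributions = model.coef_[0] * x  # signed contribution of each feature
for name, c in zip(features, contributions):
    print(f"{name:>18}: {c:+.2f}")
print(f"{'intercept':>18}: {model.intercept_[0]:+.2f}")
# The contributions plus the intercept sum to the decision logit, giving a
# concrete object that can be tested and contested in a courtroom.
```

For deep models there is no such clean decomposition, which is why explainability sounds simple in principle but proves fiendishly complex in practice.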

We are going to need a lot more human intelligence to address the challenges of AI.
