The technology industry is bracing for the world-shaking impact of artificial intelligence. There is now a realisation that AI will disrupt the way societies operate, from education and employment to how data are collected on people.
Machine learning, a form of advanced pattern recognition that enables machines to make judgments by analysing large volumes of data, could greatly supplement human thought. But such soaring capabilities have stirred almost Frankenstein-like fears about whether developers can control their creations.
Failures of autonomous systems — like the death last year of a US motorist in a partially self-driving car from Tesla Motors — have led to a focus on safety, says Stuart Russell, a professor of computer science and AI expert at the University of California, Berkeley. “That kind of event can set back the industry a long way, so there is a very straightforward economic self-interest here,” he says.
Alongside immigration and globalisation, fears of AI-driven automation are fuelling public anxiety about inequality and job security. The election of Donald Trump as US president and the UK’s vote to leave the EU were partly driven by such concerns. While some politicians claim protectionist policies will help workers, many industry experts say most job losses are caused by technological change, largely automation.
Global elites — those with high income and educational levels, who live in capital cities — are considerably more enthusiastic about innovation than the general population, the FT/Qualcomm Essential Future survey found. This gap, unless addressed, will continue to cause political friction.
Vivek Wadhwa, a US-based entrepreneur and academic who writes about ethics and technology, thinks the new wave of automation has geopolitical implications: “Tech companies must accept responsibility for what they’re creating and work with users and policymakers to mitigate the risks and negative impacts. They must have their people spend as much time thinking about what could go wrong as they do hyping products.”
The industry is bracing itself for a backlash. Advances in AI and robotics have brought automation to areas of white-collar work, such as legal paperwork and analysing financial data. Some 45 per cent of US employees’ work time is spent on tasks that could be automated with existing technologies, a study by McKinsey says.
Industry and academic initiatives have been set up to ensure AI works to help people. These include the Partnership on AI to Benefit People and Society, established by companies including IBM, and a $27m effort involving Harvard and the Massachusetts Institute of Technology. Groups like OpenAI, backed by Elon Musk and Google, have made progress, says Prof Russell: “We’ve seen papers . . . that address the technical problem of safety.”
There are echoes of past efforts to deal with the complications of a new technology. Satya Nadella, chief executive of Microsoft, compares it to 15 years ago when Bill Gates rallied his company’s developers to combat computer malware. His “trustworthy computing” initiative was a watershed moment. In an interview with the FT, Mr Nadella said he hoped to do something similar to ensure AI works to benefit humans.
AI presents some thorny problems, however. Machine learning systems derive insights from large amounts of data. Eric Horvitz, a Microsoft executive, told a US Senate hearing late last year that these data sets may themselves be skewed. “Many of our data sets have been collected . . . with assumptions we may not deeply understand, and we don’t want our machine-learned applications . . . to be amplifying cultural biases,” he said.
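To make that concrete, here is a minimal Python sketch with invented numbers: if positive outcomes for one group were under-collected when the data were gathered, a model that learns per-group base rates will reproduce the skew rather than the real-world rate.

```python
# Minimal sketch of a skewed data set baking bias into a model.
# All data are invented: in the "true" population both groups have a
# 50% positive rate, but positives for group B were under-collected.
from collections import defaultdict

training_data = ([("A", 1)] * 50 + [("A", 0)] * 50 +
                 [("B", 1)] * 20 + [("B", 0)] * 50)

counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
for group, outcome in training_data:
    counts[group][0] += outcome
    counts[group][1] += 1

# A model that learns these base rates inherits the collection bias:
# it scores group B at 0.29 even though the real-world rate is 0.50.
for group, (pos, total) in sorted(counts.items()):
    print(f"group {group}: learned base rate = {pos / total:.2f}")
```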
Last year, an investigation by news organisation ProPublica found that an algorithm used by the US justice system to determine whether criminal defendants were likely to reoffend had a racial bias. Black defendants with a low risk of reoffending were more likely than white ones to be labelled as high risk.
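The disparity ProPublica reported can be framed as a gap in false positive rates: among defendants who did not go on to reoffend, how often was each group labelled high risk? Below is a simplified sketch of that style of audit, using invented records rather than ProPublica’s data.

```python
# Simplified version of the audit metric (records are invented):
# among people who did NOT reoffend, compare how often each group
# was labelled "high risk" - the false positive rate.

# (group, labelled_high_risk, reoffended)
records = [
    ("black", True,  False), ("black", True,  False),
    ("black", False, False), ("black", True,  True),
    ("white", True,  False), ("white", False, False),
    ("white", False, False), ("white", False, True),
]

for group in ("black", "white"):
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in did_not_reoffend if r[1]]
    rate = len(false_positives) / len(did_not_reoffend)
    print(f"{group}: false positive rate = {rate:.2f}")
```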
Greater transparency is one way forward, for example making it clear what information AI systems have used. But the “thought processes” of deep learning systems are not easy to audit. Mr Horvitz says such systems are hard for humans to understand. “We need to understand how to justify [their] decisions and how the thinking is done.”
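One way to see why auditability matters, and why simple models have an advantage: in a linear scoring model, each input’s contribution to a decision can be read off directly, a readout deep networks do not offer. The sketch below is purely illustrative; the feature names and weights are invented.

```python
# Sketch of the transparency a linear model offers: each feature's
# contribution to a decision is simply weight * value, so the
# "reasoning" can be printed and inspected. Deep learning systems
# have no comparably direct readout. All names/weights are invented.

weights = {"prior_offences": 0.8, "age_under_25": 1.2, "employed": -0.6}
defendant = {"prior_offences": 2, "age_under_25": 0, "employed": 1}

contributions = {f: weights[f] * defendant[f] for f in weights}
score = sum(contributions.values())

for feature, contrib in sorted(contributions.items(),
                               key=lambda kv: -abs(kv[1])):
    print(f"{feature:>15}: {contrib:+.2f}")
print(f"{'total score':>15}: {score:+.2f}")
```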
As AI comes to influence more government and business decisions, the ramifications will be widespread. “How do we make sure the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society?” asks Joi Ito, director of MIT’s Media Lab.
Executives like Mr Nadella believe a mixture of government oversight — including, by implication, the regulation of algorithms — and industry action will be the answer. He plans to create an ethics board at Microsoft to deal with any difficult questions thrown up by AI.
He says: “I want . . . an ethics board that says, ‘If we are going to use AI in the context of anything that is doing prediction, that can actually have societal impact . . . that it doesn’t come with some bias that’s built in.’”
Making sure AI systems benefit humans without unintended consequences is difficult. Human society is incapable of defining what it wants, says Prof Russell, so programming machines to maximise the happiness of the greatest number of people is problematic.
This is AI’s so-called “control problem”: the risk that smart machines will single-mindedly pursue arbitrary goals even when they are undesirable. “The machine has to allow for uncertainty about what it is the human really wants,” says Prof Russell.
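Prof Russell’s point can be shown with a toy example, with all utilities invented: an agent certain of a single objective picks whatever maximises it, however extreme, while an agent that spreads its belief across several candidate versions of what the human wants prefers a safer action, such as asking.

```python
# Toy version of the control problem: an agent that treats one
# objective as certain versus one that allows for uncertainty about
# what the human really wants. All utilities below are invented.

actions = ["aggressive", "moderate", "ask_human"]

# Candidate objectives the human *might* hold; the agent doesn't
# know which one is true.
candidate_objectives = [
    {"aggressive": 10,  "moderate": 6, "ask_human": 5},
    {"aggressive": -20, "moderate": 4, "ask_human": 6},
    {"aggressive": -5,  "moderate": 5, "ask_human": 5},
]

# Certain agent: assumes the first objective is the whole truth.
certain = max(actions, key=lambda a: candidate_objectives[0][a])

# Uncertain agent: maximises total utility across all candidates.
uncertain = max(actions, key=lambda a: sum(o[a] for o in candidate_objectives))

print("certain agent picks:  ", certain)    # aggressive
print("uncertain agent picks:", uncertain)  # ask_human
```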
Ethics committees will not resolve concerns about AI taking jobs, however. Fears of a backlash were apparent at this year’s World Economic Forum in Davos as executives agonised over how to present AI. The common response was to say machines will make many jobs more fulfilling though other jobs could be replaced.
The profits from productivity gains for tech companies and their customers could be huge. How those should be distributed will become part of the AI debate. “Whenever someone cuts cost, that means, hopefully, a surplus is being created,” says Mr Nadella. “You can always tax surplus — you can always make sure that surplus gets distributed differently.”
Additional reporting by Adam Jezard