Disinformation spread on the internet influenced last year's US presidential election. The degree of influence cannot, of course, be measured precisely, but it is undeniable that a Kremlin-linked "troll farm", the Internet Research Agency, made a malicious effort on a vast scale to sway American opinion. The revelations at recent US congressional hearings should give pause to anyone who cares about democratic institutions.
Among these was Facebook’s acknowledgment that 150m Americans, including Instagram users, may have viewed at least one piece of fake news originating with the Russian agency, which took out a total of 3,000 paid ads. That figure says much about the evolution of the media landscape. A few platforms can now reach audiences of previously unimaginable size.
In the 13 years since then-undergraduate Mark Zuckerberg launched Facebook as a college networking site, the company has grown to become the largest global distributor of news, both real and fake. That ascent comes with responsibility. Social media platforms on this scale, for all the good they can do, can be weaponised — in some cases by hostile state actors.
The tech titans have insisted that they are neutral platforms, with no role as arbiters of truth or social acceptability. They are rightly wary of drawing accusations of bias. At the same time, though, Facebook and others have tacitly acknowledged that they have a role in policing content by striking out posts that promote terrorism and crimes such as child pornography. The ambiguity they have nurtured — that they can be both neutral and upstanding — is becoming increasingly untenable.
It is a matter of public interest that the big platforms become more transparent and that clearer standards are in place concerning the flagging and removal of destructive content — be it slanderous, criminal, or designed to subvert democracy.
The problem may have become globally understood with the 2016 US presidential election, but it did not start there. Ukraine’s government said this week that it warned Facebook in 2015 that Russia was conducting disinformation campaigns on its platform. That should have been a wake-up call, and prompted a much faster response.
The solution is not to subject platform companies to the same standards publishers face: that would destroy much of the value that they offer to society (while wrecking their businesses). But allowing Facebook and its peer companies to determine their responsibilities to the public is not acceptable, either. To start, when the platforms receive direct payment for political advertising, there is no reason they should not be held to the same standard as publishers. They should be as transparent about the funding of such advertising as other media.
Unpaid content presents more difficult questions. In the US, internet companies still benefit from the blanket protection provided by the very broadly worded section 230 of the Communications Decency Act, which has been interpreted as relieving them of all responsibility for content that appears on their sites.
The act is too strong and needs sharpening. In particular, it should reflect a reasonable standard for the responsibility of platforms to remove malicious content once they have been made aware of it.
As for the questions of what constitutes malice, and who decides: the answer lies with the same authority that, in democratic societies, has always made decisions about what is acceptable communication in the public square — the elected representatives of the people.