
          Is it possible for machines to be smarter than humans?

          Courier, UNESCO / 2018-08-30

          The infant robot CB2, built by Minoru Asada (Japan), is being taught to crawl.




          ENIAC (Electronic Numerical Integrator and Computer), the first programmable electronic digital computer, built in 1946, during the Second World War. Covering 30 square metres and weighing 30 tonnes, it was built at the University of Pennsylvania and used to solve problems in nuclear physics and meteorology.




          Simulation of electrical activity in a microcircuit of virtual neurons of a rat (2015), by the Blue Brain Project (BBP) team, part of Europe's Human Brain Project (HBP). According to the researchers, this marks a step towards simulating the functions of the human brain.




           

          Are machines likely to become smarter than humans? No, says Jean-Gabriel Ganascia: this is a myth inspired by science fiction. The computer scientist walks us through the major milestones in artificial intelligence (AI), reviews the most recent technical advances, and discusses the ethical questions that require increasingly urgent answers.


          Artificial intelligence: between myth and reality


          Jean-Gabriel Ganascia

           

          A scientific discipline, AI officially began in 1956, during a summer workshop organized by four American researchers – John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon – at Dartmouth College in New Hampshire, United States. Since then, the term “artificial intelligence”, probably first coined to create a striking impact, has become so popular that today everyone has heard of it. This application of computer science has continued to expand over the years, and the technologies it has spawned have contributed greatly to changing the world over the past sixty years.


          However, the success of the term AI is sometimes based on a misunderstanding – when it is taken to refer to an artificial entity endowed with intelligence, one which would, as a result, compete with human beings. This idea, which harks back to ancient myths and legends, like that of the golem [from Jewish folklore, an image endowed with life], has recently been revived by contemporary personalities including the British physicist Stephen Hawking (1942-2018), American entrepreneur Elon Musk, American futurist Ray Kurzweil, and proponents of what we now call Strong AI or Artificial General Intelligence (AGI). We will not discuss this second meaning here, because, at least for now, it can only be ascribed to a fertile imagination, inspired more by science fiction than by any tangible scientific reality confirmed by experiments and empirical observations.


          For McCarthy, Minsky, and the other researchers of the Dartmouth Summer Research Project on Artificial Intelligence, AI was initially intended to simulate each of the different faculties of intelligence – human, animal, plant, social or phylogenetic – using machines. More precisely, this scientific discipline was based on the conjecture that all cognitive functions – especially learning, reasoning, computation, perception, memorization, and even scientific discovery or artistic creativity – can be described with such precision that it would be possible to programme a computer to reproduce them. In the more than sixty years that AI has existed, there has been nothing to disprove or irrefutably prove this conjecture, which remains both open and full of potential. 


            Uneven progress  


          In the course of its short existence, AI has undergone many changes. These can be summarized in six stages.


          The time of the prophets


          First of all, in the euphoria of AI's origins and early successes, the researchers gave free rein to their imagination, indulging in reckless pronouncements for which they were heavily criticized later. For instance, in 1958, American political scientist and economist Herbert A. Simon – who received the Nobel Prize in Economic Sciences in 1978 – declared that, within ten years, machines would become world chess champions if they were not barred from international competitions.


          The dark years


          By the mid-1960s, progress seemed to be slow in coming. A 10-year-old child beat a computer at a chess game in 1965, and a report commissioned by the US Senate in 1966 described the intrinsic limitations of machine translation. AI got bad press for about a decade.


          Semantic AI


          The work went on nevertheless, but the research was given new direction. It focused on the psychology of memory and the mechanisms of understanding – with attempts to simulate these on computers – and on the role of knowledge in reasoning. This gave rise to techniques for the semantic representation of knowledge, which developed considerably in the mid-1970s, and also led to the development of expert systems, so called because they use the knowledge of skilled specialists to reproduce their thought processes. Expert systems raised enormous hopes in the early 1980s with a whole range of applications, including medical diagnosis.
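The mechanics behind such systems can be illustrated in a few lines of code. The sketch below is a minimal forward-chaining rule engine of the kind expert systems were built on; the medical-style rules and facts are invented for illustration, not taken from any real diagnostic system.

```python
# Minimal forward-chaining rule engine: rules encode an expert's
# knowledge as (premises -> conclusion) pairs, and the loop fires
# rules until no new facts can be derived.

def forward_chain(facts, rules):
    """Return the closure of `facts` under `rules`."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative diagnostic rules (not from a real medical system).
rules = [
    (["fever", "rash"], "suspect_measles"),
    (["suspect_measles", "recent_exposure"], "recommend_test"),
]

print(forward_chain(["fever", "rash", "recent_exposure"], rules))
# The second rule can fire only after the first has added its conclusion.
```

Chaining is what lets such a system reproduce a specialist's multi-step reasoning rather than a single table lookup.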


          Neo-connectionism and machine learning


          Technical improvements led to the development of machine learning algorithms, which allowed computers to accumulate knowledge and to automatically reprogramme themselves, using their own experiences.
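The idea of a machine reprogramming itself from experience can be sketched with one of the earliest learning algorithms, the perceptron. The data, learning rate and epoch count below are illustrative choices; the point is that the weights end up set by the examples, not by the programmer.

```python
# A perceptron adjusts its weights after every mistake, so its
# behaviour is determined by the training data rather than by hand.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # zero when the guess is right
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical AND function from four labelled examples.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
print([1 if w[0] * x + w[1] * y + b > 0 else 0 for (x, y), _ in samples])
# -> [0, 0, 0, 1]
```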


          This led to the development of industrial applications (fingerprint identification, speech recognition, etc.), where techniques from AI, computer science, artificial life and other disciplines were combined to produce hybrid systems.


          From AI to human-machine interfaces


          Starting in the late 1990s, AI was coupled with robotics and human-machine interfaces to produce intelligent agents that suggested the presence of feelings and emotions. This gave rise, among other things, to the calculation of emotions (affective computing), which evaluates the reactions of a subject feeling emotions and reproduces them on a machine, and especially to the development of conversational agents (chatbots).
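In its simplest, ELIZA-style form, a conversational agent can be sketched as pattern matching over the user's utterance. The two rules below are invented for illustration and bear no relation to the statistical machinery of modern chatbots.

```python
import re

# Each rule pairs a regular expression with a reply template; the
# first matching rule wins. Rules here are purely illustrative.
RULES = [
    (r"\bI am (.+)", "Why do you say you are {0}?"),
    (r"\bI feel (.+)", "What makes you feel {0}?"),
]

def reply(utterance):
    for pattern, template in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(match.group(1))
    return "Tell me more."  # fallback when nothing matches

print(reply("I am worried about machines"))
# -> Why do you say you are worried about machines?
```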


          Renaissance of AI


          Since 2010, the power of machines has made it possible to exploit enormous quantities of data (big data) with deep learning techniques, based on the use of formal neural networks. A range of very successful applications in several areas – including speech and image recognition, natural language comprehension and autonomous cars – is leading to an AI renaissance.
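The formal neural networks mentioned here are stacks of simple units, each computing a weighted sum of its inputs followed by a nonlinearity. A minimal forward pass through two such layers, with randomly initialized (purely illustrative) weights:

```python
import numpy as np

def layer(x, W, b):
    # One layer of formal neurons: weighted sums, then a ReLU
    # nonlinearity applied element-wise.
    return np.maximum(W @ x + b, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                         # an input of 4 features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)  # hidden layer, 8 units
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)  # output layer, 2 scores

h = layer(x, W1, b1)          # hidden activations
scores = W2 @ h + b2          # raw output scores (no nonlinearity)
print(h.shape, scores.shape)
```

Deep learning consists of stacking many such layers and fitting all the weights to data by gradient descent, which this sketch deliberately omits.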


            Applications  


          Many achievements using AI techniques surpass human capabilities – in 1997, a computer programme defeated the reigning world chess champion, and more recently, in 2016, other computer programmes have beaten the world's best Go [an ancient Chinese board game] players and some top poker players. Computers are proving, or helping to prove, mathematical theorems; knowledge is being automatically constructed from huge masses of data, in terabytes (10¹² bytes), or even petabytes (10¹⁵ bytes), using machine learning techniques.


          As a result, machines can recognize speech and transcribe it – just like typists did in the past. Computers can accurately identify faces or fingerprints from among tens of millions, or understand texts written in natural languages. Using machine learning techniques, cars drive themselves; machines are better than dermatologists at diagnosing melanomas using photographs of skin moles taken with mobile phone cameras; robots are fighting wars instead of humans; and factory production lines are becoming increasingly automated.


          Scientists are also using AI techniques to determine the function of certain biological macromolecules, especially proteins and genomes, from the sequences of their constituents – amino acids for proteins, bases for genomes. More generally, all the sciences are undergoing a major epistemological rupture with in silico experiments – named so because they are carried out by computers from massive quantities of data, using powerful processors whose cores are made of silicon. In this way, they differ from in vivo experiments, performed on living matter, and above all, from in vitro experiments, carried out in glass test-tubes.
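A toy version of this kind of in silico analysis: turning a protein's amino-acid sequence into a simple feature vector (its residue composition) of the sort a learning algorithm could take as input. The ten-letter sequence is made up for illustration, and real pipelines use far richer features.

```python
from collections import Counter

# A protein is a chain of amino acids, written one letter per residue.
# Its composition (relative frequency of each residue) is a simple
# numeric summary that a machine learning model could consume.
def composition(sequence):
    counts = Counter(sequence)
    total = len(sequence)
    return {aa: counts[aa] / total for aa in sorted(counts)}

seq = "MKTAYIAKQR"   # a made-up ten-residue sequence
print(composition(seq))
# 'A' and 'K' each appear twice, so both get frequency 0.2
```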


          Today, AI applications affect almost all fields of activity – particularly in the industry, banking, insurance, health and defence sectors. Several routine tasks are now automated, transforming many trades and eventually eliminating some.


            What are the ethical risks? 


          With AI, most dimensions of intelligence – except perhaps humour – are subject to rational analysis and reconstruction, using computers. Moreover, machines are exceeding our cognitive faculties in most fields, raising fears of ethical risks. These risks fall into three categories – the scarcity of work, because it can be carried out by machines instead of humans; the consequences for the autonomy of the individual, particularly in terms of freedom and security; and the overtaking of humanity, which would be replaced by more “intelligent” machines.


          However, if we examine the reality, we see that work (done by humans) is not disappearing – quite the contrary – but it is changing and calling for new skills. Similarly, an individual’s autonomy and freedom are not inevitably undermined by the development of AI – so long as we remain vigilant in the face of technological intrusions into our private lives.


          Finally, contrary to what some people claim, machines pose no existential threat to humanity. Their autonomy is purely technological, in that it corresponds only to material chains of causality that go from the taking of information to decision-making. On the other hand, machines have no moral autonomy, because even if they do confuse and mislead us in the process of making decisions, they do not have a will of their own and remain subjugated to the objectives that we have assigned to them.


          About the Author: 


          French computer scientist Jean-Gabriel Ganascia is a professor at Sorbonne University, Paris. He is also a researcher at LIP6, the computer science laboratory at the Sorbonne, a fellow of the European Association for Artificial Intelligence, a member of the Institut Universitaire de France and chairman of the ethics committee of the National Centre for Scientific Research (CNRS), Paris. His current research interests include machine learning, symbolic data fusion, computational ethics, computer ethics and digital humanities.


          This article was published in The UNESCO Courier (2018-3), a magazine first published in 1948. It aims to promote UNESCO’s ideals, provide a platform for dialogue between cultures, and offer a forum for international debate. Available online since March 2006, the UNESCO Courier serves readers around the world in the six official languages of the Organization (Arabic, Chinese, English, French, Russian and Spanish), and also in Portuguese, Esperanto and Sardinian. A limited number of issues are also produced in print.

