
10 Dec 18
Pew Research Center: Internet, Science & Tech
The expert predictions reported here about the impact of the internet between 2018 and 2030 came in response to questions asked by Pew Research Center and Elon University’s Imagining the Internet Center in an online canvassing conducted between July 4, 2018, and Aug. 6, 2018. This is the 10th “Future of the Internet” study the two organizations have conducted together. For this project, we invited more than 10,000 experts and members of the interested public to share their opinions on the likely future of the internet, and 985 responded to at least one of the questions we asked. This report covers only the answers to our questions about AI and the future of humans. We also asked respondents to answer a series of questions tied to the 50th anniversary of the ARPANET/internet; additional reports tied to those responses will be released in 2019, the anniversary year.

Specifically related to artificial intelligence, the participants in the nonscientific canvassing were asked: “Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

The answers of the 979 respondents include: 63% who said most people will be better off; 37% who said most people will not be better off; 25 respondents chose not to select either option.

They were also asked: “Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030. Please consider giving an example of how a typical human-machine interaction will look and feel in a specific area, for instance, in the workplace, in family life, in a health care setting or in a learning environment. Why? What is your hope or fear? What actions might be taken to assure the best future?”

The web-based instrument was first sent directly to a list of targeted experts identified and accumulated by Pew Research Center and Elon University during previous “Future of the Internet” studies, as well as those identified in an earlier study of people who made predictions about the likely future of the internet between 1990 and 1995. Additional experts with proven interest in this particular research topic were also added to the list.
Among those invited were artificial intelligence researchers, developers and business leaders from leading global organizations, including, to name a few, Oxford, Cambridge, MIT, Stanford and Carnegie Mellon universities, Google, Microsoft, Facebook, Amazon, Kernel, Kyndi, BT and Cloudflare; leaders active in global internet governance and internet research activities, such as the Internet Engineering Task Force (IETF), Internet Corporation for Assigned Names and Numbers (ICANN), Internet Society (ISOC), International Telecommunication Union (ITU), Association of Internet Researchers (AoIR), and the Organization for Economic Cooperation and Development (OECD). We also invited a large number of professionals and policy people working in government, including the National Science Foundation, Federal Communications Commission, U.S. military and European Union; think tanks and interest networks (for instance, those that include professionals and academics in anthropology, sociology, psychology, law, political science and communications); engineering/computer science and business/entrepreneurship faculty, graduate students and postgraduate researchers who have published work tied to these topics; plus many who are active in civil society organizations such as the Association for Progressive Communications (APC), Electronic Privacy Information Center (EPIC), Electronic Frontier Foundation (EFF) and Access Now; and those affiliated with newly emerging nonprofits and other research units. Invitees were encouraged to share the survey link with others they believed would have an interest in participating; thus, there was a small “snowball” effect as a small percentage of these invitees invited others to weigh in.

Since the data are based on a nonrandom sample, the results are not projectable to any population other than the individuals expressing their points of view in this sample. The respondents’ remarks reflect their personal positions and are not the positions of their employers; the descriptions of their leadership roles help identify their background and the locus of their expertise. About half of the expert respondents elected to remain anonymous. Because people’s level of expertise is an important element of their participation in the conversation, anonymous respondents were given the opportunity to share a description of their internet expertise or background, and this was noted where relevant in this report.

Some 519 respondents answered the demographic questions on the canvassing. About 70% identified themselves as being based in North America, while 30% hailed from other corners of the world.
When asked about their “primary area of internet interest,” 33% identified themselves as professors/teachers; 17% as research scientists; 13% as futurists or consultants; 8% as technology developers or administrators; 5% as entrepreneurs or business leaders; 5% as advocates or activist users; 4% as pioneers or originators; 1% as legislators, politicians or lawyers; and an additional 13% specified their primary area of interest as “other.”

Following is a list of some of the key respondents in this canvassing: Walid Al-Saqaf, senior lecturer at Södertörn University, Sweden, and member of the board of trustees of the Internet Society (ISOC); Aneesh Aneesh, author of “Global Labor: Algocratic Modes of Organization”; Kostas Alexandridis, author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems”; Micah Altman, director of research and head scientist for the program on information science at MIT; Geoff Arnold, CTO for the Verizon Smart Communities organization; Rob Atkinson, president of the Information Technology and Innovation Foundation; Collin Baker, senior AI researcher at the International Computer Science Institute at the University of California, Berkeley; Brian Behlendorf, executive director of the Hyperledger project at The Linux Foundation; Nathaniel Borenstein, chief scientist at Mimecast; danah boyd, founder and president of the Data & Society Research Institute, and principal researcher at Microsoft; Stowe Boyd, founder and managing director at Work Futures; Henry E. Brady, dean, Goldman School of Public Policy, University of California, Berkeley; Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future”; Jamais Cascio, distinguished fellow at the Institute for the Future; Vint Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google; Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp.; Joël Colloc, professor at Université du Havre Normandy University and author of “Ethics of Autonomous Information Systems”; Steve Crocker, CEO and co-founder of Shinkuro Inc. and Internet Hall of Fame member; Kenneth Cukier, author and senior editor at The Economist; Wout de Natris, internet cybercrime and security consultant; Eileen Donahoe, executive director of the Global Digital Policy Incubator at Stanford University; Judith Donath, of Harvard University’s Berkman Klein Center for Internet & Society; William Dutton, Oxford Martin Fellow at the Global Cyber Security Capacity Centre; Robert Epstein, a senior research psychologist and founding director of the Loebner Prize Competition in Artificial Intelligence; Susan Etlinger, an industry analyst for Altimeter Group; Jean-Daniel Fekete, researcher in information visualization, visual analytics and human-computer interaction at INRIA, France; Seth Finkelstein, consulting programmer and EFF Pioneer Award winner; Charlie Firestone, executive director of the Aspen Institute’s communications and society program; Bob Frankston, internet pioneer and software innovator; Divina Frau-Meigs, UNESCO chair for sustainable digital development; Richard Forno, of the Center for Cybersecurity at the University of Maryland-Baltimore County; Oscar Gandy, professor emeritus of communication at the University of Pennsylvania; Charles Geiger, head of the executive secretariat for the UN’s World Summit on the Information Society; Ashok Goel, director of the Human-Centered Computing Ph.D.
Program at Georgia Tech; Ken Goldberg, distinguished chair in engineering, and founding member, Berkeley AI Research Lab; Marina Gorbis, executive director of the Institute for the Future; Shigeki Goto, Asia-Pacific internet pioneer and Internet Hall of Fame member; Theodore Gordon, futurist and co-founder of the Millennium Project; Kenneth Grady, futurist and founding author of The Algorithmic Society blog; Sam Gregory, director of WITNESS and digital human rights activist; Wendy Hall, executive director of the Web Science Institute; John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence; Marek Havrda, director at NEOPAS and strategic adviser for the GoodAI project; Jim Hendler, director of the Rensselaer Polytechnic Institute for Data Exploration and Application; Perry Hewitt, a marketing, content and technology executive; Brock Hinzmann, a partner in the Business Futures Network who worked for 40 years as a futures researcher at SRI International; Bernie Hogan, senior research fellow, Oxford Internet Institute; Barry Hughes, senior scientist at the Center for International Futures, University of Denver; Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism; Bryan Johnson, founder and CEO of Kernel (developer of advanced neural interfaces) and OS Fund; Anthony Judge, editor of the Encyclopedia of World Problems and Human Potential; James Kadtke, expert on converging technologies at the Institute for National Strategic Studies, U.S. National Defense University; Sonia Katyal, co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors; Frank Kaufmann, founder and director of the Values in Knowledge Foundation; Fiona Kerr, professor of neural systems and complexity at the University of Adelaide; Annalie Killian, futurist and vice president at Sparks & Honey; Andreas Kirsch, fellow at Newspeak House, formerly with Google and DeepMind in Zurich and London; Michael Kleeman, a senior fellow at the University of California, San Diego and board member at the Institute for the Future; Leonard Kleinrock, Internet Hall of Fame member and professor of computer science at the University of California, Los Angeles; Bart Knijnenburg, researcher on decision-making and recommender systems at Clemson University; Gary L. Kreps, distinguished professor and director of the Center for Health and Risk Communication at George Mason University; Larry Lannom, internet pioneer and vice president at the Corporation for National Research Initiatives (CNRI); Peter Levine, professor and associate dean for research at Tufts University’s Tisch College of Civic Life; John Markoff, fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford University; Matt Mason, roboticist and former director of the Robotics Institute at Carnegie Mellon University; Craig J. Mathias, principal for the Farpoint Group; Giacomo Mazzone, head of institutional relations at the European Broadcasting Union; Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale, previously deputy CTO of the U.S. and global public policy lead for Google; Panagiotis T.
Metaxas, author of “Technology, Propaganda and the Limits of Human Intellect” and professor of computer science, Wellesley College; Robert Metcalfe, co-inventor of Ethernet, founder of 3Com and Internet Hall of Fame member; Jerry Michalski, founder of the Relationship Economy eXpedition (REX); Steven Miller, vice provost and professor of information systems at Singapore Management University; Mario Morino, chair of the Morino Institute and co-founder of Venture Philanthropy Partners; Monica Murero, director of the E-Life International Institute, Italy; Grace Mutung’u, co-leader of the Kenya ICT Action Network; Martijn van Otterlo, author of “Gatekeeping Algorithms with Human Ethical Bias,” Tilburg University, Netherlands; Ian Peter, internet pioneer and advocate and co-founder of the Association for Progressive Communications (APC); Justin Reich, executive director of the MIT Teaching Systems Lab; Peter Reiner, professor and co-founder of the National Core for Neuroethics at the University of British Columbia; Lawrence Roberts, designer and manager of ARPANET (the precursor to the global internet) and Internet Hall of Fame member; Michael Roberts, Internet Hall of Fame member and first president and CEO of ICANN; Marc Rotenberg, executive director of EPIC; Douglas Rushkoff, writer, documentarian, and lecturer who focuses on human autonomy in a digital age; David Sarokin, author of “Missed Information: Better Information for Building a Wealthier, More Sustainable Future”; Thomas Schneider, vice-director at the Federal Office of Communications (OFCOM) in Switzerland; L. Schomaker, professor at the University of Groningen and scientific director of the Artificial Intelligence and Cognitive Engineering (ALICE) research institute; Ben Shneiderman, distinguished professor and founder of the Human Computer Interaction Lab at the University of Maryland; Dan Schultz, senior creative technologist at Internet Archive; Henning Schulzrinne, Internet Hall of Fame member and professor at Columbia University; Evan Selinger, professor of philosophy at Rochester Institute of Technology; Wendy Seltzer, strategy lead and counsel at the World Wide Web Consortium; Greg Shannon, chief scientist for the CERT Division at Carnegie Mellon University’s Software Engineering Institute; Daniel Siewiorek, professor with the Human-Computer Interaction Institute at Carnegie Mellon University; Mark Surman, executive director of the Mozilla Foundation; Brad Templeton, chair emeritus for the Electronic Frontier Foundation; Baratunde Thurston, futurist and former director of digital at The Onion; Sherry Turkle, MIT professor and author of “Alone Together”; Joseph Turow, professor of communication at the University of Pennsylvania; Stuart A. Umpleby, professor emeritus at George Washington University; Karl M. van Meter, author of “Computational Social Science in the Era of Big Data”; Michael Veale, co-author of “Fairness and Accountability Designs Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making”; Amy Webb, futurist, professor and founder of the Future Today Institute; David Wells, chief financial officer at Netflix; David Weinberger, senior researcher at Harvard University’s Berkman Klein Center for Internet & Society; Paul Werbos, former program director at the U.S. 
National Science Foundation; Betsy Williams, Center for Digital Society and Data Studies at the University of Arizona; John Willinsky, professor and director of the Public Knowledge Project at Stanford Graduate School of Education; Yvette Wohn, director of the Social Interaction Lab at the New Jersey Institute of Technology and expert on human-computer interaction; Andrew Wycoff, the director of OECD’s directorate for science, technology and innovation; Cliff Zukin, professor of public policy and political science at the School for Planning and Public Policy and the Eagleton Institute of Politics, Rutgers University.

A selection of the institutions at which some of the respondents work or with which they are affiliated: Abt Associates; Access Now; Aeon; Allen Institute for Artificial Intelligence; Alpine Technology Group; Altimeter Group; American Institute for Behavioral Research and Technology; American Library Association; Antelope Consulting; Anticipatory Futures Group; Arizona State University; Artificial Intelligence Research Institute, Universitat Autònoma de Barcelona; Aspen Institute; AT&T; Australian National University; Bad Idea Factory; Bar-Ilan University, Israel; Bloomberg Businessweek; Bogazici University, Turkey; Brookings Institution; BT Group; Business Futures Network; California Institute of Technology; Carnegie Mellon University; Center for Advanced Study in the Behavioral Sciences, Stanford University; Centre for Policy Modelling, Manchester Metropolitan University; Centre National de la Recherche Scientifique, France; Cisco Systems; Clemson University; Cloudflare; Columbia University; Comcast; Constellation Research; Cornell University; Corporation for National Research Initiatives; Council of Europe; Agency for Electronic Government and Information Society in Uruguay; Electronic Frontiers Australia; Electronic Frontier Foundation; Emergent Research; ENIAC Programmers Project; Eurac Research, Italy; FSA Technologies; Farpoint Group; Foresight Alliance; Future of Privacy Forum; Future Today Institute; Futurism.com; Gartner; General Electric; Georgia Tech; Ginkgo Bioworks; Global Forum for Media Development; Google; Harvard University; Hokkaido University, Japan; IBM; Internet Corporation for Assigned Names and Numbers (ICANN); Ignite Social Media; Information Technology and Innovation Foundation; Institute for Defense Analyses; Institute for the Future; Instituto Superior Técnico, Portugal; Institute for Ethics and Emerging Technologies; Internet Engineering Task Force (IETF); International Academy for Systems and Cybernetic Sciences; Internet Society; Institute for Communication & Leadership, Lucerne, Switzerland; Jet Propulsion Lab; Johns Hopkins University; Kansai University, Japan; Institute for Systems and Robotics, University of Lisbon; Institute of Electrical and Electronics Engineers (IEEE); Keio University, Japan; Kernel; Kyndi; Knowledge and Digital Culture Foundation, Mexico; KPMG; Leading Futurists; LeTourneau University; The Linux Foundation; Los Alamos National Laboratory; Machine Intelligence Research Institute; Massachusetts Institute of Technology; Maverick Technologies; McKinsey & Company; Media Psychology Research Center; Microsoft; Millennium Project; Monster Worldwide; Mozilla; Nanyang Technological University, Singapore; National Chengchi University, Taiwan; National Institute of Mental Health; NetLab; The New School; New York University; Netflix; NLnet Foundation; NORC at the University of Chicago; Novartis, Switzerland; Organization for Economic Cooperation and
Development (OECD); Ontario College of Art and Design Strategic Foresight and Innovation; Open the Future; Open University of Israel; Oracle; O’Reilly Media; Global Cyber Security Capacity Center, Oxford University; Oxford Internet Institute; Packet Clearing House; People-Centered Internet; Perimeter Institute for Theoretical Physics; Politecnico di Milano; Princeton University; Privacy International; Purdue University; Queen Mary University of London; Quinnovation; RAND; Research ICT Africa; Rochester Institute of Technology; Rose-Hulman Institute of Technology; Russell Sage Foundation; Salesforce; SRI International; Sciteb, London; Shinkuro; Significance Systems; Singapore Management University; Sir Syed University of Engineering and Technology, Pakistan; SLAC National Accelerator Laboratory; Södertörn University, Sweden; Social Science Research Council; University of Paris III: Sorbonne Nouvelle; South China University of Technology; Stanford University; Straits Knowledge; Team Human; The Logic; Technische Universität Kaiserslautern, Germany; Tecnológico de Monterrey, Mexico; The Crucible; United Nations; University of California, Berkeley; University of California, Los Angeles; University of California, San Diego; University College London; University of Denver Pardee Center for International Futures; Universitat Oberta de Catalunya; Universidade NOVA de Lisboa, Portugal; the Universities of Alabama, Arizona, Delaware, Florida, Maryland, Michigan, Minnesota, Pennsylvania, Southern California, Utah and Vermont; the Universities of Calcutta, Cambridge, Cologne, Cyprus, Edinburgh, Granada, Groningen, Liverpool, Otago, Pavia, Salford and Waterloo; UNESCO; USENIX Association; U.S. Department of Energy; U.S. Naval Postgraduate School; U.S. Special Operations Command SOFWERX; Telecommunications and Radiocommunications Regulator of Vanuatu; Virginia Tech; Vision & Logic; Vizalytics; World Wide Web Foundation; Wellville; Wikimedia; Witness; Yale Law School Information Society Project.

Complete sets of credited and anonymous responses can be found here:
http://www.elon.edu/e-web/imagining/surveys/2018_survey/AI_and_the_Future_of_Humans_credit.xhtml
http://www.elon.edu/e-web/imagining/surveys/2018_survey/AI_and_the_Future_of_Humans_anon.xhtml
A number of participants in this canvassing offered solutions to the worrisome potential future spawned by AI. Among them: 1) improving collaboration across borders and stakeholder groups; 2) developing policies to assure that development of AI will be directed at augmenting humans and the common good; and 3) shifting the priorities of economic, political and education systems to empower individuals to stay ahead in the “race with the robots.” A number of respondents sketched out overall aspirations:

Andrew Wycoff, the director of OECD’s directorate for science, technology and innovation, and Karine Perset, an economist in OECD’s digital economy policy division, commented, “Twelve years from now, we will benefit from radically improved accuracy and efficiency of decisions and predictions across all sectors. Machine learning systems will actively support humans throughout their work and play. This support will be unseen but pervasive – like electricity. As machines’ ability to sense, learn, interact naturally and act autonomously increases, they will blur the distinction between the physical and the digital world. AI systems will interconnect and work together to predict and adapt to our human needs and emotions. The growing consensus that AI should benefit society at large leads to calls to facilitate the adoption of AI systems to promote innovation and growth, help address global challenges, and boost jobs and skills development, while at the same time establishing appropriate safeguards to ensure these systems are transparent and explainable, and respect human rights, democracy, culture, nondiscrimination, privacy and control, safety, and security. Given the inherently global nature of our networks and applications that run across them, we need to improve collaboration across countries and stakeholder groups to move toward common understanding and coherent approaches to key opportunities and issues presented by AI. This is not too different from the post-war discussion on nuclear power. We should also tread carefully toward Artificial General Intelligence and avoid current assumptions on the upper limits of future AI capabilities.”

Wendy Hall, professor of computer science at the University of Southampton and executive director of the Web Science Institute, said, “By 2030 I believe that human-machine/AI collaboration will be empowering for human beings overall. Many jobs will have gone, but many new jobs will have been created, and machines/AI should be helping us do things more effectively and efficiently both at home and at work. It is a leap of faith to think that by 2030 we will have learnt to build AI in a responsible way and we will have learnt how to regulate the AI and robotics industries in a way that is good for humanity.
We may not have all the answers by 2030, but we need to be on the right track by then.”

Ian O’Byrne, an assistant professor focusing on literacy and technology at the College of Charleston, said, “I believe in human-machine/AI collaboration, but the challenge is whether humans can adapt our practices to these new opportunities.”

Arthur Bushkin, an IT pioneer who worked with the precursors to the Advanced Research Projects Agency Network (ARPANET) and Verizon, wrote, “The principal issue will be society’s collective ability to understand, manage and respond to the implications and consequences of the technology.”

Daniel Obam, information and communications technology policy advisor, responded, “As we develop AI, the issue of ethical behaviour is paramount. AI will allow authorities to analyse and allocate resources where there is the greatest need. AI will also change the way we work and travel. … Digital assistants that mine and analyse data will help professionals in making concise decisions in health care, manufacturing and agriculture, among others. Smart devices and virtual reality will enable humans to interact with and learn from historical or scientific issues in a more clear manner. Using AI, authorities will be able to prevent crime before it happens. Cybersecurity needs to be at the forefront to prevent unscrupulous individuals from using AI to perpetrate harm or evil on the human race.”

Ryan Sweeney, director of analytics at Ignite Social Media, commented, “Our technology continues to evolve at a growing rate, but our society, culture and economy are not as quick to adapt. We’ll have to be careful that the benefits of AI for some do not further divide those who might not be able to afford the technology. What will that mean for our culture as more jobs are automated? We will need to consider the impact on the current class divide.”

Susan Mernit, executive director of The Crucible and co-founder and board member of Hack the Hood, responded, “If AI is in the hands of people who do not care about equity and inclusion, it will be yet another tool to maximize profit for a few.”

The next three sections of this report focus on solutions most often mentioned by respondents to this canvassing.

Improve human collaboration across borders and stakeholder groups

A number of these experts said ways must be found for people of the world to come to a common understanding of the evolving concerns over AI and digital life and to reach agreement in order to create cohesive approaches to tackling AI’s challenges.

Danil Mikhailov, head of data and innovation for Wellcome Trust, responded, “I see a positive future of human/AI interaction in 2030. In my area, health, there is tremendous potential in the confluence of advances in big data analysis and genomics to create personalised medicine and improve diagnosis, treatment and research. Although I am optimistic about human capacity for adaptation, learning, and evolution, technological innovation will not always proceed smoothly. In this we can learn from previous technological revolutions. For example, [Bank of England chief economist] Andy Haldane rightly pointed out that the original ‘luddites’ in the 19th century had a justified grievance. They suffered severe job losses, and it took the span of a generation for enough jobs to be created to overtake the ones lost.
It is a reminder that the introduction of new technologies benefits people asymmetrically, with some suffering while others benefit. To realise the opportunities of the future we need to acknowledge this and prepare sufficient safety nets, such as well-funded adult education initiatives, to name one example. It’s also important to have an honest dialogue between the experts, the media and the public about the use of our personal data for social-good projects, like health care, taking in both the risks of acting – such as effects on privacy – and the opportunity costs of not acting. It is a fact that lives are lost currently in health systems across the world that could be saved even with today’s technology, let alone that of 2030.”

Edson Prestes, a professor and director of robotics at the Federal University of Rio Grande do Sul, responded, “We must understand that all domains (technological or not) have two sides: a good and a bad one. To avoid the bad one we need to create and promote the culture of AI/Robotics for good. We need to stimulate people to empathize toward others. We need to think about potential issues, even if they have small probability to happen. We need to be futurists, foreseeing potential negative events and how to circumvent them before they happen. We need to create regulations/laws (at national and international levels) to handle globally harmful situations for humans, other living beings and the environment. Applying empathy, we should seriously think about ourselves and others – if the technology will be useful for us and others and if it will not cause any harm. We cannot develop solutions without considering people and the ecosystem as the central component of development. If so, the pervasiveness of AI/robotics in the future will diminish any negative impact and create a huge synergy among people and environment, improving people’s daily lives in all domains while achieving environment sustainability.”

Adam Nelson, a software developer for one of the “big five” global technology companies, said, “Human-machine/AI collaboration will be extremely powerful, but humans will still control intent. If human governance isn’t improved, AI will merely make the world more efficient. But the goals won’t be human welfare. They’ll be wealth aggregation for those in power.”

Wendy Seltzer, strategy lead and counsel at the World Wide Web Consortium, commented, “I’m mildly optimistic that we will have devised better techno-social governance mechanisms, such that if AI is not improving the lives of humans, we will restrict its uses.”

Jen Myronuk, a respondent who provided no identifying details, said, “The optimist’s view includes establishing and implementing a new type of ISO standard – ‘encoded human rights’ – as a functional data set alongside exponential and advancing technologies. Global human rights and human-machine/AI technology can and must scale together.
If applied as an extension of the human experience, human-machine/AI collaboration will revolutionize our understanding of the world around us.”

Fiona Kerr, industry professor of neural and systems complexity at the University of Adelaide, commented, “The answer depends very much on what we decide to do regarding the large questions around ensuring equality of improved global health; by agreeing on what productivity and worth now look like, partly supported by the global wage; through fair redistribution of technology profits to invest in both international and national social capital; through robust discussion on the role of policy in rewarding technologists and businesses to build quality partnerships between humans and AI; through the growth of understanding in the neurophysiological outcomes of human-human and human-technological interaction, which allows us to best decide what not to technologise, when a human is more effective, and how to ensure we maximise the wonders of technology as an enabler of a human-centric future.”

Benjamin Kuipers, a professor of computer science at the University of Michigan, wrote, “We face several critical choices between positive and negative futures. … Advancing technology will provide vastly more resources; the key decision is whether those resources will be applied for the good of humanity as a whole or if they will be increasingly held by a small elite. Advancing technology will vastly increase opportunities for communication and surveillance. The question is whether we will find ways to increase trust and the possibilities for productive cooperation among people or whether individuals striving for power will try to dominate by decreasing trust and cooperation. In the medium term, increasing technology will provide more powerful tools for human, corporate or even robot actors in society. The actual problems will be about how members of a society interact with each other. In a positive scenario, we will interact with conversational AIs for many different purposes and even when the AI belongs to a corporation we will be able to trust that it takes what in economics is called a ‘fiduciary’ stance toward each of us. That is, the information we provide must be used primarily for our individual benefit. Although we know, and are explicitly told, that our aggregated information is valuable to the corporation, we can trust that it will not be used for our manipulation or our disadvantage.”

Denise Garcia, an associate professor of political science and international affairs at Northeastern University, said, “Humanity will come together to cooperate.”

Charles Geiger, head of the executive secretariat for the UN’s World Summit on the Information Society, commented, “As long as we have a democratic system and a free press, we may counterbalance the possible threats of AI.”

Warren Yoder, longtime director of the Public Policy Center of Mississippi, now an instructor at Mississippi College, optimistically responded, “Human/AI collaborations will … augment our human abilities and increase the material well-being of humanity. At the same time the concomitant increase in the levels of education and health will allow us to develop new social philosophies and rework our polities to transform human well-being.
AI increases the disruption of the old social order, making the new transformation both necessary and more likely, though not guaranteed.”

Wangari Kabiru, author of the MitandaoAfrika blog, based in Nairobi, Kenya, commented, “In 2030, advancing AI and tech will not leave most people better off than they are today, because our global digital mission is not strong enough and not principled enough to assure that ‘no, not one is left behind’ – perhaps intentionally. The immense positive-impact potential for enabling people to achieve more in nearly every area of life – the full benefits of human-machine/AI collaboration can only be experienced when academia, civil society and other institutions are vibrant, enterprise is human-values-based, and governments and national constitutions and global agreements place humanity first. … Engineering should serve humanity and never should humanity be made to serve the exploits of engineering. More people MUST be creators of the future of LIFE – the future of how they live, future of how they work, future of how their relationships interact and overall how they experience life. Beyond the coexistence of human-machine, this creates synergy.”

A professor expert in AI connected to a major global technology company’s projects in AI development wrote, “Precision democracy will emerge from precision education, to incrementally support the best decisions we can make for our planet and our species. The future is about sustaining our planet. As with the current development of precision health as the path from data to wellness, so too will artificial intelligence improve the impact of human collaboration and decision-making in sustaining our planet.”

Some respondents argued that individuals must do better at taking a more active role in understanding and implementing the decision-making options available to them in these complex, code-dependent systems.

Kristin Jenkins, executive director of BioQUEST Curriculum Consortium, said, “Like all tools the benefits and pitfalls of AI will depend on how we use it. A growing concern is the collection and potential uses of data about people’s day-to-day lives. ‘Something’ always knows where we are, the layout of the house, what’s in the fridge and how much we slept. The convenience provided by these tools will override caution about data collection, so strong privacy protection must be legislated and culturally nurtured. We need to learn to be responsible for our personal data and aware of when and how it is collected and used.”

Peng Hwa Ang, professor of communications at Nanyang Technological University and author of “Ordering Chaos: Regulating the Internet,” commented, “AI is still in its infancy. A lot of it is rule-based and not demanding of true intelligence or learning. But even so, I find it useful. My car has lane-assistance. I find that it makes me a better driver. When AI is more full-fledged, it would make driving safer and faster. I am using AI for some work I am doing on sentiment analysis. I find that I am able to be more creative in asking questions to be investigated. I expect AI will compel greater creativity. Right now, the biggest fear of AI is that it is a black-box operation – yes, the factors chosen are good and accurate and useful, but no one knows why those criteria are chosen. We know the percentages of the factors, but we do not know the whys. Hopefully, by 2030, the box will be more transparent. That’s on the AI side.
On the human side, I hope human beings understand that true AI will make mistakes. If not, it is not real AI. This means that people have got to be ready to catch the mistakes that AI will make. It will be very good. But it will (still) not be foolproof.”

Bert Huang, an assistant professor in the department of computer science at Virginia Tech focused on machine learning, wrote, “AI will cause harm (and it has already caused harm), but its benefits will outweigh the harm it causes. That said, the [historical] pattern of technology being net positive depends on people seeking positive things to do with the technology, so efforts to guide research toward societal benefits will be important to ensure the best future.”

An anonymous respondent said, “We should ensure that values (local or global) and basic philosophical theories on ethics inform the development and implementation of AI systems.”

Develop policies to assure that development of AI will be directed at augmenting humans and the common good

Many experts who shared their insights in this study suggested there has to be an overall change in the development, regulation and certification of autonomous systems. They generally said the goal should be values-based, inclusive, decentralized networks “imbued with empathy” that help individuals assure that technology meets social and ethical responsibilities for the common good.

Susan Etlinger, an industry analyst for Altimeter Group and expert in data, analytics and digital strategy, commented, “In order for AI technologies to be truly transformative in a positive way, we need a set of ethical norms, standards and practical methodologies to ensure that we use AI responsibly and to the benefit of humanity. AI technologies have the potential to do so much good in the world: identify disease in people and populations, discover new medications and treatments, make daily tasks like driving simpler and safer, monitor and distribute energy more efficiently, and so many other things we haven’t yet imagined or been able to realize. And – like any tectonic shift – AI creates its own type of disruption. We’ve seen this with every major invention from the Gutenberg press to the invention of the semiconductor. But AI is different. Replication of some human capabilities using data and algorithms has ethical consequences. Algorithms aren’t neutral; they replicate and reinforce bias and misinformation. They can be opaque. And the technology and means to use them rests in the hands of a select few organizations, at least today.”

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “We could start with owning our own digital data and the data from our bodies, minds and behavior, and then follow by correcting our major tech companies’ incentives away from innovation for everyday convenience and toward radical human improvement. As an example of what tech could look like when aligned with radical human improvement, cognitive prosthetics will one day give warnings about biases – like how cars today have sensors letting you know when you drift off to sleep or if you make a lane change without a signal – and correct cognitive biases and warn an individual away from potential cognitive biases.
This could lead to better behaviors in school, home and work, and encourage people to make better health decisions.”

Marc Rotenberg, executive director of Electronic Privacy Information Center (EPIC), commented, “The challenge we face with the rise of AI is the growing opacity of processes and decision-making. The favorable outcomes we will ignore. The problematic outcomes we will not comprehend. That is why the greatest challenge ahead for AI accountability is AI transparency. We must ensure that we understand and can replicate the outcomes produced by machines. The alternative outcome is not sustainable.”

John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “While today people provide ‘consent’ for their data usage, most people don’t understand the depth and breadth of how their information is utilized by businesses and governments at large. Until every individual is provided with a sovereign identity attached to a personal data cloud they control, information won’t truly be shared – just tracked. By utilizing blockchain or similar technologies and adopting progressive ideals toward citizens and their data, as demonstrated by countries like Estonia, we can usher in genuine digital democracy in the age of the algorithm. The other issue underlying the ‘human-AI augmentation’ narrative rarely discussed is the economic underpinnings driving all technology manufacturing. Where exponential-growth shareholder models are prioritized, human and environmental well-being diminishes. Multiple reports from people like Joseph Stiglitz point out that while AI will greatly increase GDP in the coming decades, the benefits of these increases will favor the few versus the many. It’s only by adopting ‘Beyond GDP’ or triple-bottom-line metrics that ‘people, planet and profit’ will shape a holistic future between humans and AI.”

Greg Lloyd, president and co-founder at Traction Software, presented a future scenario: “By 2030 AIs will augment access and use of all personal and networked resources as highly skilled and trusted agents for almost every person – human or corporate. These agents will be bound to act in accordance with new laws and regulations that are fundamental elements of their construction much like Isaac Asimov’s ‘Three Laws of Robotics’ but with finer-grain ‘certifications’ for classes of activities that bind their behavior and responsibility for practices much like codes for medical, legal, accounting and engineering practice. Certified agents will be granted access to personal or corporate resources, and within those bounds will be able to converse, take direction, give advice and act like trusted servants, advisers or attorneys. Although these agents will ‘feel’ like intelligent and helpful beings, they will not have any true independent will or consciousness, and must not pretend to be human beings or act contrary to the laws and regulations that bind their behavior. Think Ariel and Prospero.”

Tracey P. Lauriault, assistant professor of critical media and big data at Carleton University’s School of Journalism and Communication, commented, “[What about] regulatory and policy interventions to protect citizens from potentially harmful outcomes, AI auditing, oversight, transparency and accountability?
Without some sort of principles of a systems-based framework to ensure that AI remains ethical and in the public interest, in a stable fashion, I must assume that AI will impede agency and could lead to decision-making that can be harmful, biased, inaccurate and not able to dynamically change with changing values. There needs to be some sort of accountability.”

Joël Colloc, professor at Université du Havre Normandy University and author of “Ethics of Autonomous Information Systems,” commented, “When AI supports human decisions as a decision-support system it can help humanity enhance life, health and well-being and supply improvements for humanity. See Marcus Flavius Quintilianus’s principles: Who is doing what, with what, why, how, when, where? Autonomous AI is power that can be used by powerful persons to control the people, put them in slavery. Applying the Quintilian principles to the role of AI … we should propose a code of ethics of AI to evaluate that each type of application is oriented toward the well-being of the user: 1) do not harm the user, 2) benefits go to the user, 3) do not misuse her/his freedom, identity and personal data, and 4) decree as unfair any clauses alienating the user’s independence or weakening his/her rights of control over privacy in use of the application. The sovereignty of the user of the system must remain total.”

Joseph Turow, professor of communication at the University of Pennsylvania, wrote, “Whether or not AI will improve society or harm it by 2030 will depend on the structures governing societies of the era. Broadly democratic societies with an emphasis on human rights might encourage regulations that push AI in directions that help all sectors of the nation. Authoritarian societies will, by contrast, set agendas for AI that further divide the elite from the rest and use technology to cultivate and reinforce the divisions. We see both tendencies today; the dystopian one has the upper hand especially in places with the largest populations. It is critical that people who care about future generations speak out when authoritarian tendencies of AI appear.”

Henry E. Brady, dean of the Goldman School of Public Policy at the University of California, Berkeley, wrote, “I believe that policy responses can be developed that will reduce biases and find a way to accommodate AI and robotics with human lives.”

Jennifer King, director of privacy at Stanford Law School’s Center for Internet and Society, said, “Unless we see a real effort to capture the power of AI for the public good, I do not see an overarching public benefit by 2030. The shift of AI research to the private sector means that AI will be developed to further consumption, rather than extend knowledge and public benefit.”

Gary Kreps, distinguished professor of communication and director of the Center for Health and Risk Communication at George Mason University, wrote, “The tremendous potential for AI to be used to engage and adapt information content and computer services to individual users can make computing increasingly helpful, engaging and relevant. However, to achieve these outcomes, AI needs to be programmed with the user in mind. For example, AI services should be user-driven, adaptive to individual users, easy to use, easy to understand and easy for users to control.
These AI systems need to be programmed to adapt to individual user requests, learning about user needs and preferences.”

Thomas Streeter, a professor of sociology at the University of Vermont, said, “The technology will not determine whether things are better or worse in 2030; social and political choices will.”

Paul Werbos, a former program director at the National Science Foundation who first described the process of training artificial neural networks through backpropagation of errors in 1974, said, “We are at a moment of choice. The outcome will depend a lot on the decisions of very powerful people who do not begin to know the consequences of the alternatives they face, or even what the substantive alternatives are.”

Divina Frau-Meigs, professor of media sociology at the University of Paris III: Sorbonne Nouvelle and UNESCO chair for sustainable digital development, responded, “The sooner the ethics of AI are aligned with human rights tenets the better.”

Juan Ortiz Freuler, a policy fellow at the World Wide Web Foundation, wrote, “We believe technology can and should empower people. If ‘the people’ will continue to have a substantive say on how society is run, then the state needs to increase its technical capabilities to ensure proper oversight of these companies. Tech in general and AI in particular will promote the advancement of humanity in every area by allowing processes to scale efficiently, reducing the costs and making more services available to more people (including quality health care, mobility, education, etc.). The open question is how these changes will affect power dynamics. To operate effectively, AI requires a broad set of infrastructure components, which are not equally distributed. These include data centers, computing power and big data. What is more concerning is that there are reasons to expect further concentration. On the one hand, data scales well: The upfront (fixed) costs of setting up a datacenter are large compared to the cost of keeping it running. Therefore, the cost of hosting each extra datum is marginally lower than the previous one. Data is the fuel of AI, and therefore whoever gets access to more data can develop more effective AI. On the other hand, AI creates efficiency gains by allowing companies to automate more processes, meaning whoever gets ahead can undercut competitors. This circle fuels concentration. As more of our lives are managed by technology there is a risk that whoever controls these technologies gets too much power. The benefits in terms of quality of life and the risks to people’s autonomy and control over politics are qualitatively different, and these cannot (and should not) be up for tradeoffs.”

Meryl Alper, an assistant professor of communication at Northeastern University and a faculty associate at Harvard University’s Berkman Klein Center for Internet and Society, wrote, “My fear is that AI tools will be used by a powerful few to further centralize resources and marginalize people. These tools, much like the internet itself, will allow people to do this ever more cheaply, quickly and in a far-reaching and easily replicable manner, with exponentially negative impacts on the environment.
Preventing this in its worst manifestations will require global industry regulation by government officials with hands-on experience in working with AI tools on the federal, state and local level, and transparent audits of government AI tools by grassroots groups of diverse (in every sense of the term) stakeholders.”

David Wilkins, instructor in computer science at the University of Oregon, responded, “AI must be able to explain the basis for its decisions.”

A top research director and technical fellow at a major global technology company said, “There is a huge opportunity to enhance folks’ lives via AI technologies. The positive uses of AI will dominate as they will be selected for their value to people. I trust the work by industry, academia and civil society to continue to play an important role in moderating the technology, such as pursuing understandings of the potential costly personal, social and societal influences of AI. I particularly trust the guidance coming from the long-term, ongoing One Hundred Year Study on AI and the efforts of the Partnership on AI.”

Peter Stone, professor of computer science at the University of Texas at Austin and chair of the first study panel of the One Hundred Year Study on Artificial Intelligence (AI100), responded, “As chronicled in detail in the AI100 report, I believe that there are both significant opportunities and significant challenges/risks when it comes to incorporating AI technologies into various aspects of everyday life. With carefully crafted industry-specific policies and responsible use, I believe that the potential benefits outweigh the risks. But the risks are not to be taken lightly.”

Anita Salem, systems research and design principal at SalemSystems, warned of a possible dystopian outcome: “Human-machine interaction will result in increasing precision and decreasing human relevance unless specific efforts are made to design in ‘humanness.’ For instance, AI in the medical field will aid more precise diagnosis, will increase surgical precision and will increase evidence-based analytics. If designed correctly, these systems will allow the humans to do what they do best – provide empathy, use experience-based intuition and utilize touch and connection as a source of healing. If human needs are left out of the design process, we’ll see a world where humans are increasingly irrelevant and more easily manipulated. We could see increasing under-employment leading to larger wage gaps, greater poverty and homelessness, and increasing political alienation. We’ll see fewer opportunities for meaningful work, which will result in increasing drug and mental health problems and the further erosion of the family support system. Without explicit efforts to humanize AI design, we’ll see a population that is needed for purchasing, but not creating. This population will need to be controlled and AI will provide the means for this control: law enforcement by drones, opinion manipulation by bots, cultural homogeny through synchronized messaging, election systems optimized from big data and a geopolitical system dominated by corporations that have benefited from increasing efficiency and lower operating costs.”

Chris Newman, principal engineer at Oracle, commented, “As it becomes more difficult for humans to understand how AI/tech works, it will become harder to resolve inevitable problems.
A better outcome is possible with a hard push by engineers and consumers toward elegance and simplicity (e.g., Steve-Jobs-era Apple).”

A research scientist based in North America wrote, “The wheels of legislation, which is a primary mechanism to ensure benefits are distributed throughout society, move slowly. While the benefits of AI/automation will accrue very quickly for the 1%, it will take longer for the rest of the populace to feel any benefits, and that’s ONLY if our representative leaders DELIBERATELY enact STRONG social and fiscal policy. For example, AI will save billions in labor costs – and also cut the bargaining power of labor in negotiations with capital. Any company using AI technologies should be heavily taxed, with that money going into strong social welfare programs like job retraining and federal jobs programs. For another example, any publicly funded AI research should be prevented from being privatized. The public ought to see the reward from its own investments. Don’t let AI follow the pattern of Big Pharma’s exploitation of the public-permitted Bayh-Dole Act.”

Ken Birman, a professor in the department of computer science at Cornell University, responded, “By 2030, I believe that our homes and offices will have evolved to support app-like functionality, much like the iPhone in my pocket. People will customize their living and working spaces, and different app suites will support different lifestyles or special needs. For example, think of a young couple with children, a group of students sharing a home or an elderly person who is somewhat frail. Each would need different forms of support. This ‘applications’ perspective is broad and very flexible. But we also need to ensure that privacy and security are strongly protected by the future environment. I do want my devices and apps linked on my behalf, but I don’t ever want to be continuously spied upon. I do think this is feasible, and as it occurs we will benefit in myriad ways.”

Martin Geddes, a consultant specializing in telecommunications strategies, said, “The unexpected impact of AI will be to automate many of our interactions with systems where we give consent and to enable a wider range of outcomes to be negotiated without our involvement. This requires a new presentation layer for the augmented reality metaverse, with a new ‘browser’ – the Guardian Avatar – that helps to protect our identity and our interests.”

Lindsey Andersen, an activist at the intersection of human rights and technology for Freedom House and Internews, now doing graduate research at Princeton University, commented, “Already, there is an overreliance on AI to make consequential decisions that affect people’s lives. We have rushed to use AI to decide everything, from what content we see on social media to assigning credit scores to determining how long a sentence a defendant should serve. While often well-intentioned, these uses of AI are rife with ethical and human rights issues, from perpetuating racial bias to violating our rights to privacy and free expression.
If we have not dealt with these problems through smart regulation, consumer/buyer education and establishment of norms across the AI industry, we could be looking at a vastly more unfair, polarized and surveilled world in 2030.”

Yeseul Kim, a designer for a major South Korean search firm, wrote, “The prosperity generated by and the benefits of AI will promote the quality of living for most people only when its ethical implications and social impacts are widely discussed and shared within human society, and only when pertinent regulations and legislation can be set up to mitigate the misconduct that can be brought about as a result of AI advancement. If these conditions are met, computers and machines can process data at unprecedented speed and at an unrivaled precision level, and this will improve the quality of life, especially in the medical and health care sectors. It has already been proven and widely shared among medical expert groups that doctors perform better in detecting diseases when they work with AI. Robotics for surgery is also progressing, and this will also benefit patients, as surgical robots can assist human surgeons, who inevitably face physical limits when they conduct surgeries.”

Mark Maben, a general manager at Seton Hall University, wrote, “The AI revolution is, sadly, likely to be dystopian. At present, governmental, educational, civic, religious and corporate institutions are ill-prepared to handle the massive economic and social disruption that will be caused by AI. I have no doubt that advances in AI will enhance human capacities and empower some individuals, but this will be more than offset by the fact that artificial intelligence and associated technological advances will mean far fewer jobs in the future. Sooner than most individuals and societies realize, AI and automation will eliminate the need for retail workers, truck drivers, lawyers, surgeons, factory workers and other professions. In order to ensure that the human spirit thrives in a world run and ruled by AI, we will need to change the current concept of work. That is an enormous task for a global economic system in which most social and economic benefits come from holding a traditional job. We are already seeing a decline in democratic institutions and a rise in authoritarianism due to economic inequality and the changing nature of work. If we do not start planning now for the day when AI results in complete disruption of employment, the strain is likely to result in political instability, violence and despair. This can be avoided by policies that provide for basic human needs and encourage a new definition of work, but the behavior to date by politicians, governments, corporations and economic elites gives me little confidence in their ability to lead us through this transition.”

Eduardo Vendrell, a computer science professor at the Polytechnic University of Valencia in Spain, responded, “These advances will have a noticeable impact on our privacy, since such applications are based on the information we generate through our use of different technologies. … It will be necessary to regulate access to that information, and its use, in a decisive way.”

Yoram Kalman, an associate professor at the Open University of Israel and member of The Center for Internet Research at the University of Haifa, wrote, “The main risk is when communication and analysis technologies are used to control others, to manipulate them, or to take advantage of them.
These risks are ever-present and can be mitigated through societal awareness and education, and through regulation that identifies entities that become very powerful thanks to a specific technology or technologies, and which use that power to further strengthen themselves. Such entities – be they commercial, political, national, military, religious or any other – have in the past tried and succeeded in leveraging technologies against the general societal good, and that is an ever-present risk of any powerful innovation. This risk should make us vigilant, but should not keep us from realizing one of the most basic human urges: the drive to constantly improve the human condition.”

Sam Gregory, director of WITNESS and digital human rights activist, responded, “We should assume all AI systems for surveillance and population control and manipulation will be disproportionately used and inadequately controlled by authoritarian and non-democratic governments. These governments and democratic governments will continue to pressure platforms to use AI to monitor for content, and this monitoring, in and of itself, will contribute to the data set for personalization and for surveillance and manipulation. To fight back against this dark future we need to get the right combination of attention to legislation and platform self-governance right now, and we need to think about media literacy to understand AI-generated synthetic media and targeting. We should also be cautious about how much we encourage the use of AI as a solution to managing content online and as a solution to, for example, managing hate speech.”

Jonathan Kolber, futurist, wrote, “My fear is that, by generating AIs that can learn new tasks faster and more reliably than people can, the future economy will have only evanescent opportunities for most people. My hope is that we will begin implementing a sustainable and viable universal basic income, and in particular Michael Haines’ MUBI proposal. (To my knowledge, the only such proposal that is sustainable and can be implemented in any country at any time.) I have offered a critique of alternatives. Given that people may no longer need to depend on their competitive earning power in 2030, AI will empower a far better world. If, however, we fail to implement a market-oriented universal basic income or something equally effective, vast multitudes will become unemployed and unemployable without means to support themselves. That is a recipe for societal disaster.”

Walid Al-Saqaf, senior lecturer at Södertörn University, member of the board of trustees of the Internet Society (ISOC) and vice president of the ISOC Blockchain Special Interest Group, commented, “The challenge is to ensure that the data used for AI procedures is reliable. This entails the need for strong cybersecurity and data integrity. The latter, I believe, can be tremendously enhanced by distributed ledger technologies such as blockchain. I foresee mostly positive results from AI so long as there are enough safeguards against the automated execution of tasks in areas that have ethical considerations, such as decisions with life-or-death implications. AI has a lot of potential. It should be used to add to, and not replace, human intellect and judgement.”

Danny O’Brien, international director for a nonprofit digital rights group, commented, “I’m generally optimistic about the ability of humans to direct technology for the benefit of themselves and others.
I anticipate human-machine collaboration to take place at an individual level, with tools and abilities that enhance our own judgment and actions, rather than this being a power restricted to a few actors. So, for instance, if we use facial-recognition or predictive tools, it will be under the control of an end-user, transparent and limited to personal use. This may require regulation, internal coding restraints or a balance being struck between user capabilities. But I’m hopeful we can get there.”

Fernando Barrio, director of the law program at the Universidad Nacional de Río Negro in Argentina, commented, “The interaction between humans and networked AI could lead to a better future for a big percentage of the population. In order to do so, efforts need to be directed not only at increasing AI development and capabilities but also at positive policies to increase the availability and inclusiveness of those technologies. The challenge is not technical; it is sociopolitical.”

Paul Jones, professor of information science at the University of North Carolina at Chapel Hill, responded, “AI as we know it in 2018 is just beginning to understand itself. Like HAL, it will have matured by 2030 into an understanding of its post-adolescent self and of its relationship to humans and to the world. But, also, humans will have matured in our relationship to AI. Like all adolescent relationships, there will have been risk-taking and regrets and hopefully reconciliation. Language was our first link to other intelligences, then books, then the internet – each a more intimate conversation than the one before. AI will become our link, our adviser and, to some extent, our wise and loving companion.”

Jean-Claude Heudin, a professor with expertise in AI and software engineering at the De Vinci Research Center at Pole Universitaire Leonard de Vinci in France, wrote, “Natural intelligence and artificial intelligence are complementary. We need all the intelligence possible for solving the problems yet to come. More intelligence is always better.”

Bryan Alexander, futurist and president of Bryan Alexander Consulting, responded, “I hope we will structure AI to enhance our creativity, to boost our learning, to expand our relationships worldwide, to make us physically safer and to remove some drudgery.”

But some have concerns that the setting of policy could do some damage. Scott Burleigh, software engineer and intergalactic internet pioneer, wrote, “Advances in technology itself, including AI, always increase our ability to change the circumstances of reality in ways that improve our lives. They also always introduce possible side effects that can make us worse off than we were before. Those effects are realized when the policies we devise for using the new technologies are unwise. I don’t worry about technology; I worry about stupid policy. I worry about it a lot, but I am guardedly optimistic; in most cases I think we eventually end up with tolerable policies.”

Jeff Jarvis, director of the Tow-Knight Center at City University of New York’s Craig Newmark School of Journalism, commented, “What worries me most is worry itself: An emerging moral panic that will cut off the benefits of this technology for fear of what could be done with it.
What I fear most is an effort to control not just technology and data but knowledge itself, prescribing what information can be used for before we know what those uses could be. I could substitute ‘book’ for ‘AI’ and the year 1485 (or maybe 1550) for 2030 in your question and it’d hold fairly true. Some thought it would be good, some bad; both end up right. We will figure this out. We always have. Sure, after the book there were wars and other profound disturbances. But in the end, humans figure out how to exploit technologies to their advantage and control them for their safety. I’d call that a law of society. The same will be true of AI. Some will misuse it, of course, and that is the time to identify limits to place on its use – not speculatively before. Many more will use it to find economic, societal, educational and cultural benefit, and we need to give them the freedom to do so.”

Some respondents said that no matter how society comes together to troubleshoot AI concerns, there will still be problems.

David Guston, professor of political science and co-director of the Consortium for Science, Policy & Outcomes at Arizona State University, said, “The question asked about ‘most people.’ Most people in the world live a life that is not well regarded by technology, technology developers and AI. I don’t see that changing much in the next dozen years.”

A longtime Silicon Valley communications professional who has worked at several of the top tech companies over the past few decades responded, “AI will continue to improve *if* quality human input is behind it. If so, better AI will support service industries at the top of the funnel, leaving humans to handle interpretation, decisions and applied knowledge. Medical data-gathering for earlier diagnostics comes to mind. Smarter job-search processes, environmental data collection for climate-change actions – these applications all come to mind.”

Hari Shanker Sharma, an expert in nanotechnology and neurobiology at Uppsala University in Sweden, said, “AI has not yet peaked, hence growth will continue, but evil also uses such developments. That will bring bigger dangers to mankind. The need will be to balance growth with safety; e.g., social media is good and bad. The ways to protect from evil mongers are not sufficient. Tracing an attacker/evil monger in a global village to control and punish is the need. AI will give birth to an artificial human being who could be an angel or a devil. Plan for countering evil at every development stage.”

A changemaker working for digital accessibility wrote, “There is no reason to assume some undefined force will be able to correct for or ameliorate the damage of human nature amplified with power-centralizing technologies. There is no indication that governments will be able to counterbalance power-centralization trends, as governments, too, take advantage of such market failures. The outward dressing of such interactions is probably the least important aspect of it.”

An information-science futurist commented, “I fear that powerful business interests will continue to put profits above all else, closing their eyes to the second- and third-order effects of their decisions. I fear that we do not have the political will to protect and promote the common interests of citizens and democracy. I fear that our technological tools are advancing more quickly than our ability to manage them wisely.
I have, however, recently spotted new job openings with titles like ‘Director of Research, Policy and Ethics in AI’ and ‘Architect, AI Ethical Practice’ at major software companies. There are reasons for hope.”

The following one-liners from anonymous respondents also tie into this theme:

An open-source technologist in the automotive industry wrote, “We’ll have to have independent AI systems with carefully controlled data access, clear governance and individuals’ right to be forgotten.”

A research professor of international affairs at a major university in Washington, D.C., responded, “We have to find a balance between regulations designed to encourage ethical nondiscriminatory use, transparency and innovation.”

A director for a major regional internet registry said, “The ability of government to properly regulate advanced technologies is not keeping up with the evolution of those technologies. This allows many developments to proceed without sufficient notice, analysis, vetting or regulation to protect the interests of citizens (Facebook being a prime example).”

A professor at a major Silicon-Valley-area university said, “If technological advances are not integrated into a vision of holistic, ecologically sustainable, politically equitable social visions, they will simply serve gated and locked communities.”

A member of the editorial board of an Association for Computing Machinery journal on autonomous and adaptive systems commented, “By developing an ethical AI, we can provide smarter services in daily life, such as collaborating objects providing on-demand, highly adaptable services in any environment supporting daily life activities.”

Other anonymous respondents commented:

“It is essential that policymakers focus on impending inequalities. The central question is for whom will life be better, and for whom will it be worse? Some people will benefit from AI, but many will not. For example, folks on the middle and lower end of the income scale will see their jobs disappear as human-machine/AI collaborations become lower-cost and more efficient. Though such changes could generate societal benefits, they should not be borne on the backs of middle- and low-income people.”

“Results will be determined by the capacity of political, criminal justice and military institutions to adapt to rapidly evolving technologies.”

“To assure the best future, we need to ramp up efforts in the areas of decentralizing data ownership, education and policy around transparency.”

“Most high-end AI know-how is and will be controlled by a few giant corporations unless government or a better version of the United Nations steps in to control and oversee them.”

“Political change will determine whether AI technologies will benefit most people or not. I am not optimistic due to the current growth of authoritarian regimes and the growing segment of the super-rich elite who derive disproportionate power over the direction of society from their economic dominance.”

“Mechanisms must be put in place to ensure that the benefits of AI do not accrue only to big companies and their shareholders. If current neo-liberal governance trends continue, the value-added of AI will be controlled by a few dominant players, so the benefits will not accrue to most people.
There is a need to balance efficiency with equity, which we have not been doing lately.”

Shift the priorities of economic, political and education systems to empower individuals to stay ahead in the ‘race with the robots’

A share of these experts suggest that policies, regulations or ethical and operational standards should shift corporate and government priorities to focus on the global advancement of humanity, rather than profits or nationalism. They urge that major organizations revamp their practices and make sure AI advances are aimed at human augmentation for all, regardless of economic class.

Evan Selinger, professor of philosophy at the Rochester Institute of Technology, commented, “In order for people, in general, to be better off as AI advances through 2030, a progressive political agenda – one rooted in the protection of civil liberties and human rights and also conscious of the dangers of widening social and economic inequalities – would have to play a stronger role in governance. In light of current events, it’s hard to be optimistic that such an agenda will have the resources necessary to keep pace with transformative uses of AI throughout ever-increasing aspects of society. To course-correct in time, it’s necessary for the general public to develop a deep appreciation of why leading ideologies concerning the market, prosperity and security are not in line with human flourishing.”

Nicholas Beale, leader of the strategy practice at Sciteb, an international strategy and search firm, commented, “All depends on how responsibly AI is applied. AI ‘done right’ will empower. But unless Western CEOs improve their ethics it won’t. I’m hoping for the best.”

Benjamin Shestakofsky, an assistant professor of sociology at the University of Pennsylvania specializing in digital technology’s impacts on work, said, “Policymakers should act to ensure that citizens have access to knowledge about the effects of AI systems that affect their life chances, and a voice in algorithmic governance. The answer to this question will depend on choices made by citizens, workers, organizational leaders and legislators across a broad range of social domains. For example, algorithmic hiring systems can be programmed to prioritize efficient outcomes for organizations or fair outcomes for workers. The profits produced by technological advancement can be broadly shared or can be captured by the shareholders of a small number of high-tech firms.”

Charles Zheng, a researcher into machine learning and AI with the National Institute of Mental Health, wrote, “To ensure the best future, politicians must be informed of the benefits and risks of AI and pass laws to regulate the industry and to encourage open AI research. My hope is that AI algorithms advance significantly in their ability to understand natural language, and also in their ability to model humans and understand human values. My fear is that the benefits of AI are restricted to the rich and powerful without being accessible to the general public.”

Mary Chayko, author of “Superconnected: The Internet, Digital Media, and Techno-Social Life,” said, “We will see regulatory oversight of AI geared toward the protection of those who use it. Having said that, people will need to remain educated as to AI’s impacts on them and to mobilize as needed to limit the power of companies and governments to intrude on their spaces, lives and civil rights.
It will take vigilance and hard work to accomplish this, but I feel strongly that we are up to the task.”

R “Ray” Wang, founder and principal analyst at Constellation Research, based in Silicon Valley, said, “We have not put the controls of AI in the hands of many. In fact, the experience in China has shown how this technology can be used to take away the freedoms and rights of the individual for the purposes of security, efficiency, expediency and whims of the state. On the commercial side, we also do not have any controls in play as to ethical AI. Five elements should be included in the design – transparency, explainability, reversibility, coachability and human-led processes.”

John Willinsky, professor and director of the Public Knowledge Project at Stanford Graduate School of Education, said, “Uses of AI that reduce human autonomy and freedom will need to be carefully weighed against the gains in other qualities of human life (e.g., driverless cars that improve traffic and increase safety). By 2030, deliberations over such matters will be critical to the functioning of ‘human-machine/AI collaboration.’ My hope, however, is that these deliberations are not framed as collaborations between what is human and what is AI but will be seen as the human use of yet another technology, with the wisdom of such use open to ongoing human consideration and intervention intent on advancing that sense of what is most humane about us.”

A professor of media studies at a U.S. university commented, “Technology will be a material expression of social policy. If that social policy is enacted through a justice-oriented democratic process, then it has a better chance of producing justice-oriented outcomes. If it is enacted solely by venture-funded corporations with no obligation to the public interest, most people in 2030 will likely be worse off.”

Gene Crick, director of the Metropolitan Austin Interactive Network and longtime community telecommunications expert, wrote, “To predict AI will benefit ‘most’ people is more hopeful than certain. … AI can benefit lives at work and home – if competing agendas can be balanced. Key support for this important goal could be technology professionals’ acceptance of, and commitment to, the social and ethical responsibilities of our work.”

Anthony Picciano, a professor of education in the City University of New York’s Interactive Technology and Pedagogy program, responded, “I am concerned that profit motives will lead some companies and individuals to develop AI applications that will threaten, not necessarily improve, our way of life. In the next 10 years we will see evolutionary progress in the development of artificial intelligence. After 2030, we will likely see revolutionary developments that will have significant ramifications on many aspects of human endeavor. We will need to develop checks on artificial intelligence.”

Bill Woodcock, executive director at Packet Clearing House, the research organization behind global network development, commented, “In short-term, pragmatic ways, learning algorithms will save people time by automating tasks like navigation, package delivery and shopping for staples. But that tactical win comes at a strategic loss as long as the primary application of AI is to extract more money from people, because that puts them in opposition to our interests as a species, helping to enrich a few people at the expense of everyone else.
In AI that exploits human psychological weaknesses to sell us things, we have for the first time created something that effectively predates our own species. That’s a fundamentally bad idea and requires regulation just as surely as would self-replicating biological weapons.”

Ethem Alpaydın, a professor of computer engineering at Bogazici University in Istanbul, responded, “AI will favor the developed countries that actually develop these technologies. AI will help find cures for various diseases and overall improve living conditions in various ways. For the developing countries, however, whose labor force is mostly unskilled and whose exports are largely low-tech, AI implies higher unemployment, lower income and more social unrest. The aim of AI in such countries should be to add skill to the labor force rather than supplant it. For example, automatic real-time translation systems (e.g., Google’s Babel fish) would allow people who don’t speak a foreign language to find work in the tourism industry.”

Joe Whittaker, a former professor of sciences and associate director of the NASA GESTAR program, now associate provost at Jackson State University, said, “Actions should be taken to make the internet universally available and accessible, and to provide the training and know-how for all users.”

John Paschoud, councilor for the London borough of Lewisham, said, “It is possible that advances in AI and networked information will benefit ‘most’ people, but this is highly dependent upon how those benefits are shared. … If traditional capitalist models of ‘ownership of the means of production’ prevail, then the benefits of automated production will be retained by the few who own, not the many who work. Similarly, models of housing, health care, etc., can be equitably distributed and can all be enhanced by technology.”

David Schlangen, a professor of applied computational linguistics at Bielefeld University in Germany, responded, “If the right regulations are put in place and ad-based revenue models can be controlled in such a way that they cannot be exploited by political interest groups, the potential for AI-based information search and decision support is enormous. That’s a big if, but I prefer to remain optimistic.”

Kate Carruthers, a chief data and analytics officer based in Australia, predicted, “Humans will increasingly interact with AI on a constant basis, and it will become hard to know where the boundaries are between the two. Just as kids now see their mobile phones as an extension of themselves, so too will human/AI integration be. I fear that the cause of democracy and freedom will be lost by 2030, so it might be a darker future. To avoid that, one thing we need to do is ensure the development of ethical standards for the development of AI and ensure that we deal with algorithmic bias. We need to build ethics into our development processes. Further, I assume that tracking and monitoring of people will be an accepted part of life and that there will be stronger regulation on privacy and data security. Every facet of life will be circumscribed by AI, and it will be part of the fabric of our lives.”

David Zubrow, associate director of empirical research at Carnegie Mellon University’s Software Engineering Institute, said, “How the advances are used demands wisdom, leadership and social norms and values that respect and focus on making the world better for all; education and health care will reach remote and underserved areas, for instance.
The fear is that control will be consolidated in the hands of a few who seek to exploit people, nature and technology for their own gain. I am hopeful that this will not happen.”

Francisco S. Melo, an associate professor of computer science at Instituto Superior Técnico in Lisbon, Portugal, responded, “I expect that AI technology will help render several services (in health, assisted living, etc.) more efficient and humane and, by making access to information more broadly available, help mitigate inequalities in society. However, in order for positive visions to become a reality, both AI researchers and the general population should be aware of the implications that such technology can have, particularly in how information is used and the ways by which it can be manipulated. In particular, AI researchers should strive for transparency in their work, in order to demystify AI and minimize the possibility of misuse; the general public, on the other hand, should strive to be educated in the responsible and informed use of technology.”

Kyung Sin Park, internet law expert and co-founder of Open Net Korea, responded, “AI consists of software and training data. Software is already being made available on an open-source basis. What will decide AI’s contribution to humanity will be whether data (used for training AI) will be equitably distributed. Data-protection laws and the open data movement will hopefully do the job of making more data available equally to all people. I imagine a future where people can access AI-driven diagnosis of symptoms, which will drastically reduce health care costs for all.”

Doug Schepers, chief technologist at Fizz Studio, said, “AI/ML, in applications and in autonomous devices and vehicles, will make some jobs obsolete, and the resulting unemployment will cause some economic instability that impacts society as a whole, but most individuals will be better off. The social impact of software and networked systems will get increasingly complex, so ameliorating that software problem with software agents may be the only way to decrease harm to human lives, but only if we can focus the goal of software on benefiting individuals and groups rather than companies or industries.”

Erik Huesca, president of the Knowledge and Digital Culture Foundation, based in Mexico City, said, “There is a concentration of places where specific AI is developed. It is a consequence of the capital investment that seeks to replace expensive professionals. Universities have to rethink what type of graduates to prepare, especially in areas of health, law and engineering, where the greatest impact is expected, since the labor displacement of doctors, engineers and lawyers is already a reality with the systems developed so far.”

Stephen Abram, principal at Lighthouse Consulting Inc., wrote, “I am concerned that individual agency is lost in AI and that appropriate safeguards should be in place around data collection as specified by the individual. I worry that context can be misconstrued by government agencies like ICE, the IRS, police, etc. A major conversation is needed throughout the period during which AI applications are developed, and these conversations need to be evergreen as innovation and creativity spark new developments.
Indeed, this should not be part of a political process but an academic, independent process guided by principles, not by economics and commercial entities.”

David Klann, consultant and software developer at Broadcast Tool & Die, responded, “AI and related technologies will continue to enhance peoples’ lives. I tend toward optimism; I instinctively believe there are enough activists who care about the ethics of AI that the technology will be put to use solving problems that humans cannot solve on their own. Take mapping, for instance. I recently learned about congestion problems caused by directions being optimized for individuals. People are now tweaking the algorithms to account for multiple people taking the ‘most efficient route’ that had become congested and was causing neighborhood disturbance due to the increased traffic. I believe people will construct AI algorithms to learn of and to ‘think ahead’ about such unintended consequences and to avoid them before they become problems. Of course, my fear is that money interests will continue to wield an overwhelming influence over AI and machine learning (ML). These can be mitigated through fully disclosed techniques, transparency and third-party oversight. These third parties may be government institutions or non-government organizations with the strength to ‘enforce’ ethical use of the technologies. Open-source code and open ML training data will contribute significantly to this mitigation.” (A toy sketch of the load-aware routing idea Klann describes appears at the end of this section.)

Andrian Kreye, a journalist and documentary filmmaker based in Germany, said, “If humanity is willing to learn from its mistakes with low-level AIs like social media algorithms, there might be a chance for AI to become an engine for equality and progress. Since most digital development is driven by venture capital, experience shows that automation and abuse will be the norm.”

Mai Sugimoto, an associate professor of sociology at Kansai University in Japan, responded, “AI could amplify one’s bias and prejudice. We have to make data unbiased before putting it into AI, but it’s not very easy.”

An anonymous respondent wrote, “There are clearly advances associated with AI, but the current global political climate gives no indication that technological advancement in any area will improve most lives in the future. We also need to think ecologically in terms of the interrelationship between technology and other social-change events. For example, medical technology has increased lifespans, but the current opioid crisis has taken many lives in the U.S. among certain demographics.”

A founder and president said, “The future of AI is more about the policies we choose and the projects we choose to fund. I think there will be large corporate interests in AI that serve nothing but profits and corporations’ interests. This is the force for the ‘bad.’ However, I also believe that most technologists want to do good, and that most people want to head in a direction toward the common good. In the end, I think this force will win out.”

A senior strategist in regulatory systems and economics for a top global telecommunications firm wrote, “If we do not strive to improve society, making the weakest better off, the whole system may collapse. So, AI had better serve to make life easier for everyone.”
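The congestion problem Klann mentions has a standard flavor of remedy: make each road’s effective cost grow with the traffic already routed onto it, so a route planner stops funneling every trip down the same street. The Python sketch below is a minimal, hypothetical illustration of that idea – a toy graph, an invented penalty factor (alpha) and naive one-trip-at-a-time assignment, not a description of any real navigation service.

import heapq

def shortest_path(graph, load, start, goal, alpha=0.5):
    # Dijkstra over an effective cost that grows with current load:
    # cost(edge) = base_cost * (1 + alpha * trips_already_on_edge)
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, base in graph.get(node, {}).items():
            nd = d + base * (1 + alpha * load.get((node, nxt), 0))
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))
    # Walk the predecessor map back from the goal to recover the path.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

def assign_routes(graph, trips, alpha=0.5):
    # Assign trips sequentially, charging each chosen edge to the load
    # map so that later trips see (and avoid) the accumulated traffic.
    load, routes = {}, []
    for start, goal in trips:
        path = shortest_path(graph, load, start, goal, alpha)
        for a, b in zip(path, path[1:]):
            load[(a, b)] = load.get((a, b), 0) + 1
        routes.append(path)
    return routes

# Toy network: a fast arterial A->B->D and a slower detour A->C->D.
graph = {"A": {"B": 4, "C": 5}, "B": {"D": 4}, "C": {"D": 5}}
for route in assign_routes(graph, [("A", "D")] * 6):
    print(route)

Run on this toy network, the assignments alternate between the arterial and the detour instead of piling all six trips onto one street. Real systems use far more sophisticated traffic-equilibrium models, but the load-aware cost term is the core of the tweak Klann describes.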
10 Dec 18
Pew Research Center: Internet, Science & Tech
A vehicle and person recognition system for use by law enforcement is demonstrated at last year’s GPU Technology Conference in Washington, D.C., which highlights new uses for artificial intelligence and deep learning. (Saul Loeb/AFP/Getty Images)

Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.

Specifically, participants were asked to consider the following: “Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today?
Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those that follow.

Sonia Katyal, co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”

Erik Brynjolfsson, director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future,” said, “AI and related technologies have already achieved superhuman performance in many areas, and there is little doubt that their capabilities will improve, probably very significantly, by 2030. … I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons. Neither outcome is inevitable, so the right question is not ‘What will happen?’ but ‘What will we choose to do?’ We need to work aggressively to make sure technology matches our values. This can and must be done at all levels, from government, to business, to academia, and to individual choices.”

Bryan Johnson, founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Marina Gorbis, executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes, [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions.
And this ambiguity and complexity are the essence of being human.”

Judith Donath, author of “The Social Machine, Designs for Living Online” and faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society, commented, “By 2030, most social situations will be facilitated by bots – intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry. We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages, and our online/AR appearance will be computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals? Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them – to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them. Like their forebears today, these will be sellers of goods who employ them to stimulate consumption and politicians who commission them to sway opinions.”

Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts, first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff: more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly.
The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd, a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as roboticists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of the present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov, founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or, using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which, in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time.
If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”

Batya Friedman, a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is, our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life, or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon, chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis, author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience.
Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy, emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not, be aware of).”

James Scofield O’Rourke, a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs, a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS.
This is the Orwellian nightmare realised.”

Mark Surman, executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio, media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments, we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.