[Part 1] What issues does AI pose for our lives?
[Part 2] How can we connect SF Prototyping with innovation?
[Part 3] What will the future relationship between social systems and AI be like?
How should we view artificial intelligence in the future?
My name is Yukinobu Maruyama, and I am the Head of Design in the Research & Development Group, Hitachi, Ltd. I will be the host of this conversation. Today's topic is artificial intelligence (AI), which continues to advance at a rapid pace. Originally, "AI" referred to the research field of artificial intelligence, but recently it has come to be widely recognized as something real in the world around us. Today, we would like to propose some tips for understanding AI.
Today's guests are Dr. Osawa Hirotaka, co-author of "SF Prototyping: A New Strategy for Creating Innovation from Science Fiction," published in June 2021, and Itaru Nishizawa of the Research & Development Group, Hitachi, Ltd.
I am an Assistant Professor at the University of Tsukuba, in the Faculty of Engineering, Information and Systems, and the principal investigator of the Human-Agent Interaction Laboratory. My research subjects are humans and "agents," a general term for artificial systems, including robots and characters that feel, or seem, human. In addition to studying ways to create interfaces that make use of human-like touch, I have also been researching human social intelligence, which allows us to read the intentions of others and build trust.
Recently, I have been studying the effect of so-called "anthropomorphization" on humans and agents appearing in fiction, from the perspective of looking at the kinds of issues AI might pose when it becomes a greater part of our social systems. In doing so, I came to realize the importance of science fiction. So I have been studying "SF Prototyping," which is a way of thinking about the future using science fiction methodology.
As a member of the Research & Development Group, Hitachi, Ltd., I have taken part in the research and development of data management technologies, including database systems and real-time data processing systems. After working in this department, I participated in customer co-creation projects, first in the communication, media, and entertainment sectors and later in finance. I am currently in charge of research and development in digital technologies, including AI, measurement devices, and healthcare. AI is a closely watched topic among our researchers as well, so I have been looking forward to this conversation.
"Issues of fairness" and "psychological effects" posed to society by AI
Now, let's begin with our first topic. "What kind of problems will occur as AI becomes deeply embedded in our lives?" Dr. Osawa, what do you think about this?
Firstly, as we begin to leave some human decision-making to AI, there is the problem of deciding what humans should judge and what AI should judge. It is not easy for humans to understand AI's machine learning (*1) processes. There have been cases where a seemingly well-functioning AI system made an erroneous judgment and caused a major accident, and AI also remains very vulnerable to learning from malicious data.
*1 A method or program in which a computer learns from given data and autonomously finds rules and patterns in the data.
In the research world today, XAI (Explainable AI) is a popular topic. XAI is artificial intelligence whose judgments can be logically explained to humans. There is also the issue of "fairness" in AI. If there is unwanted bias in the data, an AI making a judgment about a person may be influenced by factors such as ethnicity or gender. How to avoid this is a real problem.
In the field of human-agent interaction, which I specialize in, progress is being made in research on human-like robots and virtual agents (*2) and their psychological effects on humans. For example, there is research on how an environment in which children give orders to smart speakers at home can affect their development and their minds. There have also been debates over whether smart speakers need to be designed to feel more human.
*2 Virtual characters created from computer graphics, animations or artificial intelligence.
I often come across the phrase "Explainable AI" these days, but I think there is still a lot to work on. For example, if we explained the numerical processes of AI to the staff of a financial institution without any simplification, it would be very difficult for them to understand. Before such an AI is applied in practice, its explanations first need to be framed in terms of the basics of the financial business. A truly explainable AI system should be one whose reasoning can be explained to, and understood by, both AI experts and non-experts alike. Don't you think so?
I completely agree. What AI needs now is the ability to give a sincere explanation to humans. Some experts say this is an extremely difficult task given the principles behind AI, but research is underway to overcome it.
Born in 1982. Assistant Professor at the University of Tsukuba, Faculty of Engineering, Information and Systems.
Principal investigator of the Human-Agent Interaction Laboratory. Board member of the Science Fiction and Fantasy Writers Club of Japan. Doctor of Engineering (Keio University). Osawa Hirotaka specializes in human-agent interaction and social intelligence. He is the leader of the "Human Information Technology Ecosystem" program "Updating Imagination: The Design of Fiction by Artificial Intelligence" at the Research Institute of Science and Technology for Society, at the Japan Science and Technology Agency. He is a co-author of publications including "SF Prototyping: A New Strategy for Creating Innovation from Science Fiction," "AI Wolf: AI that Befools, Detects and Persuades," "Designing the Gap Between Humans and Robots," "Can Humans Coexist with AI?" and "Defining Trust: From Leviathan to AI." He is also the supervising editor of "SF Thinking: Skills to Think About Business and Your Future."
Deputy General Manager, Digital Technology Innovation Center, Research & Development Group, Hitachi, Ltd. Doctor of Electrical Engineering (University of Tokyo).
Professional Engineer, Japan (Information Engineering). After joining Hitachi, he took part in the research and development of platform systems at the Central Research Laboratory. He led customer co-creation projects in the finance sector and also directed research in AI and data science. He took up his current position in 2020. He was a visiting scholar in the Department of Computer Science at Stanford University from 2002 to 2003. He completed the Advanced Management Program at Harvard Business School in 2018. He is a member of the Association for Computing Machinery (ACM), the Information Processing Society of Japan and the Institute of Electronics, Information and Communication Engineers.
Yukinobu Maruyama, host
Head of Design, Global Center for Social Innovation – Tokyo, Research & Development Group, Hitachi, Ltd.
After joining Hitachi, Yukinobu Maruyama built his career as a product designer. He was involved in the foundation of Hitachi Human Interaction Laboratory in 2001 and launched the field of vision design research in 2010 before becoming laboratory manager of the Experience Design Lab's UK Office in 2016. After returning to Japan, he worked in robotics, AI, and digital city service design before being transferred to Hitachi Global Life Solutions, Inc. to promote a vision-driven product development strategy. He is also involved in developing design methodologies and human resource education plans. He took up his current position in 2020.