[Part 3] How do we handle the asymmetric nature of cyberspace and the real world?
The limits of AI and the space for "human" intervention
In the previous dialogue, I mentioned a writing engine that recommends headlines for online news, and I actually had an interesting experience with it. If you keep using AI-recommended headlines for your articles, they all end up resembling each other, and the number of hits gradually plateaus. One article, however, saw its hits spike, and the person doing the data analysis asked me why. When I read the article, I noticed that the wording of its headline was completely different from the previous ones. That particular headline had been written not by the AI but by a human, and it skillfully anticipated and expressed the "mood of the era." From this experience, I felt that there is still room for humans to intervene in AI.
In other words, something unexpected happened when a human element suddenly intruded into a statistically convergent state. How is such a thing possible?
Machine learning becomes more and more accurate as it is trained, but to reach another level of accuracy, it needs to learn something heterogeneous at some point; in other words, a discontinuity is necessary. In the past, there were many examples of genetic algorithms* producing significant changes through mutation. Many researchers are working on various applications right now, and perhaps there will eventually be a computer that introduces such discontinuities on its own.
* A method in which several "individuals," each representing a candidate solution as genes, are prepared; individuals with higher fitness are preferentially selected, and operations such as mutation are repeated to find an approximate solution.
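The select-and-mutate loop described in the footnote can be sketched in a few lines. This is a minimal illustration on a toy problem (maximizing the number of 1-bits, often called "OneMax"); the fitness function, population size, and mutation rate are all illustrative assumptions, not part of the dialogue.

```python
import random

def fitness(individual):
    # Toy fitness: count of 1-bits ("OneMax"); higher is better.
    return sum(individual)

def select(population):
    # Tournament selection: pick two at random, keep the fitter one.
    a, b = random.sample(population, 2)
    return max(a, b, key=fitness)

def mutate(individual, rate=0.05):
    # Flip each bit with a small probability: the "mutation" step that
    # injects the discontinuity the dialogue mentions.
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

def evolve(pop_size=30, length=20, generations=50):
    # Start from random individuals, then repeat selection and mutation.
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population = [mutate(select(population)) for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # typically close to the optimum of 20
```

Real genetic algorithms usually add crossover between individuals as well; only selection and mutation are shown here to match the footnote's description.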
"Prosecution by AI" vs "prosecution by humans"
There is an attorney named Nicole Shanahan in California. She is the founder and CEO of ClearAccessIP, a startup that provides intellectual-property management solutions, and a research fellow at CodeX, the Stanford Center for Legal Informatics, which specializes in legal informatics, a field that fuses the study of law and computer science. I remember Ms. Shanahan describing research she had conducted: she had an AI-based indictment system and human prosecutors each decide whether to prosecute, and compared their rates of prosecution.
The result showed that humans prosecuted at a higher rate. This doesn't mean that humans are superior to AI; rather, humans cannot escape their biases when deciding between prosecution and dismissal. She mentioned that, historically, people of color in particular have been indicted more often.
When I heard this, I found it interesting that AI, free from human subjectivity, was in a sense making a "fair" judgment. Recently, however, the reverse phenomenon has begun to appear. AI ingests more and more information from the web, but that information is itself biased; for example, the recognition rate for images of people of color has been lower. How much human judgment should be incorporated into AI without overestimating its capabilities? I think striking that balance is key.
So we are at the point where we need to think about the necessity of human intervention in AI.
After all, computers are very honest: if the data itself is biased, the machine will simply do its calculations and give us a biased result. I think AI still needs many governance mechanisms, such as bias cancellation and human monitoring of the data.
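The point that a machine "honestly" reproduces whatever bias its data contains can be made concrete with a toy example. The records below are entirely hypothetical, and the "model" is just a base-rate estimator; it shows that training on skewed historical decisions yields a system that repeats the skew.

```python
# Hypothetical historical records: (group, was_indicted).
# Group "A" was indicted in 70 of 100 past cases, group "B" in 40 of 100.
records = ([("A", 1)] * 70 + [("A", 0)] * 30 +
           [("B", 1)] * 40 + [("B", 0)] * 60)

def learned_indictment_rate(group):
    # A naive "model" that simply learns the historical base rate per group.
    outcomes = [indicted for g, indicted in records if g == group]
    return sum(outcomes) / len(outcomes)

# The machine faithfully reproduces the skew it was given.
print(learned_indictment_rate("A"))
print(learned_indictment_rate("B"))
```

Nothing in the calculation is wrong; the disparity comes entirely from the data, which is why governance of the data itself matters.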
"Natural digital" as the future ideal
I'm eager to know what future trials will look like: a human suing an AI, say, or an AI suing another AI. In such cases, I suppose the defendant would be the company that provides the AI service or the company that developed the AI. Either way, I think an AI that monitors whether another AI is behaving normally will be necessary in the future, because in many science-fiction films the AI suddenly goes wild and causes chaos, you know. If we can understand an AI's biases and possible vulnerabilities by identifying elements such as the data it has learned from in the past and the algorithm behind its evaluation criteria, I think major accidents can be avoided. Perhaps a system to prevent such situations will be needed from now on.
In research, XAI (explainable AI) is a popular topic at the moment. Researchers are expected to create systems that can explain how an AI works and which factors lead to which outcomes. In addition, we need to develop not only AIs that specialize in individual fields but also a system that integrates their various opinions.
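One simple version of the "which factors lead to which outcomes" idea is to decompose a model's score into per-feature contributions. The sketch below assumes a plain linear scoring model with made-up feature names and weights; real XAI methods (such as attribution techniques for complex models) are far more involved, but the principle of surfacing each factor's contribution is the same.

```python
# Hypothetical linear model: score = sum of weight * feature value.
weights = {"prior_offenses": 0.8, "age": -0.1, "evidence_strength": 1.5}

def predict(features):
    # The model's overall score for one case.
    return sum(weights[name] * value for name, value in features.items())

def explain(features):
    # Per-feature contributions: which factors pushed the score up or down.
    return {name: weights[name] * value for name, value in features.items()}

case = {"prior_offenses": 2, "age": 30, "evidence_strength": 1}
print(predict(case))
print(explain(case))
```

For a linear model the contributions sum exactly to the prediction, so a reviewer can see at a glance which factor drove a decision; this kind of transparency is what makes monitoring an AI's evaluation criteria feasible.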
I've been thinking about the same thing. Today, many people are working on the development of AGI (artificial general intelligence) with emotions, thoughts, and whatever else it needs to resemble humans. But I think the intentions of the researchers play an important role here: if a researcher who believes every global problem can be solved by a mathematical formula takes part in the development, an AI with such thinking will be made. In fact, the products created by Steve Jobs give the impression that their creator's strong aesthetic demanded that they be free and independent of others, and Dyson products strongly reflect the philosophy of the founder, Sir James Dyson, who believes that "good function means good design." Like the diversity of people, a state in which various AIs exist might be described as a "natural digital" state.
Co-founder, Chairman, CEO, and Chief Visionary Officer of INFOBAHN Inc.
Hiroto Kobayashi has published various print and digital media, including the Japanese editions of "WIRED" and "Gizmodo Japan." In 1998, he founded INFOBAHN Inc., a company that supports corporate digital communications, and has pioneered the fields of content marketing and owned media. He currently supports the digital transformation and innovation initiatives of companies and municipalities. He is the author of publications such as "After GAFA: The Future Map of a Decentralizing World" (Kadokawa) and "Rise of Corporate Generated Media" (Gijutsu-Hyoron Co., Ltd.). He also supervised or wrote commentaries for books such as "Free," "Share," and "Public" (NHK Publishing, Inc.), among many others.
Director, Advanced AI Innovation Center, Research & Development Group, Hitachi, Ltd. Doctor of Engineering.
Tatsuhiko Kagehiro specializes in image recognition processing, pattern recognition, and machine learning. After joining Hitachi, he headed the research and development of video surveillance systems and media processing technologies for industry at the Central Research Laboratory. He was a visiting scholar at the University of Surrey in 2005. Since 2015, he has taken part in the project for Hitachi's humanoid robot EMIEW at the Global Center for Social Innovation (CSI). In 2017, he took office as Department Manager of the Media Intelligent Processing Research Department, and he took up his current position in 2020. He is a visiting associate professor in the Empowerment Informatics Program of the University of Tsukuba Graduate School of Integrative and Global Majors, and a member of the Information Processing Society of Japan and the Institute of Electronics, Information and Communication Engineers.
Yukinobu Maruyama, host
Head of Design, Global Center for Social Innovation – Tokyo, Research & Development Group, Hitachi, Ltd.
After joining Hitachi, Yukinobu Maruyama built his career as a product designer. He was involved in the foundation of the Hitachi Human Interaction Laboratory in 2001 and launched the field of vision design research in 2010 before becoming laboratory manager of the Experience Design Lab UK Office in 2016. After returning to Japan, he worked in robotics, AI, and digital city service design before being dispatched to Hitachi Global Life Solutions, Inc. to promote a vision-driven product development strategy. He is also involved in developing design methodologies and human-resource education plans. He took up his current position in 2020.