[Part 1] How will social systems change with AI? (First half)
[Part 2] How will social systems change with AI? (Second half)
[Part 3] How do we handle the asymmetric nature of cyberspace and the real world?
Two vulnerabilities of "Digital Twins"
Our next topic is the "asymmetry between cyberspace and the real world." Cyberspace is defined as a space that coexists with the real world; it is also called a "CPS (Cyber-Physical System)" or the "Mirror World." However, starting with the metaverse (*), which has been garnering attention recently, new worlds that exist only within cyberspace are being created.
*A term coined by combining two words, "meta" and "universe." The metaverse is a virtual space created on a network where large numbers of users can move around freely.
I think the asymmetry between cyberspace and the real world follows two patterns. First, cyberspace can be completely fooled from the real world. For example, there is an IoT insurance policy you can join at a low price as long as you prove you are healthy by measuring your heart rate with a digital watch. The truth is, you can easily hack it by attaching the watch to a metronome. The data that then enters cyberspace is simply wrong. I believe this is a fragility of Digital Twins.
Second, there is also a phenomenon where scores or ratings in cyberspace differ from those in the real world. Take a restaurant review site, for example. Say there is a restaurant that is highly popular among locals and always full of regulars. Regulars don't go out of their way to write reviews, so if the restaurant somehow acquires a low rating on one of these sites, that low rating takes on outsized importance and the restaurant's true standing is ignored. Even for a professional critic, praising something takes skill, whereas speaking ill of it is easy. The way the comment sections of major websites tend to be overrun by trolls is a typical example of this. In addition, not everyone has the motivation to go to the trouble of writing a review. You also need to keep in mind that ratings can be subject to a form of influence peddling, as restaurant ratings sometimes are.
If the data is taken from a device, as with the IoT, I think symmetry can be maintained to some degree, as long as you assume the aforementioned hacking can be prevented. But once human emotion gets involved, as with restaurant ratings, asymmetry becomes unavoidable.
I think this is the current state of cyberspace: a world where a third party can be fooled, depending on the intent of the human who entered the data. However, as cyberspace develops further, more and more data may flow into it without a would-be cheater's knowledge, and an upstream AI may judge, "This rating may be malicious and will be voided." I think such things will become a reality in the future.
Would a "computer that forgets" be humanity's next greatest invention?
There has also been controversy over whether we should forgive someone's past misdeeds or failures. People in the real world have long since forgotten them, but cyberspace never will. I think this is another form of this asymmetry. What do you think?
As a principle of privacy protection in cyberspace, Europe emphasizes the importance of the "right to be forgotten." You may have gone wild on the internet in your youth, and it remains as a so-called "digital tattoo (*1)." But is it fair to evaluate or judge a person for life over a single matter? Instead of persecuting a person forever for their failures, shouldn't society help them grow? To think about an ideal digitalized society, I believe we need to venture to some degree into the fields of philosophy and ethics.
My favorite author, Milan Kundera (*2), writes about forgetting: "The wonderful feature of a human is that he can forget." His message, slightly infused with irony, gets to the point.
*1 An expression that likens the fact that once a comment, image or video goes viral, it remains on the internet quasi-permanently to a tattoo that is hard to get rid of.
*2 Milan Kundera (1929-2023): a French writer of Czech origin.
In society, there is an unspoken understanding that certain kinds of past failures should be forgiven. If we are to realize this with a computer, I think it will be very important to parameterize factors such as the forgetting cycle and the types of failures that should be "forgotten." For example, should data be automatically forgotten after several thousand hours, or should events with critical content be kept forever? To set criteria that can convince the world, a diverse discussion is essential.
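The kind of parameterized forgetting described above can be sketched in code. The following is a minimal, hypothetical illustration, not any real system: the retention threshold and the "critical" flag stand in for the parameters society would have to agree on.

```python
from dataclasses import dataclass

# Assumed value standing in for "several thousand hours" from the discussion.
RETENTION_HOURS = 5000

@dataclass
class Record:
    content: str
    age_hours: float  # hours since the record became public
    critical: bool    # critical events are kept forever

def forget(records: list[Record], retention_hours: float = RETENTION_HOURS) -> list[Record]:
    """Return only the records cyberspace should still remember:
    critical events, plus anything younger than the retention cycle."""
    return [r for r in records
            if r.critical or r.age_hours <= retention_hours]

records = [
    Record("youthful online rant", age_hours=8000, critical=False),
    Record("recent review", age_hours=100, critical=False),
    Record("serious incident", age_hours=9000, critical=True),
]

remembered = forget(records)
print([r.content for r in remembered])
# → ['recent review', 'serious incident']
```

The hard part, as the discussion notes, is not this mechanism but choosing the parameters: what counts as "critical," and how long the cycle should be.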
Humanity's next greatest invention may be a "computer that forgets." Imagine a conflict arising between people, only for the data that sparked it to no longer exist in cyberspace. Once data goes public, should we keep it forever, or should we forget it? I think it depends on the content. This delicate balancing act is, in and of itself, a human behavior. If we leave it to a computer, we may open a Pandora's box of debate.
A cyberspace that intentionally forgets, on a set cycle, the data that should be forgotten: such a mechanism may be necessary if we are to build a sound social system.
We were able to hear many thoughts about the interaction between society and AI. I think they were discussions that wouldn't have been able to happen just within the walls of Hitachi. Mr. Kobayashi and Mr. Kagehiro, thank you very much.
Co-founder, Chairman and CEO and Chief Visionary Officer of INFOBAHN Inc.
Hiroto Kobayashi has published various print and digital media including the Japanese edition of "WIRED" and "Gizmodo Japan." In 1998, he founded INFOBAHN Inc., a company that supports corporate digital communications, and has pioneered the fields of content marketing and owned media. He currently supports digital transformation and innovation implementation of companies and municipalities. He is the author of publications such as "After GAFA: The Future Map of a Decentralizing World" (Kadokawa), "Rise of Corporate Generated Media" (Gijutsu-Hyoron Co., Ltd.). He also supervised or wrote a commentary for books such as "Free," "Share," "Public" (NHK Publishing, Inc.) and many more.
Director, Advanced AI Innovation Center, Research & Development Group, Hitachi, Ltd. Doctor of Engineering.
Tatsuhiko Kagehiro specializes in image recognition processing, pattern recognition and machine learning. After joining Hitachi, he headed the research and development of video surveillance systems and media processing technologies for industries at the Central Research Laboratory. He was a visiting scholar at the University of Surrey in 2005. Since 2015, he has taken part in Hitachi’s humanoid robot, Emiew project at the Global Center for Social Innovation (CSI). In 2017, he took office as the Department Manager of the Media Intelligent Processing Research Department. He took up his current position in 2020. He is a visiting associate professor at the University of Tsukuba Graduate School of Integrative and Global Majors, Empowerment Informatics Program. He is a member of the Information Processing Society of Japan and the Institute of Electronics, Information and Communication Engineers.
Yukinobu Maruyama, host
Head of Design, Global Center for Social Innovation – Tokyo, Research & Development Group, Hitachi, Ltd.
After joining Hitachi, Yukinobu Maruyama built his career as a product designer. He was involved in the foundation of the Hitachi Human Interaction Laboratory in 2001 and launched the field of vision design research in 2010 before becoming laboratory manager of the Experience Design Lab UK Office in 2016. After returning to Japan, he worked in robotics, AI, and digital city service design before being dispatched to Hitachi Global Life Solutions, Inc. to promote a vision-driven product development strategy. He is also involved in developing design methodologies and human resource education plans. He took up his current position in 2020.