Hiromitsu Kato, Research and Development Group, Hitachi, Ltd./Mr. Kentaro Fujimoto, President and CEO of D4DR inc.
The Hitachi Research & Development Group webinar "Innovation Starting with a Question — Social Transition and AI" was streamed on November 29, 2021. In this feature, we bring you the dialogue between Mr. Kentaro Fujimoto, President and CEO of D4DR inc., and Hiromitsu Kato of Hitachi's Research & Development Group. What kind of mechanisms will be required to guarantee the "trust" of social systems in the future digital society?


[Part 1] What are the real-life examples of data utilization in social systems?
[Part 2] What is a new data utilization space that should be established between the public and the private area?
[Part 3] How should social systems deal with AI and diverse data?

"Consensus-building" and "Quality Assurance" surrounding AI as the challenge

Maruyama
Let's move on to our third topic: "How should future social systems deal with AI and diverse data?" What will happen when AI, the main topic of our webinar, becomes part of our social systems?

Mr. Fujimoto
I think the question of how to build consensus around judgments made by AI will become important. We will need a fine-grained mechanism in which not only the people involved in the semi-public data utilization space, like the engawa I mentioned earlier, but also local residents can take part in decision-making. The "Super City (*)," a concept proposed by the Cabinet Office as the successor to the "Smart City," likewise emphasizes the importance of consensus-building involving residents.

* An initiative that, with the participation of residents and from their perspectives, aims to advance the realization of the future society envisioned for around 2030. It advocates the utilization of advanced technologies, including AI and big data, and data coordination across multiple fields to that end.

In an urban architecture layered with AI, which systems are generating which data? How are those data utilized, and what decisions are being made on them? I think involving stakeholders such as residents in these questions is an important part.

Kato
Quality assurance (QA) for AI-based products and services, in other words, how to approach QA, is also an important issue. With a conventional system, you can specify in advance how it will behave. But in the case of AI, its behavior depends on what it has learned from the collected data. We need to discuss an overall concept of QA that differs from that for conventional systems.

As a way of thinking about trust within the digital society, we are proposing the concepts of "Trust of Data" and "Trust by Data." "Trust of Data" refers to technology that ensures data is "trustworthy" from the perspective of the origin of the data circulated online, as well as its authenticity. "Trust by Data" refers to technology that uses that evidence of trustworthiness to show that people and systems, for example a service built on data, will reliably behave as expected. The question is how to build a data exchange system with these two factors in mind.
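To make the "Trust of Data" idea more concrete, here is a minimal sketch, not Hitachi's implementation, of recording a data record's origin and checking its integrity and authenticity before use; the provider name, shared key, and field names are invented for illustration.

```python
# Minimal illustrative sketch (not Hitachi's implementation): attach provenance
# metadata and an authenticity tag to a data record, then verify it on receipt.
import hashlib
import hmac
import json

SHARED_KEY = b"example-provider-key"  # hypothetical key agreed with the data provider

def publish(record: dict, provider: str) -> dict:
    """Wrap a record with its origin, a content fingerprint, and an authenticity tag."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {
        "provider": provider,                                          # where the data came from
        "sha256": hashlib.sha256(payload).hexdigest(),                 # fingerprint of the content
        "tag": hmac.new(SHARED_KEY, payload, "sha256").hexdigest(),    # authenticity tag
        "payload": record,
    }

def verify(envelope: dict) -> bool:
    """Check that the payload still matches its fingerprint and authenticity tag."""
    payload = json.dumps(envelope["payload"], sort_keys=True).encode()
    ok_hash = hashlib.sha256(payload).hexdigest() == envelope["sha256"]
    ok_tag = hmac.compare_digest(
        hmac.new(SHARED_KEY, payload, "sha256").hexdigest(), envelope["tag"]
    )
    return ok_hash and ok_tag

envelope = publish({"sensor": "river-level", "value": 1.8}, provider="city-water-dept")
print(verify(envelope))  # True; any tampering with the payload makes this False
```

In this toy version, "Trust of Data" corresponds to the recorded origin, fingerprint, and tag; "Trust by Data" would build on such evidence to argue that a service consuming the data behaves as expected.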

AI systems supervising each other to guarantee the "trust" of the system

Mr. Fujimoto
A mechanism in which AI systems supervise and stimulate one another will also be necessary in the future. Currently, AI based on "supervised learning (*)" is the most common. In such AI, judgments are naturally influenced by the training data. For example, some tigers are white. But if you give an AI training data limited to yellowish-brown tigers, it will acquire the biased recognition that "all tigers are yellowish-brown." That is the potential risk.

* A machine learning method in which AI learns from data labeled with the correct answers.

If, for example, the number of white tigers is increasing in some areas, an AI that emphasizes such minor shifts in trends can correct the AI that considers all tigers to be yellowish-brown by pointing out that "there are actually white tigers too." I think such AI-to-AI mechanisms will be required in the future.
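As a rough, purely illustrative sketch of the tiger example (the training set, field observations, and the "monitoring AI" below are all invented, not the mechanism discussed in the dialogue), a model biased by its training data can be cross-checked by a second model that watches recent observations:

```python
# Toy illustration: a classifier biased by its training data, and a second
# "monitoring" model that corrects judgments contradicted by recent field data.
from collections import Counter

# Hypothetical training set: every tiger the first AI has ever seen is yellowish-brown.
training_colors = ["yellowish-brown"] * 100

def biased_classifier(color: str) -> str:
    """Labels an animal a tiger only if its color appeared in the training data."""
    return "tiger" if color in set(training_colors) else "not a tiger"

def monitor(recent_observations: list[str], color: str) -> bool:
    """The second AI: has this color actually been observed among tigers recently?"""
    return Counter(recent_observations)[color] > 0

# Field data from a region where white tigers are increasingly reported.
recent = ["yellowish-brown"] * 40 + ["white"] * 10

for color in ["yellowish-brown", "white"]:
    verdict = biased_classifier(color)
    if verdict == "not a tiger" and monitor(recent, color):
        verdict = "tiger (corrected by the monitoring AI)"
    print(color, "->", verdict)
```

The point of the sketch is only the division of roles: one model carries the bias of its training data, while another, attentive to minor shifts in the observed distribution, supplies the correction.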

Kato
Today, we live in a world where how to guarantee the "trust" of the system itself is being questioned. Together with the World Economic Forum Centre for the Fourth Industrial Revolution Japan and the Ministry of Economy, Trade and Industry, Hitachi is proposing the concept of the "Trust Governance Framework" (*). It regards the loop of Trust ← Trustworthiness ← Governance ← Trust ← ... as essential for guaranteeing a system's "trust." "Trust" here is defined as whether a person or entity can subjectively "trust the system." In contrast, "trustworthiness" indicates that the system is backed up by objective evidence and verification.

* Reference: White paper "Rebuilding Trust and Governance: Towards Data Free Flow with Trust (DFFT)"

[Figure] Conceptual diagram of the Trust Governance Framework (Source: "Rebuilding Trust and Governance: Towards Data Free Flow with Trust (DFFT)")

In order for a system to gain "trust" from society, it needs to accumulate trustworthy facts (= trustworthiness). The accumulation of trustworthiness is guaranteed by effective governance. And governance itself can function adequately only by gaining trust from citizens. This loop guarantees the system's trust. In addition, a "Trust Anchor," which serves as the starting point of the loop, is essential, and the framework raises the importance of establishing this Trust Anchor as a public good.

The possibility of AI surpassing human common sense and the ideal human stance

Mr. Fujimoto
Don't you think there are two ways of looking at trust in a system? One is within the scope of society's common sense so far, and the other is beyond that conventional common sense. In the past, Hitachi ran an experiment in which an AI kept learning how to use a swing without any training data. In the end, the AI came up with an unexpected way of using the swing. I was taken aback when I saw a video of the experiment. Through continued use, AI may reach an unexpected dimension, a place that surpasses human common sense. I think AI holds this potential.

Kato
Currently, a huge amount of data that humans cannot handle is being generated. We need to discuss further how we can trust AI in a situation where we have no choice but to depend on it for analysis. As with the governance I mentioned, I think how humans intervene in AI will become extremely important.

Maruyama
Today, we heard your thoughts on how to incorporate AI into the social system. AI, which at times makes autonomous judgments that surpass human ones, has become impossible to avoid, and we need to think about how we interact with it; I came to realize the importance of this issue. Mr. Fujimoto and Dr. Kato, thank you very much.

This is the end of our dialogue series focusing on "Social Transition and AI." For the next feature, we bring you a round-table discussion focusing on "Life."

Kentaro Fujimoto
President and CEO of D4DR inc.

Kentaro Fujimoto joined Nomura Research Institute in 1991. He started his internet consulting business in 1993 and has been working in the field ever since. Fujimoto launched Japan's first e-business innovation project, Cyber Business Park. In 2002, he took office as president of the consulting firm D4DR inc. He provides broad IT-driven consulting services in fields such as innovation, new business development, and marketing strategy. He has taken part in the management of various startup projects, including PLANTIO, which was selected for the J-Startup program to promote innovation. He also works as a part-time lecturer at Kanto Gakuin University's College of Interhuman Symbiotic Studies. He is the author of "Business Revolution in the Age of the New Normal" (Nikkei Business Publications, Inc.).

Hiromitsu Kato
Department Manager, Center for Technology Innovation – Societal Systems Engineering
Research & Development Group, Hitachi, Ltd.

Hiromitsu Kato joined Hitachi in 1995. He has been involved in research and development on autonomous decentralized systems, systems science and mathematical optimization, cybersecurity for industrial control systems, and more. Kato has promoted the operational monitoring and control of information and control systems for water supply, automotive, railway, and other industries, as well as the application of systems technology to new services. In 2012, he participated in projects in the UK, including rail traffic management and local energy management. After returning to Japan, he served as department manager of societal infrastructure systems research. He was appointed to his current position in 2019. He has won awards including the IPSJ Yamashita SIG Research Award (1999) and the SICE Technology Award (2000 & 2016). Ph.D.

Yukinobu Maruyama, host
Head of Design, Global Center for Social Innovation – Tokyo, Research & Development Group, Hitachi, Ltd.

After joining Hitachi, Yukinobu Maruyama built his career as a product designer. He was involved in the foundation of Hitachi's Human Interaction Laboratory in 2001 and launched the field of vision design research in 2010 before becoming laboratory manager of the Experience Design Lab's UK Office in 2016. After returning to Japan, he worked in robotics, AI, and digital city service design before being transferred to Hitachi Global Life Solutions, Inc. to promote a vision-driven product development strategy. He is also involved in developing design methodologies and human resource education plans. He took up his current position in 2020.