We see that hybrid approaches will be useful in next-generation intelligent systems where robust learning of complex models is combined with symbolic logic that provides knowledge representation, reasoning, and explanation facilities. The knowledge could be, for example, universal laws of physics or the best-known methods in a specific domain.
Intelligent systems must be endowed with the ability to make decisions autonomously to fulfill given objectives, with robustness to solve a problem in several different ways, and with flexibility in decision-making by utilizing various pieces of both prepopulated and learned knowledge.
Distributed and decentralized intelligence
In a large distributed system, decisions are made at different locations and levels. Some decisions are based on local data and governed by tight control loops with low latency. Other decisions are more strategic, affect the system globally, and are made based on data collected from many different sources. Decisions made at a higher global level may also require real-time responses in critical cases such as power-grid failures, cascading node failures, and so on. The intelligence that automates such large and complex systems must reflect their distributed nature and support the management topology.
Data generated at the edge, in a device or network edge node, will at times need to be processed in place. It may not always be feasible to transfer data to a centralized cloud; there may be laws governing where data can reside, as well as privacy or security implications of data transfer. The scale of decisions in these cases is restricted to a small domain, so the algorithms involved are usually fast and lightweight and the computing power required is modest. However, local models could be based on incomplete and biased statistics, which may lead to a loss of performance. There is a need to leverage the scale of distribution, make appropriate abstractions of local models, and transfer the insights gained to other local models.
Learning about global data patterns from multiple networked devices or nodes without having access to the actual data is also possible. Federated learning has paved the way on this front, and further distributed training patterns, such as vertical federated learning and split learning, have emerged. These architectures allow machine-learning models to adapt their deployment to the requirements they must fulfill in terms of data transfer, compute, memory, and network resource consumption, while maintaining strong performance guarantees. However, more research is needed, in particular to cater to different kinds of models and model combinations and to provide stronger privacy guarantees.
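As a concrete illustration, the core aggregation step of federated learning (federated averaging) can be sketched in a few lines: each node trains locally and only model weights, never raw data, are shared and combined. The client weights and dataset sizes below are hypothetical, illustrative values, not part of any real deployment:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: weighted mean of client weights by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients, each with local linear-model weights and a dataset size.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [100, 200, 100]

global_weights = federated_average(clients, sizes)
# First weight: (1*100 + 3*200 + 5*100) / 400 = 3.0
```

In a full system, the server would broadcast `global_weights` back to the clients for the next local training round; only the aggregation logic is shown here.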
A common distributed and decentralized paradigm is required to make the best use of local and global data and models as well as determine how to distribute learning and reasoning across nodes to fulfill extreme latency requirements. Such paradigms themselves may be built using machine learning and other AI techniques to incorporate features of self-management, self-optimization, and self-evolution.
AI-based autonomous systems comprise complex models and algorithms; moreover, these models evolve over time with new data and knowledge without manual intervention. The dependence on data, the complexity of algorithms, and the possibility of unexpected emergent behavior in AI-based systems require new methodologies to guarantee transparency, explainability, technical robustness and safety, privacy and data governance, nondiscrimination and fairness, human agency and oversight, societal and environmental wellbeing, and accountability. These elements are crucial for ensuring that humans can understand and, consequently, establish calibrated trust in AI-based systems.
Explainable AI (XAI) is used to achieve transparency in AI-based systems by explaining to stakeholders why and how the AI algorithm arrived at a specific decision. The methods are applicable to multiple AI techniques, such as supervised learning, reinforcement learning (RL), and machine reasoning. XAI is acknowledged as a crucial feature for the practical deployment of AI models in systems and for satisfying the fundamental rights of AI users related to AI decision-making; it is essential in telecommunications, where standardization bodies such as ETSI and IEEE emphasize the need for XAI for the trustworthiness of intelligent communication systems.
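One simple, model-agnostic family of explanation methods measures how much a prediction degrades when each input feature is scrambled (permutation importance). The sketch below is a minimal illustration with a hypothetical model that depends only on its first feature; it is not a production XAI pipeline:

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    """Mean increase in squared error when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base_error = np.mean((model_fn(X) - y) ** 2)
    importances = []
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the relationship between feature j and y
            increases.append(np.mean((model_fn(Xp) - y) ** 2) - base_error)
        importances.append(np.mean(increases))
    return np.array(importances)

# Hypothetical model that uses only the first of three features.
model = lambda X: 2.0 * X[:, 0]
X = np.random.default_rng(1).normal(size=(200, 3))
y = model(X)

imp = permutation_importance(model, X, y)
# imp[0] dominates; imp[1] and imp[2] are zero because the model ignores them.
```

A stakeholder-facing explanation would then rank features by `imp`, highlighting which inputs actually drove the decision.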
The evolving nature of AI models requires either new approaches or extensions to existing approaches to ensure the robustness and safety of AI models during both training and deployment in the real world. Along with the statistical guarantees provided by adversarial robustness, formal verification techniques could be tailored to give deterministic guarantees for safety-critical AI-based systems. Security is one of the contributors to robustness, where both data and models must be protected from malicious attacks. The privacy of data, including its source and intended use, must be preserved, and the models themselves must not leak private information. Furthermore, data should be validated against fairness and domain expectations because of the bias it can introduce into AI decisions.
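To make the robustness concern concrete, the sketch below applies a fast-gradient-sign-style perturbation to a hypothetical logistic model: a tiny, targeted change to the input noticeably shifts the model's confidence. The weights and input values are purely illustrative:

```python
import numpy as np

# Hypothetical trained logistic model.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """Probability of the positive class under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps=0.3):
    """Perturb x by eps in the gradient-sign direction that increases the loss."""
    grad = (predict(x) - y) * w  # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0])   # confidently classified positive (w @ x = 1 > 0)
x_adv = fgsm(x, y=1.0)     # small adversarial nudge toward the decision boundary
# predict(x_adv) < predict(x): confidence drops despite the tiny perturbation
```

Adversarial-training and formal-verification approaches aim to bound exactly this kind of degradation for safety-critical deployments.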
Since the stakeholders of AI systems are ultimately humans, methods such as those based on causal reasoning and data provenance need to be developed to provide accountability for decisions. The systems should be designed to continuously learn and refine the stakeholder requirements they are set to meet, and to escalate to a higher level of automated decision making, or ultimately to a human, when they do not have sufficient confidence in certain decisions.
Connected, intelligent machines of varied types are becoming increasingly present in our lives, ranging from virtual assistants to collaborative robots, or cobots. For proper collaboration, it is essential that these machines can understand human needs and intents accurately. Furthermore, all data related to these machines should be available to support situational awareness. AI is fundamental throughout this process to enhance the capabilities and collaboration of humans and machines.
Advances in natural language processing and computer vision have made it possible for machines to interpret human inputs more accurately. These interpretations are enriched by considering nonverbal communication, such as body language and tone of voice. Accurate emotion detection is evolving and can support the identification of more complex states, such as tiredness and distraction. In addition, progress in areas such as scene understanding and semantic-information extraction is crucial to having a complete knowledge representation of the environment (see Figure 3). All this perceptual information should be used by the machine to determine the optimal action, the one that maximizes the collaboration. Reinforcement learning (RL), in which a policy is trained to take the best action given the current state and observation of the environment, is receiving increasing attention. To avoid unsafe situations, strategies such as safe AI are under investigation to ensure safety throughout the RL model life cycle. Details of RL are provided in the next section.
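The core RL loop described above, observe a state, choose an action, receive a reward, and update the policy, can be sketched with tabular Q-learning on a toy problem. The corridor environment and all parameters below are illustrative only; real collaborative-robot policies involve far richer state spaces:

```python
import random

# Toy 1-D corridor: states 0..4, the agent earns a reward of 1 at the goal state.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left / move right

def step(s, a):
    """Environment transition: clamped move, reward 1 only on reaching the goal."""
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:                      # explore
                i = rng.randrange(len(ACTIONS))
            else:                                       # exploit (random tie-break)
                best = max(Q[s])
                i = rng.choice([k for k, v in enumerate(Q[s]) if v == best])
            s2, r, done = step(s, ACTIONS[i])
            # Q-learning update: bootstrap from the best next-state value.
            Q[s][i] += alpha * (r + gamma * max(Q[s2]) - Q[s][i])
            s = s2
    return Q

Q = train()
policy = [max(range(len(ACTIONS)), key=lambda i: Q[s][i]) for s in range(N_STATES)]
# The learned policy moves right (action index 1) toward the goal from every state.
```

The same observe-act-update loop underlies the collaborative settings discussed here, with the added constraint that exploration must be kept within safe bounds.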
AI has also enabled a more complete understanding of how a machine operates with the aid of digital twins. Extended reality (XR) devices are becoming increasingly common in mixed-reality setups, making it possible to visualize detailed machine data and interact with digital twins at the same time. This increases human understanding of how machines are operating and helps anticipate their actions. In combination with the XR interface, XAI can be applied to provide the reasons for a certain decision taken by the machine.
To make collaboration happen, it is also important that machines respond to and interact with humans in a timely manner. As the AI methods involved in a collaborative setup can have high computational complexity, and machines may have constrained hardware resources, a distributed intelligence solution is required to achieve real-time responses. This means that the communication infrastructure plays a key role in the whole process by supporting ultra-reliable, low-latency communication.