IA is the process of integrating actors’ intentions into a unified view of the surrounding environment. IA includes many of the basic principles of SA, such as consistent tracking and extrapolation of objects in the user’s environment, but also exploits circumstantial semantics and sentics associated with objects and actors of the operating environment. Semantics and sentics, in fact, are key for sensemaking, as they extend the meaningfulness that the system attributes to every single object/actor of the environment by considering it as an entity carrying specific conceptual and affective information, rather than simply a point in a space or time window. In the context of home automation, for example, many more intentions can be inferred if the semantics and sentics associated with the object ‘coffee machine’ are known. Such semantics and sentics can be retrieved from external resources, e.g., affective common-sense knowledge bases for concept-level sentiment analysis [18]. For example, if the system has access to pieces of information such as ‘a coffee machine is used for making coffee’ and ‘people generally like coffee’, the repeated proximity of a human actor to the object ‘coffee machine’ can be exploited to infer the intention of the human actor to make coffee.
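As a minimal illustration of this kind of inference, the sketch below pairs a toy proximity log with a hand-crafted knowledge entry for ‘coffee machine’; the knowledge structure, visit threshold, and function names are hypothetical and are not drawn from any particular knowledge base.

```python
from collections import Counter

# Hypothetical affective common-sense entries, in the spirit of the
# 'coffee machine' example: each object carries semantics (what it is
# used for) and sentics (how people generally feel about the outcome).
KNOWLEDGE = {
    "coffee_machine": {"used_for": "make_coffee", "polarity": +0.8},
    "vacuum_cleaner": {"used_for": "clean_floor", "polarity": -0.2},
}

def infer_intention(proximity_log, min_visits=3):
    """Infer a likely intention from repeated proximity to known objects.

    proximity_log: object identifiers the actor was close to, in time order.
    Returns the action associated with the most frequently approached object,
    provided it was visited often enough and carries positive affective valence.
    """
    counts = Counter(proximity_log)
    obj, visits = counts.most_common(1)[0]
    entry = KNOWLEDGE.get(obj)
    if entry and visits >= min_visits and entry["polarity"] > 0:
        return entry["used_for"]
    return None

# Repeated proximity to the coffee machine suggests the intention 'make_coffee'.
print(infer_intention(["coffee_machine", "sofa", "coffee_machine", "coffee_machine"]))
```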
Intention in computation
The BDI model functions by establishing a decision tree whose nodes represent possible future outcomes, and assigning probabilities to each of these based on calculations of intent. Computational models of intention, such as the BDI software model, attempt to optimize the time spent allocating computational power for given tasks and actually executing them. While the BDI model is fairly rudimentary in its analysis of intent (essentially parsing past requests and actions to find trends), it epitomizes the increasing feasibility of implementing IA in computational systems. Four basic components of BDI allow speedy processing of intentional information:
- Beliefs
- Desires
- Intentions
- Events
The BDI formal architecture consists of “beliefs,” or the set of inference rules that an agent uses to derive basic intentional information. “Desires” represent the distinction between goal end-states and the current state; that is, desires are tasks actively pursued by the agent, such as performing a mathematical operation on a set of numbers. Intentions in BDI represent what has already been decided upon by the agent, but is not necessarily being achieved yet. In other words, the primary distinction between intentions and desires is the level of commitment: intention, in both human and computational systems, requires some sort of signaling to be transformed from desire.
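A deliberately simplified sketch of how the four components might interact in an agent loop is shown below; the class, rule format, and commitment policy are illustrative assumptions rather than the architecture of any specific BDI framework.

```python
class SimpleBDIAgent:
    """A toy belief-desire-intention loop: events update beliefs,
    inference rules turn beliefs into desires, and committing to a
    desire promotes it to an intention that is then acted upon."""

    def __init__(self, inference_rules):
        self.beliefs = set()          # facts the agent currently holds
        self.desires = set()          # goal end-states not yet committed to
        self.intentions = []          # desires the agent has committed to
        self.rules = inference_rules  # (belief, desire) pairs

    def perceive(self, event):
        self.beliefs.add(event)

    def deliberate(self):
        for belief, desire in self.rules:
            if belief in self.beliefs:
                self.desires.add(desire)
        # Commitment is the signal that turns a desire into an intention.
        if self.desires:
            self.intentions.append(self.desires.pop())

    def act(self):
        return self.intentions.pop(0) if self.intentions else None

agent = SimpleBDIAgent([("low_battery", "recharge")])
agent.perceive("low_battery")   # event updates beliefs
agent.deliberate()              # beliefs -> desires -> intentions
print(agent.act())              # 'recharge'
```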
What is particularly significant about BDI modeling in software is its use of the intent/action chronological dichotomy to construct information-rich event narratives. While more mature implementations of intention in computational systems will be more inherently intention-aware than BDI, this model illustrates the importance of actor-driven decision making.
Formal models of intention
In the Control Theory approach to system behavior analysis, two notions are central: states, or the distinction between intentions and whether they have been executed, and goals, or short-term components of intention. For instance, feedback control mechanisms are components of systems constructed for the purpose of reducing the difference between an actor’s goal and their current state, and these systems tend to exhibit some form of efficiency or consistency, such as the consistent rational behavior we attribute to intentionality in humans [19–21]. The important distinction between goal and present systemic state is traceable to the type of rationality applied, i.e., in selecting strategies. “Feedback control reacts to sensory observations with actions designed to correct errors between observed and desired states,” while “feed-forward uses a model of the world to predict what actions are required to achieve the goal” [22].
The distinguishing characteristic of feedback control is that it is data-driven, and it often results in reactive behavior. Observers can obtain information about past events, but such definitive information is largely unavailable when future events, beliefs, and perceptions are under analysis and, hence, projection plays a significant role in perception. Thus, feed-forward control results in proactive, anticipatory behavior.
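The difference can be sketched numerically as follows; the scalar plant, the gain, and the assumption of a perfect world model are arbitrary illustrative choices, not part of the cited formulation.

```python
def feedback_step(state, goal, gain=0.5):
    """Feedback control: react to the observed error between the
    desired and the observed state (data-driven, reactive)."""
    error = goal - state
    return state + gain * error

def feedforward_step(state, goal, world_model):
    """Feed-forward control: use a model of the world to predict the
    action needed to reach the goal (model-driven, anticipatory)."""
    action = world_model(state, goal)
    return state + action

# Assume a trivial world model that predicts exactly the required correction.
perfect_model = lambda s, g: g - s

state_fb, state_ff = 0.0, 0.0
for _ in range(5):
    state_fb = feedback_step(state_fb, goal=1.0)
state_ff = feedforward_step(state_ff, goal=1.0, world_model=perfect_model)

# Feedback converges gradually by correcting errors; feed-forward jumps to the goal.
print(round(state_fb, 3), state_ff)
```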
Control over a system is, simply put, a means of reducing the variety that the operating environment imposes. In a military environment, superior intelligence offers battlefield commanders greater control because it necessarily reduces variety (i.e., uncertainty). Ashby’s Law of Requisite Variety tells us that “only variety in [response] can force down the variety due to [disturbance]; variety can destroy variety” [23]. In other words, a system reacting to variety in the environment must have more inherent variety in order to seek its goals.
Reactive response to possibly unlimited environmental variety creates a unique problem: the system’s intrinsic internal variety must be maximized so that unexpected developments can be dealt with without losing control. The best way to solve a problem, in fact, is to already know a solution for it; but, if we have to face a problem we have never met before, we need to use our intuition. Intuition can be explained as the process of making analogies between the current situation and situations solved in the past in order to make a suitable decision at the present time. Such reasoning by analogy can be emulated, by means of sentic computing [1], through the ensemble application of semantic multi-dimensional scaling [24] and neural-network-based clustering [25].
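A rough sketch of such analogy-based reasoning over a concept space is given below, using scikit-learn’s MDS and KMeans as generic stand-ins for the semantic multi-dimensional scaling [24] and neural-network-based clustering [25] mentioned above; the concepts and dissimilarity values are invented for illustration.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

concepts = ["coffee_machine", "kettle", "vacuum_cleaner", "broom"]
# Toy pairwise semantic/affective dissimilarities (symmetric, zero diagonal).
dissimilarity = np.array([
    [0.0, 0.2, 0.9, 0.8],
    [0.2, 0.0, 0.8, 0.9],
    [0.9, 0.8, 0.0, 0.1],
    [0.8, 0.9, 0.1, 0.0],
])

# Embed concepts in a low-dimensional space where exact relations are lost
# but semantic/affective proximity is preserved.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissimilarity)

# Cluster the embedded concepts: analogous concepts fall in the same cluster,
# which supports decisions by analogy with previously solved situations.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(dict(zip(concepts, labels)))  # e.g., kitchen appliances vs. cleaning tools
```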
Feed-forward controls assume that systems are reflective of the environment in which they are situated and that they can also predict states and actions by analysis of that system. As Ashby predicts, “Any regulator able to confine the fluctuations in the system to be regulated must not only have adequate amounts of variety available to control that system but also be or have a homomorphic representation of that system” [26]. In the early 1970s, Maturana and Varela developed the theory of Autopoiesis, which tends to support this perspective [27, 28]. Autopoiesis is closely related to self-referentiality, or the self-awareness attributes that we associate with human actors, and sees living systems as dualistic; that is, Autopoiesis covers both the internal organization of these systems, such as structure and metabolic functions, and their external component, or agency. The theory does this by presenting three key notions: operational closure, component production networks (an abstraction of the concept of metabolism), and spatio-topological unity between the individual and any physical borders he/she may encounter. Proponents of Autopoiesis claim that recursivity, or an organization’s resemblance to a closed loop, is generated by the components and production processes inherent to the system, which develop a complementary relationship between the network and its border [2, 28]. Such a system begins to develop a unique identity based on the set of couplings and shared components and processes within it, and thus a holistic character of its component processes [27, 29]. It must also be able to regulate the flow and consumption of matter and energy in order to facilitate self-constructive processes, as well as exchanges of material and information with the surrounding environment. The system must therefore be able to generate the defining factors, such as boundary conditions, that characterize it as such.
Based on this discussion, we propose a list of necessary attributes for systems incorporating intentionality in any way:
1. Self-existence is necessary because this will allow the system to increase its internal variety consistently.
2. The Law of Requisite Variety must be satisfied in a series of transitory states at least once, which leads to the formation of intention.
3. We can also deduce that, since intentions are hierarchical, they are organized according to their inherent rationality. Thus, a system will exhibit more unconventional behavioral solutions to problems presented by the environment by forming intentions of its own volition, and then acting upon them. This is also observed in studies of AI and intentionality, such as in [15], citing [30]: “Sandy, of the Coffee Shop Conversation, claims that the really interesting things in AI will only begin to happen when the program itself adopts the intentional stance towards itself.”
4. Intentional systems with higher internal variety tend to reach their goals more frequently in constantly changing environments.
5. The internal state of a system tends to be richer in terms of intentionality and rationality than its external state.
6. Competing instances of rationality and intentionality contribute to an operating environment’s current state, and state transitions can be attributed to the result of these phenomena. This type of competition helps to explain why the potential internal variety of an individual system is greater than the actual scope of states that an operating system can exhibit.
7. Individual system intentionality can be independent, shared, complementary, or conflicting.
8. Because intentionality is hierarchical, there are equivalent mappings between actions and intentions.
9. Intentionality is shaped not only by the semantics associated with the operating environment, but also by the sentics each individual system associates with specific objects and events of that environment.
A number of scholars argue that goal-directed planning happens below the threshold of consciousness and that, as a result, components of action are organized in a more hierarchical fashion [12, 31]. Affective information processing, in fact, mainly takes place at the unconscious level [32]. Reasoning, at this level, relies on experience and intuition, which allow issues to be considered intuitively and effortlessly through reasoning by analogy in a multi-dimensional space where the exact relationships between specific concepts are lost, but in which it is easier to infer how such concepts are semantically and affectively related [33]. Hence, rather than reflecting upon various considerations in sequence, the unconscious level forms a global impression of the different issues. In addition, rather than applying logical rules or symbolic codes (e.g., words or numbers), the unconscious level considers vivid representations of objects or events. Such representations are laden with the emotions, details, features, and sensations that correspond to objects and events of the operating environment.
Forming a hierarchy of high-, medium-, and low-level intentions presupposes that the intentional system in question possesses a highly nuanced comprehension of the operating environment in which other individuals’ intentions are formed, as well as relevant details about the other individuals themselves.
We employ Lattice Theory to bring some insight to this complex problem of intentions, actions, and attributes. The Law of Modularity is of particular use here, since it tells us that, for any three elements a, b, and x of a modular lattice, x ≤ b implies x ∨ (a ∧ b) = (x ∨ a) ∧ b, where ∨ denotes the join (least upper bound) and ∧ denotes the meet (greatest lower bound). What this tells us is that, in a modular lattice, the interval between the greatest lower bound a ∧ b and a is isomorphic to the interval between b and the least upper bound a ∨ b, the two directions of the correspondence being mutually inverse.
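The modular law can be checked mechanically on any finite modular lattice; the sketch below does so for the lattice of divisors of 12 (an arbitrary illustrative choice), with gcd as the meet, lcm as the join, and divisibility as the order.

```python
from math import gcd
from itertools import product

def lcm(a, b):
    return a * b // gcd(a, b)

# Divisibility lattice over the divisors of 12: meet = gcd, join = lcm,
# and "x <= b" means "x divides b". This lattice is modular, so
# x <= b implies x v (a ^ b) == (x v a) ^ b for all triples.
elements = [1, 2, 3, 4, 6, 12]

def modular_law_holds(elements):
    for a, b, x in product(elements, repeat=3):
        if b % x == 0:  # x <= b in the divisibility order
            lhs = lcm(x, gcd(a, b))
            rhs = gcd(lcm(x, a), b)
            if lhs != rhs:
                return False
    return True

print(modular_law_holds(elements))  # True: this lattice satisfies the modular law
```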
We can then apply the Law of Modularity to our intentional hierarchy: in order for intentions to be constructed, they must first be isomorphic, since they influence each other in some way, no matter how trivial. For instance, we already know that the development of higher-level intentions gives rise to low- and mid-level intentions, and that high-level intentions can only be expressed through some combination of these. Interactions between any of these types of intentions are necessarily isomorphic, similar to those between high-level intentions, because they all exist within the same set of physical parameters. As a result, some interactions between subsets of intentions and actor systems will lead to unexpected results at lower levels. Representing intentions as lattice elements allows a better understanding of the intentional environment’s inherent structure, as well as of the interactions among intentions themselves.
Using the lattice structures, we can represent these interactions between sets of intentions as follows:
1. Let S be the set of all of an individual's intentions, where S = (l ∪ m) ∪ h:
   a. l is the set of low-level intentions;
   b. m is the set of mid-level intentions;
   c. h is the set of high-level intentions.
2. If {l, m, h} ⊆ S, then ∀ i ∈ (l ∪ m), ∃ j ∈ h such that we can define a mutually isomorphic relationship between i and j.
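A short sketch of this representation under the stated assumptions is given below; the intention names and the explicit low/mid-to-high mapping are invented purely for illustration.

```python
# Hypothetical intention hierarchy: S = (l ∪ m) ∪ h.
l = {"grind_beans", "boil_water"}          # low-level intentions
m = {"operate_coffee_machine"}             # mid-level intentions
h = {"have_coffee"}                        # high-level intentions
S = l | m | h

# For every i in (l ∪ m) there exists a j in h it relates to; here the
# relationship is given explicitly rather than derived.
maps_to_high = {
    "grind_beans": "have_coffee",
    "boil_water": "have_coffee",
    "operate_coffee_machine": "have_coffee",
}

assert (l | m | h) <= S
assert all(maps_to_high[i] in h for i in (l | m))
print("every low- and mid-level intention is paired with a high-level intention")
```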
Assigning new information the proper value and priority in a rapidly changing environment remains a challenge to engineers and scholars of SA, since it is not simply the ability to acquire new information, but to interpret it in a way that is beneficial to the actor’s intentions [34]. Here quality trumps quantity; that is, information must be useful more than it must be plentiful. As a simple example, imagine three email servers: one that filters no junk email, one that occasionally marks an important message as junk, and one that occasionally lets junk mail into the inbox. The third server is clearly the superior application of SA, since it prevents inundation (albeit imperfectly) without sacrificing the ultimate mission of the email server, which is to convey important information.
Applying psychological models of intent
In her research into human intentions, Zeigarnik argued that the “effect of intention is equivalent to the creation of an inner personal tension,” meaning that one system is in disequilibrium relative to surrounding systems. This relative disequilibrium is a manifestation of the fundamental forces that cause equalization of other systems. This conclusion suggests that the human mind is itself a system with its own dynamic forces and subsystems. Zeigarnik made four basic assumptions in her theory [31]:
Assumption 1: The intention to reach a certain goal G (to carry out an action leading to G) corresponds to a tension (t) in a certain system S(G) within the person so that t(S(G))>0. This assumption coordinates a dynamic model (system in tension) with “intention.”
Assumption 2: The tension system t(S(G)) is released if the goal G is reached: t(S(G)) = 0 if P accomplishes G. Zeigarnik uses the tendency to recall activities as a symptom of the existence of tension. The expectation of the existence of such a system is based on the following:
Assumption 3: To a need for G corresponds a force f(P,G) acting upon the person and causing a tendency of locomotion toward G. If t(S(G)) > 0, then f(P,G) > 0. This assumption determines the relation between need and locomotion, i.e., motion from one place to another; in other words, it couples the construct of tension within the person with the construct of force for locomotion in the environment.
Assumption 3a: A need leads not only to a tendency of actual locomotion towards the goal region but also to thinking about this type of activity; in other words, the force f(P,G) exists not only on the level of doing (reality) but also on the level of thinking (irreality): if t(S(G)) > 0, then f(P,R) > 0, where R denotes recall.
From these assumptions we can theorize that intentional systems such as humans have a stronger tendency to recall interrupted activities than finished ones. We can make the corresponding derivation as follows, where C is the completed task, U is the unfinished task, and the corresponding systems are represented by S(C) and S(U), respectively [2]:
1. t(S(U)) > 0, according to Assumption 1;
2. t(S(C)) = 0, according to Assumption 2;
3. f(P,U) > f(P,C), according to Assumption 3a.
Note: Zeigarnik computed the ratio as RU/RC = 1.9, where RU represents unfinished tasks and RC represents completed tasks.
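A toy numerical rendering of this derivation is sketched below; the binary tension values and the recall rule are illustrative assumptions, not Zeigarnik’s experimental procedure.

```python
def tension(task):
    """Assumptions 1 and 2: an unfinished task keeps its tension,
    a completed task releases it (t(S(C)) = 0)."""
    return 1.0 if not task["completed"] else 0.0

def recall_force(task):
    """Assumption 3a: the force toward recall grows with tension."""
    return tension(task)

tasks = [
    {"name": "write report", "completed": False},
    {"name": "send email", "completed": True},
    {"name": "fix bug", "completed": False},
]

recalled = [t["name"] for t in tasks if recall_force(t) > 0]
print(recalled)  # unfinished tasks dominate recall, echoing the RU/RC > 1 finding
```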
This model has several prerequisites regarding the dynamic character of this field, specifically with respect to the maintenance of tension and internal variety during a specific period of time. Since we expect this to be a very fluid field, any differences between the tension levels of the various systems will tend to disappear quickly as they tend toward equilibrium.
A human actor thus must have some degree of variance, or fluidity, in regard to the communication of his/her systems that cause tension. This type of fluidity must clearly vary between people and situations, but if we assume the constancy of structural relations between them, we can express them in the following way [2]:
Corollary: Let us indicate the absolute difference between the tension t(S1) and the tension t(S2) of two neighboring systems S1 and S2 at the time the tensions are being built up by Δt(0), the time since then elapsed by Ti, the tension difference at this time by Δt(i), and the fluidity by flu. Then we can state Δt(0) − Δt(i) = F(Ti, flu), where F symbolizes a monotonically increasing function. This means that the change in the tension difference of neighboring systems depends upon the time interval and the fluidity. Of course, this holds true only if the tensions of these systems are not changed by other factors such as, e.g., release of tension by reaching the goal.
Applying field theory to this new observation, we can make the following propositions:
a) Behavior is derived from the universe of coexisting facts;
b) These coexisting facts are similar in behavior to a dynamic field; that is, the state of any subfield depends on the rest of the field as a whole.
Proposition (a) presupposes that we are dealing with a manifold, the internal relationships of which must be represented using the concept of space.
Spatial relationships between different psychological data cannot be represented in physical space, and instead must be treated as existing in mental space. It is widely accepted that the "life space" includes the person and his/her mental environment or world.
Locating and applying an appropriate geometry that can represent the spatiotemporal relationships of psychological facts is one of the first requirements of representing the mental space. With modern computational hardware, this is more feasible today than ever before, and certainly more so than in Zeigarnik’s era. Today, one can find many geometries that permit mathematical understanding of the hodological space.
This is a finitely structured space, meaning that its subspaces and components are not infinitely divisible, but they are composed of certain units. Direction and distance are quantities that can be represented by "distinguished paths," which are easily correlated to psychological locomotion.
The geometry of hodological space is sufficiently well-defined that it can adequately represent most psychological processes, and it also permits an answer to the puzzling necessity to ascribe different psychological intentions and actions to locomotions in the same physical direction and dimension. This feature is particularly critical for the roundabout route problem: hodological space permits the description of structural relations inherent to the actor as well as in psychological operating environment.
Hodological space is equally useful for describing the structure and functions of groups of people. It is even more useful, however, when describing dynamic systems. Field theory tells us that behavior depends on the present field, not on past or future fields. This stands in contrast to teleology, which holds that the future is the cause of behavior, and to associationism, which holds that the past is the cause of behavior.
In addition, one must not make the mistake of assuming that directed factors are in themselves characteristic of teleology. These causal assumptions are even visible in physics, since physical force is a directed entity consisting of a magnitude and a direction, which allows its representation as a vector. The same goes for metapsychology, which resorts to constructs of similar vector-like character, such as psychological forces.
By defining directed forces in hodological space, we can adequately represent other teleological claims. The strange relationship between knowledge and dynamics that teleology has attempted to represent intuitively is made more comprehensible in one fundamental way: it becomes clear why ignorance serves as a barrier.
Zeigarnik’s model provides a useful basis for modeling IA. The type-A model is the present model of the world. The type-B model is a hypothetical future model of the world based on intentions and their ability to be achieved. A system of intentions can be used to define a map from a type-A model to a type-B model and, finally, there is SA of the type-A model. The fact that each agent at a given point in time has two models (SA and IA) creates many challenges, including the proliferation of multiple perceptions, expectations, and awareness models that sometimes come into conflict with one another.
Thus emerges the four-part hypothesis [2]:
1. Model A(X) − Model A'(X) ≠ 0;
2. Model B(X) − Model B'(X) ≠ 0;
3. Model A(X) − Model A(Y) ≠ 0;
4. Model B(X) − Model B(Y) ≠ 0.
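A crude way to render this hypothesis computationally is to represent each model as a set of propositions and measure disagreement as the size of their symmetric difference; the propositions and the distance measure below are illustrative assumptions only.

```python
# Each agent holds a present (type-A) and an anticipated (type-B) model of the
# world, here simplified to sets of propositions; the primed sets stand for the
# revised models A' and B' of the same agent.
def model_distance(m1, m2):
    """Size of the symmetric difference as a toy measure of disagreement."""
    return len(m1 ^ m2)

A_X  = {"door_open", "coffee_ready"}        # X's present model
A_Xp = {"door_open"}                        # X's revised present model A'
B_X  = {"coffee_ready", "meeting_started"}  # X's anticipated model
B_Xp = {"meeting_started"}                  # X's revised anticipated model B'
A_Y  = {"door_closed", "coffee_ready"}      # Y's present model
B_Y  = {"meeting_started", "lights_off"}    # Y's anticipated model

print(model_distance(A_X, A_Xp) != 0)  # 1. X's present models disagree
print(model_distance(B_X, B_Xp) != 0)  # 2. X's anticipated models disagree
print(model_distance(A_X, A_Y) != 0)   # 3. X and Y disagree about the present
print(model_distance(B_X, B_Y) != 0)   # 4. X and Y disagree about the future
```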
Intention awareness: a new role for intent in computation
Linking intent to SA-driven pictures of the environment requires more than simply creating an “intent” category in data analysis. From a computational perspective, intentions are causal inferences made from a series of events and a series of prior, linked intentions. Thus, in order for a system to possess IA, it must not simply infer based on available (i.e., situational) data, but must incorporate prior intentional analysis as one of its information sources. Thus, one of the fundamental distinctions between SA as we now know it and IA is that, while the former concerns data analysis that humans probably cannot process in the amount of time required, the latter concerns information analysis, a task that humans perform on a regular basis, but that can be enhanced by artificial systems (Figure 2).
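One way to picture this distinction is a loop in which each inference step consumes both fresh situational data and previously inferred intentions; the sketch below is a hypothetical illustration of that structure, not an implementation of any cited system.

```python
def infer_intentions(situational_data, prior_intentions):
    """Toy IA step: inference over current events plus prior, linked
    intentions (here reduced to a simple frequency heuristic)."""
    inferred = set(prior_intentions)  # prior intentional analysis is an input
    if situational_data.count("near_coffee_machine") >= 2:
        inferred.add("make_coffee")
    return inferred

intentions = set()
for window in (["near_coffee_machine", "near_sofa"],
               ["near_coffee_machine", "near_coffee_machine"]):
    intentions = infer_intentions(window, intentions)

print(intentions)  # prior intentional analysis carries across observation windows
```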
One promising example of the extension of intention into SA is computational “sensemaking,” or the process by which humans (or other reasoners) attach semantics and sentics to their observations of the operating environment in a parallel and dynamic way [35]. IA, in fact, is a viable means of improving synchronization in sensemaking between the human reasoner and associated system interfaces. This is due to two primary factors. The first is that IA is becoming increasingly viable thanks to advances in computer hardware performance. Second, since IA simultaneously relieves human operators of some of the analytical burden and uses structures similar to those found in human cognition, interface-based latency is minimized.