Understanding the importance and role of tacit knowledge is a key element in formulating knowledge management strategies for mobile-learning systems. Unlike explicit knowledge, which is codified, tacit knowledge is embedded in system actors (i.e., learners and other dynamic context elements) [56] and is difficult to share and distribute. In this study, the upper ontology space, built from the five integrated sub-ontologies, is structured so that tacit knowledge related to learners and their context can be defined at the semantic level and exploited in the personalization process. In particular, knowledge in the upper ontology space is organized to allow a conceptual learner model to be built from a sequence of learning activities by linking a user's conceptualization to a particular subject-domain ontology. This constitutes the key design aspect of our upper ontology space for enabling personalized learning. A learning activity is characterized by user interactions, through which contextual information related to the user's surroundings, as well as the concepts covered, queries performed, and learning resources consumed, is stored and used as knowledge facts to enable further inferences aimed at better adaptation. Thus, the perceived context and the accessed domain information are used to build a personalized learning path that is aware of the learner's interaction history, preferences, background knowledge, and operating environment.
The adopted approach models the learner at the semantic level by providing a formal conceptualization of the learner defined in OWL, which allows reasoning upon it to infer the learner's understanding of the subject domain. The reasoning is performed using SWRL (Semantic Web Rule Language) rules applied to knowledge represented in the OWL-DL (Description Logics) ontology. It should be noted that reasoning in systems integrating DL ontologies and rules is a very hard task [56], mainly due to the undecidability of reasoning in such systems; this is particularly the case for systems integrating DL ontologies with recursive or hybrid rules [57],[58]. Bridging the discrepancy between these two knowledge representations is therefore a challenging problem. Many studies [57]–[60] have shown that, to avoid undecidability, practically all decidable approaches to integrating ontologies and rules must impose specific conditions that restrict the interaction between the rules and the ontology. In this work, we adopt a similar decidable reasoning approach by adhering to such restrictions, for example avoiding recursive and hybrid rules. In particular, we use the SWRL-Jess Bridge, which provides the infrastructure for incorporating the Jess rule engine into Protégé-OWL to execute SWRL rules. The system also relies on SWRL built-in libraries, which include an implementation of the core SWRL built-ins as well as mathematical built-ins, to support the use of complex expressions in rules, reasoning with temporal information, and querying OWL ontologies.
The sequence of steps given below illustrates the personalization process adopted by our system in a typical learning scenario, where a learner submits a query in a specific subject-domain area and receives a planned learning sequence fulfilling the learning goal. This scenario is also depicted graphically in Figure 10, and an illustrative code sketch of the overall loop is given after the list.
1. When the learner logs in, the learner's background, preferences, and previous learning activities are retrieved.
2. The learner uses the domain ontology vocabulary to query the system.
3. The subject-domain ontology related to the learner's query is identified and retrieved.
4. Based on the learner's query, the system infers the related ontology concept(s) and identifies those concepts that are part of prerequisite knowledge, core knowledge, and related knowledge using the HasPrerequisite, HasNecessaryPart, and HasPart properties respectively.
5. Next, the system uses the perceived device and environment atomic context elements to infer metadata that adapts the search towards learning resources suitable for the system-centric context.
6. The metadata generated in (4), dealing with related domain ontology concepts, and the system-centric metadata generated in (5) are then used to discover and filter learning resources from various learning repositories.
7. The system then determines the learner's expertise in the subject domain (i.e., tacit knowledge) by reasoning over previous learning activities, covered concepts, adopted learning paths, and consumed learning resources. This knowledge is used to build a personalized learning sequence that is aware of the learner's history and available learning time. Thus, the newly constructed learning sequence consists of system-centric, optimized learning resources that fulfill the learner's current activity and goals.
8. The personalized learning sequence is then provided to the learner for navigation.
9. Based on the newly selected concept, the learner's expertise is automatically updated and the personalized learning path is re-adjusted by resuming processing from step (4).
10. The learning activity terminates when either the learner logs out of the system, or all domain concepts related to the current activity have been covered.
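For illustration only, the following Python sketch outlines how such a loop could be organized; the data structures and helper names are assumptions of the sketch, since the actual system realizes these steps through ontology reasoning and SWRL rules executed via the SWRL-Jess Bridge.

```python
# Minimal, self-contained sketch of the personalization loop (steps 1-10 above).
# All names are illustrative placeholders, not the actual implementation.

from dataclasses import dataclass, field

@dataclass
class LearnerProfile:
    covered_concepts: set = field(default_factory=set)      # step 1: retrieved at login
    consumed_resources: set = field(default_factory=set)
    available_time: float = 60.0                             # assumed unit: minutes

def personalization_loop(profile, domain, query_concept, select_next):
    """domain: concept -> list of (resource, learning_time) pairs (steps 3-4).
    select_next: callback standing in for the learner's navigation (steps 8-9)."""
    concept = query_concept
    while concept is not None:
        # Steps 5-7: keep only uncovered concepts, unconsumed resources and
        # resources that fit the learner's available time (system-centric
        # filtering such as media type/size is omitted in this sketch).
        sequence = [(c, r) for c, items in domain.items()
                    if c not in profile.covered_concepts
                    for r, t in items
                    if r not in profile.consumed_resources and t <= profile.available_time]
        concept = select_next(sequence)                      # step 8: learner navigates
        if concept is not None:
            profile.covered_concepts.add(concept)            # step 9: update expertise
        if set(domain) <= profile.covered_concepts:          # step 10: all concepts covered
            break
    return profile
```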
The above strategy seeks the most suitable personalized learning path by dynamically updating the learning sequence based on the learner's interactions with the system and the surrounding environment. Below, we give a detailed description of the system-centric and learner-centric adaptations used in the personalization process, as well as the algorithm used to navigate a learning path.
4.1 System-centric adaptation
In a mobile environment, the system must apply system-centric adaptations to cope with the limited resources of the devices in use and with insecure, low-bandwidth wireless networks. System-centric adaptations therefore aim at filtering learning resources so that they can be transmitted efficiently over the network and run properly on the device in use. These adaptations are triggered by the context monitoring process, which identifies context changes and proactively performs actions such as restricting the media type and pruning large learning resources from the learning path when network bandwidth is low. A number of inference rules operating on the perceived device and environment atomic context elements have been developed to achieve this goal. The diagram in Figure 11 describes the logical steps used to achieve the main system-centric adaptations considered in this study.
When the learner logs into the system, the system first senses the network adaptor in use and retrieves its connection speed. Connection speed is an attribute that is straightforward to obtain and represents the maximum theoretical speed of the wireless adaptor in use [56]. Knowing the type of the network connection, such as IEEE 802.11 or GPRS, allows our reasoning engine to make adaptation choices related to the media type and size of the resources to be retrieved, taking into account the available bandwidth and device features. For example, if the network connection is IEEE 802.11, the system does not need to sense the network bandwidth and places no restrictions on media type, because the available bandwidth is stable and large enough to handle all types of resources. However, if the sensed connection is GPRS, the system adapts the media type and size based on the available bandwidth, as explained below. For example, for a GPRS connection with a maximum connection speed of 48 kbps, the actual network bandwidth is usually less than 48 kbps due to traffic on the network [54]. Ideally, the system should continuously sense and update the current network bandwidth whenever a bandwidth change occurs. However, continuously sensing and updating the ever-changing bandwidth is time- and resource-consuming, as it involves sending data packets through the network. To solve this problem, we sense the actual bandwidth only at certain points in time, and we use a fuzzy logic approach in conjunction with SWRL rules to predict the available bandwidth between these points. To reason with bandwidth, we also translate the predicted current bandwidth into meaningful symbolic values such as low, medium, and high. Fuzzy logic is likewise used to predict the maximum size of learning resources that can be communicated without incurring long delays. For instance, we search only for text resources if a mobile device operating on a GPRS network has very low bandwidth, but we can extend the media type to image and video if the network bandwidth is high and the available device memory is large. We also perform other checks to adapt to features such as screen resolution, the operating system in use, and network security. Below we describe the fuzzy logic approach used to predict media type and size, and we give a full scenario illustrating all system-centric adaptations.
We make use of fuzzy logic truth values in conjunction with SWRL rules to allocate a symbolic value to the predicted current network bandwidth. Figure 12 shows the membership function for network bandwidth, which is used to predict the network bandwidth using the fuzzy qualifying linguistic variables low, medium, and high. Note that Maxband, which stands for maximum bandwidth, is associated with the maximum connection speed of the wireless adaptor in use. The symbol $\mu_A(x)$ represents a truth value between 0 and 1. Based on the membership function given in Figure 12, $\mu_A(x)$ can be computed by (1).
$$
\mu_{Low}(x)=\begin{cases}1, & x\le 0.25\,\mathrm{Maxband} \quad (L_1)\\[4pt] \dfrac{0.5\,\mathrm{Maxband}-x}{0.25\,\mathrm{Maxband}}, & 0.25\,\mathrm{Maxband}<x\le 0.5\,\mathrm{Maxband} \quad (L_2)\\[4pt] 0, & \text{otherwise}\end{cases}
$$

$$
\mu_{Medium}(x)=\begin{cases}\dfrac{x-0.25\,\mathrm{Maxband}}{0.25\,\mathrm{Maxband}}, & 0.25\,\mathrm{Maxband}<x\le 0.5\,\mathrm{Maxband}\\[4pt] \dfrac{0.75\,\mathrm{Maxband}-x}{0.25\,\mathrm{Maxband}}, & 0.5\,\mathrm{Maxband}<x\le 0.75\,\mathrm{Maxband}\\[4pt] 0, & \text{otherwise}\end{cases}
$$

$$
\mu_{High}(x)=\begin{cases}\dfrac{x-0.5\,\mathrm{Maxband}}{0.25\,\mathrm{Maxband}}, & 0.5\,\mathrm{Maxband}<x\le 0.75\,\mathrm{Maxband}\\[4pt] 1, & x>0.75\,\mathrm{Maxband}\\[4pt] 0, & \text{otherwise}\end{cases}
\tag{1}
$$
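As a cross-check of (1), the following Python fragment is an illustrative re-implementation of the membership computation (the system itself evaluates it through the SWRL TruthValueRules); it reproduces the truth values used in the scenario below, where 18 kbps sensed on a 32 kbps GPRS link yields 0.75 for Medium and 0.25 for High.

```python
def bandwidth_truth_values(x, maxband):
    """Membership degrees for the Low, Medium and High bandwidth fuzzy sets.
    Breakpoints at 0.25, 0.5 and 0.75 of the adaptor's maximum connection
    speed, mirroring the TruthValueRules of Rule-Set-1 (illustrative sketch)."""
    q1, q2, q3 = 0.25 * maxband, 0.5 * maxband, 0.75 * maxband
    low = 1.0 if x <= q1 else ((q2 - x) / (q2 - q1) if x <= q2 else 0.0)
    med = 0.0 if x <= q1 or x > q3 else \
          ((x - q1) / (q2 - q1) if x <= q2 else (q3 - x) / (q3 - q2))
    high = 0.0 if x <= q2 else ((x - q2) / (q3 - q2) if x <= q3 else 1.0)
    return {"Low": low, "Medium": med, "High": high}

# Worked example used later in the text: 18 kbps sensed on a 32 kbps GPRS link.
print(bandwidth_truth_values(18.0, 32.0))  # {'Low': 0.0, 'Medium': 0.75, 'High': 0.25}
```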
In Rule-Set-1 we describe the SWRL rules used to infer the truth values of the classified symbolic network bandwidth given in (1). For the sake of space we give only TruthValueRule1 and TruthValueRule2, which are related to $L_1$ and $L_2$ of (1) respectively, and which are used to infer the truth values associated with low network bandwidth. The abstract SWRL syntax, which is consistent with the OWL specification, is rather verbose and not particularly easy to read [61]. Instead, we use a relatively informal "human readable" form in which both the antecedent and the consequent are conjunctions of atoms with variables prefixed by a question mark (e.g., ?x), and which may also include functional notations, as shown in Rule-Set-1. The latter shows the two above-mentioned rules, which use the property UsedDevice(?act, ?dev), associating a learner identified by an activity identifier act with their handheld device dev, and the data properties HasBandwidth and MaxBandwidth, which describe respectively the current network bandwidth and the maximum connection speed of the handheld device in use.
Rule-Set-1 SWRL rules for fuzzy logic truth values
TruthValueRule1
ActivityID(?act) ∧ UsedDevice(?act, ?dev) ∧ HasBandwidth(?dev, ?b) ∧ HasNetworkAdaptor(?dev, GPRS) ∧ MaxBandwidth(?dev, ?Maxband) ∧ swrlb:multiply(?Lowband, ?Maxband, 0.25) ∧ swrlb:lessThanOrEqual(?b, ?Lowband) → ProbLow(?dev, 1.0) ∧ NetworkBandwidth(?dev, "Low")
TruthValueRule2
ActivityID(?act) ∧ UsedDevice(?act, ?dev) ∧ HasBandwidth(?dev, ?b) ∧ HasNetworkAdaptor(?dev, GPRS) ∧ MaxBandwidth(?dev, ?Maxband) ∧ swrlb:multiply(?Lowband, ?Maxband, 0.25) ∧ swrlb:multiply(?Mediumband, ?Maxband, 0.5) ∧ swrlb:greaterThan(?b, ?Lowband) ∧ swrlb:lessThanOrEqual(?b, ?Mediumband) ∧ swrlb:subtract(?z1, ?Mediumband, ?b) ∧ swrlb:subtract(?z2, ?Mediumband, ?Lowband) ∧ swrlb:divide(?z, ?z1, ?z2) → ProbLow(?dev, ?z) ∧ NetworkBandwidth(?dev, "Low")
The following real-life scenario illustrates how the SWRL rules shown in Rule-Set-1 are applied. Let's assume that Irene is using the "Motorola W270" mobile device, referred to here as MotoW270, which supports a maximum connection speed of 32.0 kbps. Let's also assume that the most recently sensed network bandwidth (x*) is around 18.0 kbps, which falls between the medium and high bandwidth ranges relative to the maximum connection speed, as shown in Figure 12. When TruthValueRule4 and TruthValueRule5 are applied, facts (A2) and (B2) are inferred, resulting in the addition of four statements to the list of facts, as shown in Rule-Set-2. These new facts give the probabilities for the predicted current bandwidth, found to be 0.75 for medium bandwidth and 0.25 for high bandwidth.
The inferred probabilities associated with the current bandwidth are then used to determine the maximum allowable resource size. The aim is to target resources of reasonable size, as it is not practical to offer the learner a large learning resource (e.g., a few Mbytes) if the device in use operates on a low bandwidth (e.g., a few kbps). We therefore conducted a few experiments on three devices with different capabilities to identify typical threshold values for resource sizes that can be used within specific bandwidth ranges. The devices used were a basic Nokia phone emulator, a Sony Ericsson W830C, and an HTC-S261 smartphone, which have maximum bandwidths of 32 kbps, 48 kbps, and 120 kbps respectively. We used these devices to request learning resources of different media types and sizes under a network bandwidth varying from 8 kbps to 120 kbps. Based on these experiments, we adopted the following assumptions to keep a reasonable response time: if a mobile device has a connection speed of less than 32 kbps, resources exceeding 500 Kbytes should not be considered; if the connection speed is between 32 kbps and 66 kbps, resources over 1 Mbyte should not be considered. We used these threshold values along with the previously inferred symbolic values of the current network bandwidth to predict the maximum allowable resource size.
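These thresholds can be summarized as a simple lookup; the sketch below is only illustrative, and the behaviour for connection speeds above 66 kbps, which is not specified above, is left open in the sketch.

```python
def max_resource_size_kb(connection_speed_kbps):
    """Maximum allowable resource size (Kbytes) per connection-speed range,
    based on the experimentally derived thresholds reported in the text."""
    if connection_speed_kbps < 32:
        return 500        # below 32 kbps: at most 500 Kbytes
    elif connection_speed_kbps <= 66:
        return 1024       # 32-66 kbps: at most 1 Mbyte
    else:
        return None       # faster links: no cap derived from the experiments cited here
```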
Rule-Set-2 Examples of truth value inferences
Ontology related facts
A1) before applying TruthValueRule4
ActivityID(Irene)
UsedDevice(Irene, MotoW270)
HasBandwidth(MotoW270, 18.0)
HasNetworkAdaptor(MotoW270, GPRS)
MaxBandwidth(MotoW270, 32.0)
swrlb:multiply(?Highband, 32.0, 0.75)
swrlb:multiply(?Mediumband, 32.0, 0.5)
swrlb:greaterThan(18.0, 16.0)
swrlb:lessThanOrEqual(18.0, 24.0)
swrlb:subtract(?z1, 24.0, 18.0)
swrlb:subtract(?z2, 24.0, 16.0)
swrlb:divide(?z, 6.0, 8.0)
B1) before applying TruthValueRule5
UsedDevice(Irene, MotoW270)
HasBandwidth(MotoW270, 18.0)
HasNetworkAdaptor(MotoW270, GPRS)
MaxBandwidth(MotoW270, 32.0)
swrlb:multiply(?Highband, 32.0, 0.75)
swrlb:multiply(?Mediumband, 32.0, 0.5)
swrlb:greaterThan(18.0, 16.0)
swrlb:lessThanOrEqual(18.0, 24.0)
swrlb:subtract(?z1, 18.0, 16.0)
swrlb:subtract(?z2, 24.0, 16.0)
swrlb:divide(?z, 2.0, 8.0)
Inferred facts
A2) after applying TruthValueRule4
ProbMedium(MotoW270, 0.75)
NetworkBandwidth(MotoW270, "Medium")
B2) after applying TruthValueRule5
ProbHigh(MotoW270, 0.25)
NetworkBandwidth(MotoW270, "High")
It should be noted that the system is designed in such a way that the maximum tolerable response time, which depends on the user's activity and the nature of the requested resources, can easily be modified to accommodate learners with more or less restrictive time constraints. Figure 13 shows the main components of the fuzzy system used, which consists of a singleton fuzzifier, a product inference engine, a fuzzy rule base, and a center average defuzzifier. The system starts with the fuzzification of the input variable, followed by rule evaluation and aggregation, where aggregation is the process of unifying the outputs of all rules. The last step is defuzzification, which produces a crisp output [50]. Figure 14 shows the membership function for resource size.
The three fuzzy sets Low, Medium, and High describing the predicted network bandwidth are used as the input space of the fuzzy system to predict the maximum allowable resource size. We also define three fuzzy sets Small, Medium, and Large as the output space (resource size), as shown in Figure 14. Note that we use SmallSize, MediumSize, and LargeSize to refer to the center average values of the Small, Medium, and Large fuzzy sets respectively. The fuzzy rule base consists of the three simple rules shown below.
R1: if network bandwidth (B) is Low then resource size (Z) is set to Small
R2: if network bandwidth (B) is Medium then resource size (Z) is set to Medium
R3: if network bandwidth (B) is High then resource size (Z) is set to Large
Suppose that the fuzzy set $F^{k}$ in fuzzy rule $R^{k}$ of the rule base is normal with center $\bar{y}^{k}$. Then the crisp output of the fuzzy system with singleton fuzzifier, product inference engine, center average defuzzifier, and rule base $\{R^{k}\}$, where $R^{k}$ has the form "if $x_{1}$ is $A_{1}^{k}$ and ... and $x_{n}$ is $A_{n}^{k}$, then $y$ is $F^{k}$", $k = 1,\dots,N$, is given by [50]:

$$
y = \frac{\displaystyle\sum_{k=1}^{N}\bar{y}^{k}\prod_{i=1}^{n}\mu_{A_{i}^{k}}(x_{i})}{\displaystyle\sum_{k=1}^{N}\prod_{i=1}^{n}\mu_{A_{i}^{k}}(x_{i})}
\tag{2}
$$
In our case, with the rule base R1–R3 above, (2) reduces to:

$$
Z = \frac{\mathrm{SmallSize}\cdot\mu_{Low}(B) + \mathrm{MediumSize}\cdot\mu_{Medium}(B) + \mathrm{LargeSize}\cdot\mu_{High}(B)}{\mu_{Low}(B) + \mu_{Medium}(B) + \mu_{High}(B)}
\tag{3}
$$

where $B$ is the predicted current network bandwidth and $Z$ is the maximum allowable resource size.
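Equation (3) can be checked directly; the short Python fragment below is an illustrative re-implementation of the center average defuzzification (the SWRL counterpart is ResourceSizeRule in Rule-Set-3), with SmallSize, MediumSize, and LargeSize set to 0.25, 0.5, and 0.75 of the device's maximum size, as in that rule.

```python
def max_resource_size(truth, max_size):
    """Center average defuzzification of equation (3) - illustrative sketch.
    truth: membership degrees for 'Low', 'Medium', 'High' bandwidth.
    max_size: device-specific maximum resource size (Kbytes)."""
    centers = {"Low": 0.25 * max_size, "Medium": 0.5 * max_size, "High": 0.75 * max_size}
    num = sum(centers[s] * truth[s] for s in centers)
    den = sum(truth[s] for s in centers)
    return num / den

# Irene's scenario below: truth values 0.0 / 0.75 / 0.25 and MaxSize = 500 Kbytes.
print(max_resource_size({"Low": 0.0, "Medium": 0.75, "High": 0.25}, 500.0))  # 281.25
```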
In Rule-Set-3, we show the SWRL rule (ResourceSizeRule) associated with equation (3). In ResourceSizeRule, the data properties ProbLow, ProbMedium, and ProbHigh hold the probabilities obtained by the TruthValueRules (see Rule-Set-1). To show how these rules are applied in our system, we provide a real-life scenario. Let's assume that our learner Irene is using a GPRS connection with a maximum connection speed of 32 kbps. This connection speed limits the maximum resource size to 500 Kbytes, as described above. These assumptions are represented by facts (A1) in Rule-Set-3. When ResourceSizeRule is applied, fact (A2) is inferred, resulting in the addition of the statement FileSize(MotoW270, 281.25) to the list of facts. Indeed, since the previously sensed network bandwidth was 18 kbps, our system chooses not to exchange resources over 281.25 Kbytes, as deduced from the set of inferences shown in Rule-Set-3.
Rule-Set-4 describes the SWRL rules used to select the media type of the retrieved learning resources based on the predicted current bandwidth. The data properties NetworkBandwidth and AvailableMemory represent respectively the current bandwidth and the available device memory. In MediaRule1-to-3, the system sets the media type to the appropriate format(s) (i.e., text, image, video) based on the value of the predicted current network bandwidth. The system also adjusts the maximum allowable resource size, as computed in Rule-Set-3, based on the device's available memory: if the device memory is smaller than the maximum allowable resource size computed in Rule-Set-3, AllowedResourceSizeRule1 sets the maximum size to the device memory size; otherwise the maximum size remains unchanged, as stated in AllowedResourceSizeRule2.
Rule-Set-3 SWRL rule and inferences for determining maximum allowed resource size
ResourceSizeRule
ActivityID(?act) ∧ UsedDevice(?act, ?dev) ∧ ProbLow(?dev, ?Tl) ∧ ProbMedium(?dev, ?Tm) ∧ ProbHigh(?dev, ?Th) ∧ MaxSize(?dev, ?Maxsize) ∧ swrlb:multiply(?Lowsize, 0.25, ?Maxsize) ∧ swrlb:multiply(?Mediumsize, 0.5, ?Maxsize) ∧ swrlb:multiply(?Largesize, 0.75, ?Maxsize) ∧ swrlb:multiply(?l, ?Lowsize, ?Tl) ∧ swrlb:multiply(?m, ?Mediumsize, ?Tm) ∧ swrlb:multiply(?h, ?Largesize, ?Th) ∧ swrlb:add(?z1, ?l, ?m, ?h) ∧ swrlb:add(?z2, ?Tl, ?Tm, ?Th) ∧ swrlb:equal(?z2, 1) ∧ swrlb:divide(?z, ?z1, ?z2) → FileSize(?dev, ?z)
Ontology related facts
A1) before applying ResourceSizeRule
ActivityID(Irene)
UsedDevice(Irene, MotoW270)
ProbLow(MotoW270, 0.0)
ProbMedium(MotoW270, 0.75)
ProbHigh(MotoW270, 0.25)
MaxSize(MotoW270, 500)
swrlb:multiply(?Lowsize, 0.25, 500.0)
swrlb:multiply(?Mediumsize, 0.5, 500.0)
swrlb:multiply(?Largesize, 0.75, 500.0)
swrlb:multiply(?l, 125.0, 0.0)
swrlb:multiply(?m, 250.0, 0.75)
swrlb:multiply(?h, 375.0, 0.25)
swrlb:add(?z1, 0.0, 187.5, 93.75)
swrlb:add(?z2, 0.0, 0.75, 0.25)
swrlb:divide(?z, 281.25, 1)
Inferred facts
A2) after applying ResourceSizeRule
FileSize(MotoW270, 281.25)
To show how the above rules are applied, we use the previous scenario of learner Irene, who is using device MotoW270 operating at a bandwidth of 18.0 kbps to access the system services. Following the reasoning shown in Rule-Set-2, the system infers a bandwidth between medium and high, as shown in facts (A1) and (B1). When MediaRule2 and MediaRule3 are applied, facts (A2) and (B2) are respectively inferred and added to the list of facts, as shown in Rule-Set-5. In (C1), AllowedResourceSizeRule2 is applied to compare the maximum allowed resource size previously inferred in Rule-Set-3 with the device's available memory, leading to the addition of the statement AllowedSize(MotoW270, 281.25) to the list of inferred facts, as shown in (C2). The outcome of the ontology reasoning for this scenario is that all types of media can be selected for delivery, provided their size does not exceed 281.25 Kbytes, so that they can be efficiently ported to the device in use without incurring long communication delays.
Rule-Set-4 SWRL rules for media type and resource size selection
MediaRule1
ActivityID(?act) ∧ UsedDevice(?act, ?dev) ∧ NetworkBandwidth(?dev, "Low") → HasMediaType(?dev, Text)
MediaRule2
ActivityID(?act) ∧ UsedDevice(?act, ?dev) ∧ NetworkBandwidth(?dev, "Medium") → HasMediaType(?dev, Text) ∧ HasMediaType(?dev, Image)
MediaRule3
ActivityID(?act) ∧ UsedDevice(?act, ?dev) ∧ NetworkBandwidth(?dev, "High") → HasMediaType(?dev, Text) ∧ HasMediaType(?dev, Image) ∧ HasMediaType(?dev, Video)
AllowedResourceSizeRule1
ActivityID(?act) ∧ UsedDevice(?act, ?dev) ∧ FileSize(?dev, ?Size) ∧ AvailableMemory(?dev, ?MemorySize) ∧ swrlb:lessThan(?MemorySize, ?Size) → AllowedSize(?dev, ?MemorySize)
AllowedResourceSizeRule2
ActivityID(?act) ∧ UsedDevice(?act, ?dev) ∧ FileSize(?dev, ?Size) ∧ AvailableMemory(?dev, ?MemorySize) ∧ swrlb:greaterThanOrEqual(?MemorySize, ?Size) → AllowedSize(?dev, ?Size)
Rule-Set-5 Inferences for media type selection
Ontology related facts
A1) before applying MediaRule2
ActivityID(Irene)
UsedDevice(Irene, MotoW270)
NetworkBandwidth(MotoW270, "Medium")
B1) before applying MediaRule3
ActivityID(Irene)
UsedDevice(Irene, MotoW270)
NetworkBandwidth(MotoW270, "High")
C1) before applying AllowedResourceSizeRule2
ActivityID(Irene)
UsedDevice(Irene, MotoW270)
FileSize(MotoW270, 281.25)
AvailableMemory(MotoW270, 1024.0)
swrlb:greaterThanOrEqual(1024.0, 281.25)
Inferred facts
A2) after applying MediaRule2
HasMediaType(MotoW270,Text)
HasMediaType(MotoW270,Image)
B2) after applying MediaRule3
HasMediaType(MotoW270,Text)
HasMediaType(MotoW270,Image)
HasMediaType(MotoW270,Video)
C2) after applying AllowedResourceSizeRule2
AllowedSize(MotoW270,281.25)
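Read procedurally, MediaRule1-to-3 and the two AllowedResourceSizeRules amount to a small decision function. The Python sketch below is illustrative only; it mirrors the effect of the SWRL rules rather than replacing them.

```python
def media_adaptation(symbolic_bandwidth, file_size_kb, available_memory_kb):
    """Illustrative mirror of MediaRule1-3 and AllowedResourceSizeRule1-2.
    Returns the permitted media types and the maximum allowed resource size."""
    media_by_bandwidth = {
        "Low": ["Text"],
        "Medium": ["Text", "Image"],
        "High": ["Text", "Image", "Video"],
    }
    media_types = media_by_bandwidth[symbolic_bandwidth]
    # AllowedResourceSizeRule1/2: clamp the size to the available device memory.
    allowed_size = min(file_size_kb, available_memory_kb)
    return media_types, allowed_size

# Irene's scenario: high bandwidth, FileSize 281.25 Kbytes, 1024 Kbytes of free memory.
print(media_adaptation("High", 281.25, 1024.0))  # (['Text', 'Image', 'Video'], 281.25)
```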
Another type of adaptation considered in this study is language adaptation, which takes into account the language preferred and used by the learner. LanguageRule in Rule-Set-6 establishes a constraint represented by the relationship SearchLanguage, which associates a language with a specific learner activity. The property PreferredLanguage(?act, ?lan) links an activity identifier act to a preferred language lan, while the property SupportLanguage(?dev, ?lan) links the learner's handheld device dev to a language lan it supports. For instance, let's assume that French is Irene's preferred language, and that English and French are the languages supported by the device MotoW270 in use. When LanguageRule is applied to facts (A1) in Rule-Set-6, we can infer (A2), that is SearchLanguage(Irene, French), confirming that French can be used as the search language because it is supported by the device in use. The knowledge base developed in this study also includes rules that deal with other system-centric adaptations such as network security, operating system compatibility, and screen resolution.
Rule-Set-6 SWRL rule for language adaptation
LanguageRule
ActivityID(?act) ∧ UsedDevice(?act,?dev) ∧ PreferredLanguage(?act, ?lan) ∧ SupportLanguage(?dev, ?lan) → SearchLanguage(?act, ?lan)
Ontology related facts
A1) before applying LanguageRule
ActivityID(Irene)
UsedDevice(Irene, MotoW270)
PreferredLanguage(Irene, French)
SupportLanguage(MotoW270, English)
SupportLanguage(MotoW270, French)
A2) after applying LanguageRule
SearchLanguage(Irene, French)
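In procedural terms, LanguageRule simply checks that the learner's preferred language is among those supported by the device; a minimal illustrative sketch:

```python
def search_language(preferred, supported):
    """Illustrative reading of LanguageRule: the preferred language is used for
    searching only if the device supports it; otherwise this rule infers nothing."""
    return preferred if preferred in supported else None

print(search_language("French", {"English", "French"}))  # French
```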
Rule-Set-7 SWRL rules for learning sequence construction
SimilarResourceRule1
ConductedLearningActivity(?L, ?act) ∧ MakeQuery(?act, ?Q) ∧ HasKeyword(?Q, ?C) ∧ IsMappedTo(?C, ?LR) ∧ HasLearningTime(?L, ?t) ∧ LearningTime(?LR, ?t1) ∧ swrlb:greaterThanOrEqual(?t, ?t1) → SimilarLR(?act, ?LR)
SimilarResourceRule2
ConductedLearningActivity(?L, ?act) ∧ MakeQuery(?act, ?Q) ∧ HasKeyword(?Q, ?C) ∧ Has(?C, ?Ci) ∧ ¬Covered(?L, ?Ci) ∧ IsMappedTo(?Ci, ?LRi) ∧ ¬Consumed(?L, ?LRi) ∧ HasLearningTime(?L, ?t) ∧ LearningTime(?LRi, ?t1) ∧ swrlb:greaterThanOrEqual(?t, ?t1) → SimilarLR(?act, ?LRi)
SimilarResourceRule3
ConductedLearningActivity(?L, ?act) ∧ MakeQuery(?act, ?Q) ∧ HasKeyword(?Q, ?C) ∧ Isa(?C, ?Ci) ∧ ¬Covered(?L, ?Ci) ∧ IsMappedTo(?Ci, ?LRi) ∧ ¬Consumed(?L, ?LRi) ∧ HasLearningTime(?L, ?t) ∧ LearningTime(?LRi, ?t1) ∧ swrlb:greaterThanOrEqual(?t, ?t1) → SimilarLR(?act, ?LRi)
PrerequisiteResourceRule
ConductedLearningActivity(?L, ?act) ∧ MakeQuery(?act, ?Q) ∧ HasKeyword(?Q, ?C) ∧ HasPrerequisite(?C, ?Ci) ∧ ¬Covered(?L, ?Ci) ∧ IsMappedTo(?Ci, ?LRi) ∧ ¬Consumed(?L, ?LRi) ∧ HasLearningTime(?L, ?t) ∧ LearningTime(?LRi, ?t1) ∧ swrlb:greaterThanOrEqual(?t, ?t1) → PrerequisiteLR(?act, ?LRi)
CoreResourceRule1
ConductedLearningActivity(?L, ?act) ∧ MakeQuery(?act, ?Q) ∧ HasKeyword(?Q, ?C) ∧ HasNecessaryPart(?C, ?Ci) ∧ ¬Covered(?L, ?Ci) ∧ IsMappedTo(?Ci, ?LRi) ∧ ¬Consumed(?L, ?LRi) ∧ HasLearningTime(?L, ?t) ∧ LearningTime(?LRi, ?t1) ∧ swrlb:greaterThanOrEqual(?t, ?t1) → CoreLR(?act, ?LRi)
CoreResourceRule2
ConductedLearningActivity(?L, ?act) ∧ MakeQuery(?act, ?Q) ∧ HasKeyword(?Q, ?C) ∧ IsNecessaryPartOf(?C, ?Ci) ∧ ¬Covered(?L, ?Ci) ∧ IsMappedTo(?Ci, ?LRi) ∧ ¬Consumed(?L, ?LRi) ∧ HasLearningTime(?L, ?t) ∧ LearningTime(?LRi, ?t1) ∧ swrlb:greaterThanOrEqual(?t, ?t1) → CoreLR(?act, ?LRi)
NonCoreRelatedResourceRule1
ConductedLearningActivity(?L, ?act) ∧ MakeQuery(?act, ?Q) ∧ HasKeyword(?Q, ?C) ∧ HasPart(?C, ?Ci) ∧ ¬Covered(?L, ?Ci) ∧ IsMappedTo(?Ci, ?LRi) ∧ ¬Consumed(?L, ?LRi) ∧ HasLearningTime(?L, ?t) ∧ LearningTime(?LRi, ?t1) ∧ swrlb:greaterThanOrEqual(?t, ?t1) → NonCoreRelatedLR(?act, ?LRi)
NonCoreRelatedResourceRule2
ConductedLearningActivity(?L, ?act) ∧ MakeQuery(?act, ?Q) ∧ HasKeyword(?Q, ?C) ∧ IsPartOf(?C, ?Ci) ∧ ¬Covered(?L, ?Ci) ∧ IsMappedTo(?Ci, ?LRi) ∧ ¬Consumed(?L, ?LRi) ∧ HasLearningTime(?L, ?t) ∧ LearningTime(?LRi, ?t1) ∧ swrlb:greaterThanOrEqual(?t, ?t1) → NonCoreRelatedLR(?act, ?LRi)
4.2 Learner-centric adaptation
The best personalization of learning is the one that supplements domain knowledge with the learner's tacit knowledge. This approach is adopted in the learner-centric adaptation process to produce an ontology-compliant learning sequence that is aware of the learner's background and task at hand. As shown in Figure 10, the system first infers the concepts related to the learner's query using knowledge embedded in the subject-domain ontology; this ensures that the initial learning sequence is ontology compliant. Then, knowledge related to the learner's context is used to avoid repeating already covered concepts and already consumed learning resources. In Rule-Set-7, we show the rules used to derive the initial learning sequence. First, the reasoning engine uses SimilarResourceRule1-to-3 to check for resources that are directly related to the concept(s) in the learner's query. Second, PrerequisiteResourceRule is applied to infer prerequisite knowledge based on the HasPrerequisite relationship. Third, the reasoner checks for core knowledge using CoreResourceRule1&2, which are based on the necessary part-whole (HasNecessaryPart) relationship, thus inferring sub-concepts that should be covered to completely understand the queried concept(s). Finally, the learning sequence is complemented with non-core related knowledge inferred using NonCoreRelatedResourceRule1&2, which are based on HasPart and its reciprocal relationship IsPartOf. Each of the above-mentioned rules checks whether the time needed to consume the recommended resource is within the learner's available time, and already covered concepts and consumed resources are automatically eliminated from the learning sequence, thus making use of tacit knowledge retrieved from the learner profile.
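Read procedurally, the rules of Rule-Set-7 filter candidate resources by relation type, coverage, consumption, and available time. The following Python sketch is a simplified illustration only; its data structures (the domain and resources dictionaries, and the profile object with covered_concepts and consumed_resources sets) are assumptions of the sketch, not the actual ontology interface.

```python
def build_learning_sequence(profile, concept, domain, resources, available_time):
    """Simplified reading of Rule-Set-7 (illustrative, not the SWRL rules).
    domain[c] maps relation names (e.g. 'HasPrerequisite') to related concepts;
    resources[c] is a list of (resource_id, learning_time) mapped to concept c."""
    def eligible(c):
        return [r for r, t in resources.get(c, [])
                if r not in profile.consumed_resources and t <= available_time]

    sequence = [("Similar", r) for r in eligible(concept)]          # SimilarResourceRules
    for relation, label in [("HasPrerequisite", "Prerequisite"),
                            ("HasNecessaryPart", "Core"),
                            ("HasPart", "NonCoreRelated")]:
        for ci in domain.get(concept, {}).get(relation, []):
            if ci not in profile.covered_concepts:                  # skip covered concepts
                sequence += [(label, r) for r in eligible(ci)]
    return sequence
```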
Rule-Set-8 Example of learning sequence construction
Ontology related facts
A1) before applying SimilarResourceRule1
ConductedLearningActivity(Irene, A1)
MakeQuery(A1, Logical Express)
HasKeyword(Logical Express, C28)
IsMappedTo(C28, LR28a)
IsMappedTo(C28, LR28b)
B1) before applying CoreResourceRule1
ConductedLearningActivity(Irene, A1)
MakeQuery(A1, Logical Express)
HasKeyword(Logical Express, C28)
HasNecessaryPart(C28,C29)
HasNecessaryPart(C28,C30)
HasNecessaryPart(C28,C31)
HasNecessaryPart(C28,C32)
¬Covered(Irene,C31)
¬Covered(Irene,C32)
IsMappedTo(C31,LR31a)
IsMappedTo(C31,LR31b)
IsMappedTo(C32,LR32a)
¬Consumed(Irene,LR31b)
¬Consumed(Irene, LR32a)
C1) before applying NonCoreRelatedResourceRule2
ConductedLearningActivity(Irene,A1)
MakeQuery(A1, Logical Express)
HasKeyword(Logical Express, C28)
IsPartOf(C28,C5)
¬Covered(Irene,C5)
IsMappedTo(C5,LR5a)
IsMappedTo(C5,LR5b)
IsMappedTo(C5,LR5c)
¬Consumed(Irene,LR5a)
¬Consumed(Irene, LR5c)
Inferred facts
A2) after applying SimilarResourceRule1
SimilarLR(A1, LR28a)
SimilarLR(A1, LR28b)
B2) after applying CoreResourceRule1
CoreLR(A1,LR31b)
CoreLR(A1,LR32a)
C2) after applying NonCoreRelatedResourceRule2
NonCoreRelatedLR(A1, LR5a)
NonCoreRelatedLR(A1, LR5c)
To illustrate the reasoning mechanism adopted in the learner-adaptation process, we use a real-life learning scenario based on the C++ Programming ontology shown in Figure 15. Let's assume that learner Irene wants to learn about "logic expressions" of the C++ programming language. This query matches keywords of the ontology concept C28, which describes "Logical_Expression", as shown in Figure 15. The reasoning engine is invoked by firing SimilarResourceRule1, which operates on facts (A1) and infers (A2), as shown in Rule-Set-8. The result of this inference is that learning resources LR28a and LR28b, which are mapped to concept C28, are recommended for activity A1, and facts (A2) are added to the knowledge base. It should be noted that concept C28 does not have any similar or prerequisite concepts in the C++ ontology; therefore, applying SimilarResourceRule2&3 and PrerequisiteResourceRule does not produce any useful results in this case. However, when CoreResourceRule1 is applied to facts (B1), concepts C29 (Boolean_data), C30 (Relational_Operators), C31 (Logical_Operators), and C32 (Operators_Precedence) are inferred as core knowledge that needs to be offered to the learner to fully understand the queried concept (i.e., Logical_Expression). The system also infers that concepts C29 and C30 have already been covered by Irene in previous studies, so the reasoner automatically eliminates them from the learning sequence. It further infers that learning resources (LR31a, LR31b) and LR32a are mapped to concepts C31 and C32 respectively, and that LR31b and LR32a have not been consumed by Irene. These resources are therefore prescribed to Irene, and facts (B2) are added to the knowledge base. Finally, NonCoreRelatedResourceRule2 is applied to facts (C1) to infer (C2), which states that concept C28 (Logical_Expression) is part of C5 (Selection), and that LR5a and LR5c, which correspond to concept C5, have not been consumed by Irene so far and can therefore be prescribed to her as non-core related knowledge. The application of the above-mentioned rules produces the learning sequence shown in Figure 16, where the type of each learning resource (e.g., core, non-core, prerequisite) is clearly indicated to the learner.
The learning path navigation algorithm, described below, is triggered once the learner starts interacting with the initially recommended learning sequence. The invocation of any learning resource leads to updating the list of consumed learning resources and generating a new sub-learning path associated with the newly invoked concept. The new path is then added to the global learning path, as shown in step 10 of the Learning_path_Navigation algorithm. The new sub-learning path is constructed by the learning_path_generation procedure, which first infers the core concepts associated with the newly invoked concept and appends their respective resources, and then, for each core concept, infers the resources needed to fulfill its prerequisite knowledge. The learning path thus grows dynamically as the learner invokes new concepts. However, when the learner makes a backward move, the corresponding sub-learning path is completely pruned from the global learning path. The learning session terminates when the learner either logs out of the system or consumes all prescribed learning resources.
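A compact illustrative sketch of this navigation behaviour is given below. It follows the description above but is not the actual Learning_path_Navigation listing; its step structure, helper names, and data structures are assumptions of the sketch.

```python
def learning_path_generation(concept, domain, resources, profile):
    """Sketch of the sub-path builder: core concepts of `concept` first,
    then prerequisite resources for each core concept (illustrative)."""
    sub_path = []
    for core in domain.get(concept, {}).get("HasNecessaryPart", []):
        sub_path += [r for r, _ in resources.get(core, [])
                     if r not in profile.consumed_resources]
        for prereq in domain.get(core, {}).get("HasPrerequisite", []):
            sub_path += [r for r, _ in resources.get(prereq, [])
                         if r not in profile.consumed_resources]
    return sub_path

def learning_path_navigation(initial_path, domain, resources, profile, next_event):
    """Sketch of the navigation loop: grow the path on forward moves, prune the
    corresponding sub-path on backward moves (illustrative). `next_event()` returns
    ('forward', concept, resource), ('backward', concept, None), or None on logout."""
    path, sub_paths = list(initial_path), {}
    while (event := next_event()) is not None:
        move, concept, resource = event
        if move == "forward":
            profile.consumed_resources.add(resource)
            sub_paths[concept] = learning_path_generation(concept, domain, resources, profile)
            path += sub_paths[concept]                      # cf. step 10 in the description
        else:  # backward move: prune the sub-path generated for that concept
            pruned = set(sub_paths.pop(concept, []))
            path = [r for r in path if r not in pruned]
        if not path:                                        # all prescribed resources consumed
            break
    return path
```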