Submitted to 5th Int. Workshop on SW Composition (SC 2006), March 2006, Vienna

Semantically Annotated Software Components

Peter Graubmann¹, Mikhail Roshchin²

¹ Siemens AG, Otto-Hahn-Ring 6, 81730 Munich, Germany
  [email protected]
² Siemens AG, Otto-Hahn-Ring 6, 81730 Munich, Germany
  Volgograd State Technical University, Volgograd, Russia
  [email protected]

Abstract: The aim of this contribution is to present concepts and to propose techniques and methodical support for automated software composition using "rich" semantic descriptions of components and services. Our approach is based upon a Component Description Reference Model (CDRM), for which both semantic description patterns and inference mechanisms are defined. These offer variability in expressiveness, reasoning power and the analysis depth required to identify component properties and qualities.

1 Introduction

To survive in the struggle for market share, the IT industry increasingly relies on compositional system and software development in order to remain cost effective. The major part of today's software systems usually consists of commodity software used by all competitors in the respective business domain. This kind of software is most favourably provided by specialized enterprises (for instance, SMEs as "third party providers") or by Open Source communities; the system provider (or integrator) itself has to concentrate on its core competence and develop the "distinguishing" parts of its products wherein the competitive advantage lies. Around this compositional approach, new software engineering methods and business opportunities have developed, based upon the idea of a quick and flexible composition of components from heterogeneous origins.

Thus, software components are the main focus of today's software engineering activities. Component-based systems are easier to understand, build, debug and maintain than the monolithic systems of earlier days. Middleware technologies associated with components (for instance, CORBA, COM, JavaBeans) as well as service-oriented approaches (like Service-oriented Architectures (SOA) or web services) provide standardized, off-the-shelf solutions for component interconnection, thereby moving software closer to plug-and-play systems and better supporting re-use within system families. Furthermore, component-based systems are more easily evolved to accommodate upcoming market needs; the technologies mentioned above in particular offer component models that support system evolution. As a result, components are being used in a growing number of projects.


The basic prerequisite for all compositional approaches to system and software development is that adequate components or services have to be discovered and retrieved from mostly rather heterogeneous "asset spaces" (which might be a closed system family component repository restricted to one company, a collection of commonality software spanning several companies, an Open Source portal, or the internet, where services of all kinds can be found). Suitable components have to be integrated with the already composed parts of the system; they have to be invoked, executed and probably monitored. All this – finding, integrating, running and maintaining – can only be done if there is enough information available about the component or service: its functional and non-functional properties, its behaviour, its intended business logic, its required or provided guarantees with respect to its qualities, its dependence on the environment, its runtime requirements, etc.¹ This is even more the case if an automated composition is envisaged. And since, in the large, the identification of composable components and their actual integration prove to be both complex and complicated, automated support seems inevitable to make system composition cost effective. We assume that all the information needed by an automated composition approach is made available by annotations to the components or services, which provide semantically sufficient description means that extend existing syntactic standards.

¹ In the following, we subsume all these various pieces of information around components and services as their "properties and qualities".

To summarize, the applications for which the approach presented here will mostly be utilized show the following characteristics:

• System integration is largely based upon components from heterogeneous origins. This is particularly true for web service composition (SOA), with its vision of integrating and exploiting the (best) suitable services that are ad hoc available in the local area at the time of the service request.

• Domain-specific components (for instance, for system families) have to serve in several configurations, often requiring the capability for interaction protocol variability. Recurring examples of configuration patterns in system families are (a) the replacement of a certain component in many slightly different configurations, or (b) the configuration of many slightly different products out of components with slightly different interfaces. This requires, besides precise information about the component properties and qualities, a thorough description of the component variability.

Explicit configuration and manual adaptation is tedious and very costly, and in many cases simply not feasible (see the SOA paradigm of ad hoc service composition). Thus, there is an urgent need for concepts and support for the automated configuration of components and web services with semantically incongruent but exhaustive descriptions of their functionalities, their properties and qualities, and the variability in their interaction protocols. In other words, a key issue for any success in component or web service composition is an appropriate formalism to express and relate component or service semantics.

Parallel to component-based software development and the web service paradigm with its SOA approach, the idea of the Semantic Web [6] evolved within the realm of the internet: the Semantic Web basically means to associate semantics with information on the web, which in turn can be used by intelligent applications that collect information from various independent internet sources and process and integrate this information automatically, despite the fact that the meaning of the information on different web sites is neither aligned directly nor mutually agreed upon in advance. The key concept here is to describe a selected domain with the help of an ontology, which essentially means to capture the domain knowledge and to characterize domain-specific relations and inference rules. The latest standard for defining an ontology – proposed by the W3C – is the Web Ontology Language (OWL) [7], based on XML syntax and the description logics formalism [8], which is employed to define the concepts necessary to accommodate automated inference techniques. By offering these mechanisms, the Semantic Web approach leads to much better results when it comes to comparing and relating information from different sources than, for instance, a simple keyword search, which is of little use when the amount of information is growing exponentially.

It turns out that Semantic Web techniques and the mechanisms needed for adequately describing and relating component and service semantics are quite similar. Hence, we chose to address the problem of annotating components and services for composition and co-operation in a heterogeneous environment by introducing a logical formalism for knowledge representation into the annotation process², together with ontologies which serve as the main part of the application and application domain description. This provides the possibility of automated reasoning about component and service properties (usable for analysis, consistency checking, information extraction and its representation in various forms specifically adapted to the needs of different users – which might be either humans or machines); it further supports search techniques by providing inference mechanisms that match pre- and post-conditions, inputs and outputs (with respect to syntax and given semantic specifications), and the semantics of properties/guarantees and requirements.

² We speak about an annotation process since the provision of the information necessary for component or service composition has to be done in a structured and well-planned way.

One of the formalisms attempting to provide the necessary functionality is OWL-S (formerly DAML-S) [9]. It is an ontology specifically tailored to services that supplies service providers with a core set of mark-up language constructs for describing the properties and capabilities of their services in an unambiguous, computer-interpretable form. Furthermore, OWL-S is capable of including domain-specific ontologies. Nevertheless, OWL-S turns out not to be the proper solution for representing service semantics since, for instance, it does not provide enough expressivity for describing quality properties and, furthermore, it is not aligned with the existing web service standards [10]. There are also a number of other approaches (like METEOR-S [12] and WSMF [13]) which propose different ways to describe services according to their different foci. Most of these approaches rely on different knowledge representation formalisms. Their usability for service and component description in our context depends to a large extent on their expressiveness and the associated inference mechanisms. This important question about the adequacy of semantics representation – with respect to complexity and expressivity – has so far not received the attention it deserves.

Furthermore, there is not yet a clear and common understanding of the proper way to describe software components independently from the services they provide, nor of exactly what information has to be provided. With respect to this last point, we strongly advocate in this contribution that, when annotating components, interfaces should not only be described statically but also be annotated with a behaviour description, including a description of the variability that results from environment variants and from other components involved in the respective business process scenarios.

Our approach is based on the idea that restricting the annotation process to a single semantic representation formalism is an unfortunate limitation of descriptive power (even if unneeded complexity has to be avoided at the same time). Providing flexibility in choosing the semantic mechanisms best suited for the given problem allows finding the right balance between expressivity and computational complexity. This consideration influenced our Component Description Reference Model (CDRM), presented in section 2.1, and prompted us to introduce our "Logic-on-Demand" concept, discussed in section 2.2. Section 2.3 introduces the "Triple Semantic Model" with its notion of dynamic annotations, which allows us to cope with semantic descriptions that change over time due to changing environments or time-dependent requirements. This concludes the presentation of our approach. Section 3 illustrates the introduced concepts with an example. Section 4 concludes the paper and gives an outlook on our future work on semantically annotated software components.

2 Approach

In order to support automated component and service composition, we aim at the development of methods and techniques for intelligent, machine-based component and service discovery, integration and interoperation adaptation. As a constraint, we assume that the software components and services belong to possibly heterogeneous application domains and are therefore far from being described in a way that allows their properties to be compared easily. The reasoning mechanisms that have to be in place in order to determine which components fit together, and how they have to be adapted if necessary, have to be adjusted to the requirements with respect to the expressiveness and decidability of the involved description formalisms and the required analytic depth of the inference engines. We developed the "Component Description Reference Model" (CDRM) to provide a proper annotation and reasoning process that determines the semantic description patterns and the related inference mechanisms that are to be used. To overcome the problems resulting from the use of – at first glance – rather incommensurable description means, within the CDRM we integrate:

• The annotation process, which provides a structured procedure for associating arbitrary information with components or services and their interfaces.

• Behavioural descriptions of the component or service interaction protocol – to describe the interaction protocols, we employ Message Sequence Charts (MSCs) and the cognate UML 2.0 Interactions, extended for this purpose with the connector concept (see [2]) and with means to express variability in the interactions between components and services.

• Knowledge-based techniques – here we employ (a) Semantic Web techniques (ontologies and semantic descriptions given by annotations), (b) adequate logic formalisms – adapted to the respective needs – in order to describe and reason about functional and non-functional properties, behaviour, requirements and guarantees, business rules and usage scenarios, and (c) inference engines for automated reasoning, acquisition and consistency checking.

In this paper, we focus on the Semantic Web approach, enriched by rule-based and modal logic formalisms, and on its integration into an annotation process. This integration means that the annotation process is extended by a logical formalism for knowledge representation and by ontologies as the main part of an application domain description, which gives us the means to identify semantically related annotations. The knowledge representation mechanisms and the ontologies also serve to support analysis, consistency checks, and the extraction of information and its re-representation in modified form (thus allowing the given information to be flexibly transformed into proper input for certain tools, if necessary).

2.1 Component Description Reference Model

The Component Description Reference Model (CDRM) encapsulates the basic concepts for annotating software components and web services with respect to compositionality. First, it is intended to unify company-based description models and therefore proffers annotation patterns. Second, it allows the mapping of component or web service descriptions from other companies or from different domains in order to make them comparable and accessible for component selection. The CDRM comprises three parts (see Fig. 1):

• the Semantics Model,
• the Ontologies, and
• the Logics and Inference Mechanisms.

The Semantics Model: The semantics model in the CDRM is built from a hierarchy of so-called Semantics Micro-Models (SMMs). Each SMM defines a cluster of related properties and qualities. The content of an SMM is characterised (1) by its domain or business-logical coherence and (2) by its capability to be expressed within the same (description) logic for knowledge representation (for instance, classical FOL, Description Logics, Modal Logics, etc.). The latter also includes the possibility to reason about and analyse it with the same inference mechanisms. Thus, the various SMMs allow variable use of semantic techniques. The SMMs form a lattice with respect to a consistency relation, which is the basis for combining the various description clusters for complex description tasks.

The Ontologies: Ontologies are used to create a shared understanding of application domain and development process knowledge that is crucial for component-based software development activities such as matching component or service properties in order to satisfy the composition requirements. The ontology associated with the CDRM is comprised of (a) a general ontology (which may be formed by any suitable collection of publicly available ontologies), (b) company-specific ontologies, and (c) the specific ontologies for the SMMs.

The Logics and Inference Mechanisms: The logics and inference mechanisms are the collection of the logics and the corresponding inference engines that are used by the various SMMs and their related ontologies. Thus, associated with each SMM there is a particular logic and a particular inference mechanism; some SMMs may, though, use the same logic and inference mechanism.

[Figure 1: overview diagram of the CDRM. The Semantics Model consists of several Semantics Micro-Models (each an ontology cluster of related semantics with a common requirement with respect to expressiveness), each used with a particular logic and inference engine. General, company-specific and SMM-specific ontologies define the terms used within the model. From the CDRM, a schema for semantic component/service description is derived for the annotation process; matchmaking between company-internal annotations is based on the CDRM mechanisms, while annotations from external component description models (company-external servers/clients) are matched by mapping them onto the CDRM.]

Fig. 1. The Component Description Reference Model (CDRM).
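To give a concrete, though purely illustrative, impression of what an annotation tied to the CDRM structure of Fig. 1 might look like, the following sketch (hypothetical Python; all identifiers, SMM names and property keys are our own assumptions, not prescribed by the CDRM) represents a component annotation as a record that names the SMM it belongs to and, hence, the logic and inference engine under which it has to be interpreted:

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

# Hypothetical sketch only: an annotation record tied to a Semantics Micro-Model (SMM).
# SMM names, logic identifiers and property keys are illustrative.

@dataclass
class Annotation:
    component: str                         # annotated component or service
    smm: str                               # Semantics Micro-Model this property cluster belongs to
    logic: str                             # logic used for this SMM, e.g. "DL" or "DL+Modal"
    properties: Dict[str, Any] = field(default_factory=dict)

annotations: List[Annotation] = [
    Annotation("CallingService", smm="FunctionalProperties", logic="DL",
               properties={"input": ["phoneNumber", "personName"], "output": ["Connection"]}),
    Annotation("CallingService", smm="QualityOfService", logic="DL+Modal",
               properties={"connectionQuality": ["lowQuality", "highQuality", "secureCall"]}),
]

def logics_required(annots: List[Annotation]) -> set:
    """The set of logics (and hence inference engines) needed to interpret these annotations."""
    return {a.logic for a in annots}

print(logics_required(annotations))        # e.g. {'DL', 'DL+Modal'}
```

A matchmaker could group such annotations by SMM and dispatch each cluster to the corresponding inference engine.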

From the CDRM, schemata are derived which are used in the annotation process. These schemata ensure a correct and adequate semantic description of the company-internal components. For the composition processes within the company, "matchmaking" is based upon these schemata and the directly corresponding mechanisms in the CDRM. If components from outside the company for which the CDRM is defined have to be incorporated, a semantic mapping of the component description onto the CDRM has to be performed. For this, generally available ontologies as well as publicly accessible domain-specific ontologies are used. The mapping mechanism also draws on the semantics models of the respective component and on the logic used for its description. The CDRM, as the reference base for component and web service descriptions, has to be modelled, as do the specific annotations of the components and web services. From this modelling point of view, we distinguish between:


The meta-modelling level: This level is concerned with defining the CDRM and includes (a) defining the semantics model – that is, defining the "universe" of description means for components and services (in the form of schemata, thereby identifying and collecting what kind of information about components and services is needed) – and (b) defining the ontologies tailored to the semantics model (or other forms of the respective model representations). These ontologies are used to relate annotations given in different formalisms and languages.

The modelling level: This level is concerned with annotating concrete components or web services. It takes the annotation schemata defined within the CDRM and consists of (static) annotations and dynamic annotations: (a) The (static) annotations represent concrete information about a component or web service, provided in the form given by an annotation schema from the CDRM. The information presented with annotations is readily usable; in particular, it can be used by inference engines for reasoning about component and service properties. (b) The dynamic annotations define a set of rules for deriving dynamically changing information whenever this information is needed (for instance, if the usage cost of a service is based on a currency other than the requestor's currency, it might be necessary to calculate the actual costs using the exchange rate valid at the time of the service request). The use of dynamic annotations is discussed in section 2.3, where the integration of (static) annotations, dynamic annotations and ontologies into the so-called Triple Semantic Model is described.

The adaptation level: In order to relate components or services that are not described according to the semantics model in the CDRM, it becomes necessary to define or identify ontologies that can be used for a translation of their annotations.

2.2 The Concept of Logic-on-Demand

Semantic modelling of components and services involves a large variety of information from different application domains and of various categories, such as names and definitions, behaviour rules, probability relations, and temporal properties. Thus, it seems obvious to choose the most expressive logical formalism capable of formulating and formalising all the needed information. Doing so, however, very likely results in severe decidability problems.

Several ontology languages have been developed, each aiming at solving particular aspects of ontology modelling. Some of them, such as RDF(S) [14], are simple languages offering only elementary support for expressing classes and properties for ontology modelling in the Semantic Web. There are other, more complex languages, firmly grounded in formal logic, that focus in particular on advanced inference capabilities in order to automatically derive facts not explicitly present in the model. An example is F-logic [15], which offers ontology modelling through an object-oriented extension of Horn logic. Then again, several description logic languages (such as OWL [7]) are deliberately restricted to a carefully selected subset of first-order logic with the intention of allowing decidable and complete inference procedures. Each of these logic formalisms is particularly dedicated to representing and expressing specific entities and features: description logics, for instance, describe classes and notions, F-logic describes objects and rules, and modal logics allow expressing and assigning features of a modal (possibility and necessity), temporal (liveness and safety) or probabilistic (degrees of likelihood) kind.

[Figure 2: the Logic-on-Demand stack. XML, RDF and OWL – each with its parser – form the base for semantic modelling; on top of them, Description Logic, Horn clause rules and modal logic are handled by inference engines such as Racer, FaCT, Hoolet and Vampire; applications and users sit on top, with XML programming and logic programming as the adjoining programming layers.]

Fig. 2. The Logic-on-Demand Concept.

Very often, logic-based approaches focus primarily on the expressivity of the model, thereby neglecting the impact that expressivity has on the performance of reasoning algorithms and on the ease of integrating the logic formalism into industrial applications. Due to these problems, there have so far not been many successful applications of ontology-based techniques in industry [4]. Our approach, based on the concept of Logic-on-Demand (LoD), is intended to overcome these problems by adapting the expressivity of the employed ontology languages to the varying needs and requirements, in particular with respect to decidability³.

The main purpose of the LoD concept is to provide an adequate and adaptive way, based on uniform principles, of describing all the notions, relations and rules, the behaviour, and anything else that proves necessary during the component or service annotation process. To achieve this, LoD means defining a basic logical formalism that is adequate for and tailored to the application domain, and incorporating additional logic formalisms and description techniques with further expressivity as optional features that can be used whenever needed. These additional formalisms share notions and terms with the basic formalism, which is grounded syntactically in OWL and semantically in description logic. Description logic (DL) consists of concepts (classes), logical definitions of concepts, properties (roles) of concepts, and instances of them. In first-order logic, classes can be understood as unary predicates, and properties as binary predicates (for instance, a subsumption C ⊆ D corresponds to the first-order formula ∀x (C(x) → D(x))).

³ Decidability comprises soundness and completeness: positive answers are always correct, and all positive answers are found.
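To make the Logic-on-Demand idea more tangible, the following sketch (hypothetical Python; the feature names and the granularity of the formalisms are our own assumptions, and the reasoner assignment merely echoes Fig. 2) selects the least expressive formalism, together with a reasoner known for it, that covers the description features an annotation actually needs:

```python
# Hypothetical Logic-on-Demand sketch: choose the least expressive formalism (and a
# reasoner known to handle it) that covers the description features actually needed.
# Feature names are illustrative; the reasoner names echo those in Fig. 2.

FORMALISMS = [
    # ordered from least to most expressive
    ("RDF(S)",            {"terms", "hierarchy"},                                   []),
    ("Description Logic", {"terms", "hierarchy", "definitions"},                    ["Racer", "FaCT"]),
    ("DL + SWRL rules",   {"terms", "hierarchy", "definitions", "rules"},           ["Hoolet", "Vampire"]),
    ("DL + modal logic",  {"terms", "hierarchy", "definitions", "rules",
                           "modalities", "probability"},                            ["FaCT (modal)"]),
]

def logic_on_demand(required_features: set):
    """Return the first (weakest) formalism whose feature set covers the requirement."""
    for name, supported, reasoners in FORMALISMS:
        if required_features <= supported:
            return name, reasoners
    raise ValueError(f"no registered formalism covers {required_features}")

print(logic_on_demand({"terms", "hierarchy"}))       # ('RDF(S)', [])
print(logic_on_demand({"definitions", "rules"}))     # ('DL + SWRL rules', ['Hoolet', 'Vampire'])
```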


DL provides us with basic features for the modelling of software components and services. It is sufficient for defining a terminology, hierarchical structures of terms, and definitions of concepts and their properties through terms, and it plays an essential role in our modelling framework. However, the expressivity of DL is not sufficient to describe, for instance, implication rules, modalities and probabilities, which are needed for proper reasoning in service or component composition; therefore, we propose to extend DL with these description means.

Once we have obtained the appropriate syntactic and semantic descriptions of a software component/service and its functionalities, we further have to add dynamic characteristics with respect to the concrete values of the involved notions and terms, for instance, to express behaviour variability in response to different situations in heterogeneous environments. To cope with this issue, rule-based techniques have to be employed. So, we first of all add rule-based mechanisms to DL: we extend the DL notions with an implication operator applied to concepts, their instances, and values. We thereby follow an approach described in [4] using the Semantic Web Rule Language (SWRL) [16], which extends the set of OWL axioms with the possibility to express rules. SWRL rules are given in the form of an implication between conditions and consequences.

Traditional techniques and methods for software component description do not provide sufficient expressivity to describe non-functional properties and requirements, which usually rest on real-time and probability constraints. To express these, and to integrate them into the reasoning process, we have to resort to modal logic. Also, when talking about behaviour and quality properties, we need to take into account the temporal and probabilistic character of that knowledge, for which we need to define modal logic constraints. Therefore, we suggest extending the notions of DL with modalities and probability functions.

Our approach is based on the firm belief that current standards for software component description and specification profit from an extension with semantics. We propose to support current techniques instead of devising new semantic models. By support we mean interface and component property description techniques that help to automatically generate new services or software applications out of a given set of available components and services through their interaction and interoperation. With this end in mind, we propose our Logic-on-Demand approach: structured according to the schema in Fig. 2, existing component or service description techniques are enriched with semantics. The focal point is the structured annotation process.

2.3 Triple Semantic Model

The purpose of the Triple Semantic Model is to define a distributed computing model for software components and services, and to provide mechanisms to distinguish between the different entities represented within that model. It consists of three levels:

• the Ontology Level,
• the Dynamic Annotation Level, and
• the Annotation Level.



[Figure 3: the Triple Semantic Model. On the meta-model level, the Ontology combines description logic (terminology, structure, relations, definitions), rule-based logic (rules, processes, constraints, definitions) and modal logic (probability, temporality, fuzziness, uncertainty); on the model level, the Dynamic Annotation and the (static) Annotation are based on description logic.]

Fig. 3. The Triple Semantic Model.

The ontologies on the Ontology Level are intended to provide a general framework, in most cases based on a specific application domain⁴, to describe any kind of software component or service belonging to this domain. Since ontologies enforce proper definitions of the concepts in the application domain, they also play an essential role in standardising the definitions of component or service properties, requirements and interfaces with respect to their domain. Our central goal, the automated discovery of components and services appropriate for a given composition task, is closely connected with the ontology level. Searching for an appropriate component or service means that certain concepts (component properties) have to be checked as to whether or not they fit the given query parameters. If several domains are involved, only ontology mapping mechanisms can help to identify a match between query parameters and component properties. To achieve this domain-spanning approach, it becomes necessary to relate the domain-specific notions. To provide for this, knowledge engineers have to define component or service properties – in particular also dynamic properties like run-time scenarios, behaviour patterns, or the like – on the ontology level. For these definitions, description logic with the extensions presented in the previous section has to be utilized, whereby the principles of our LoD concept have to be taken into account.

⁴ Ontologies, however, may also span several domains or be defined for even more general purposes.


On the Ontology Level, basic notions, information and knowledge are defined which hold independently of the actual circumstances, the situation in the environment, or the actual time. However, such dependencies on actual, dynamically changing circumstances do have an important influence in the compositional approach. Hence, rules determining how to cope with this dynamicity have to be provided if it is to be included in the reasoning. They are specified on the Dynamic Annotation Level: dynamic annotations play the role of mediators between the ontology and the static semantic annotations that describe the component or service properties and qualities, and in particular its requirements with respect to composition. As an example, consider the delay time characteristics of a service, which depend on the platform it will be executed on: the platform dependency relevant for the delay time of a particular service has to be specified on the level of dynamic annotations. Concrete values can then be derived once the circumstances of a concrete execution become known. The Annotation Level comprises the static descriptions of the properties and qualities of components or services. A brief sketch of the component or service composition process according to the Triple Semantic Model comprises the following steps:

• Requirements on a component or service to be integrated into the system are collected. They serve as selection criteria when candidates are checked.

• The dynamic annotation and the (static) annotation of the candidate component/service are used to create an annotation that is valid in the given situation and at the given time.

• This annotation is analysed and compared with the initial requirements.

• If the result shows that the component fits, it may be integrated (which may include the generation of data transformations in order to adapt the interfaces).

The most basic and technically inevitable description of software components or services is given through standard interface description languages like WSDL, CORBA IDL, etc. These languages specify input and output information merely on a syntactic level but give no clue about the behaviour and qualities of the component/service. Thus, we do not have any facts with which to reason about the component or service properties. To allow for that, additional information has to be added, and this information is collected according to the Triple Semantic Model. Software engineers and system developers have to define their specific view on the concrete component/service, and they naturally formulate this information in the terminology of the domain or system family to which the component/service belongs. If the annotation is done properly, we have complete information about the component/service properties. Due to the Logic-on-Demand concept, this information is available not only to developers but is also presented in a form that is readable by automated acquisition and adaptation tools, and thus it allows reasoning and the derivation of additional information.
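As an illustration of how a dynamic annotation might be realised (a minimal sketch under our own assumptions; the value names and figures are invented), the exchange-rate and delay-time examples can be read as rules that are evaluated only once the request context is known:

```python
# Hypothetical sketch of a dynamic annotation: static facts plus rules that are only
# evaluated once the request context (platform, exchange rate, ...) is known.
# All names and figures are illustrative.

static_annotation = {
    "pricePerMinute": 0.05,                               # advertised price
    "priceCurrency": "EUR",
    "delayTimeMs": {"platformA": 20, "platformB": 45},    # platform-dependent delay figures
}

dynamic_rules = {
    # derive the price in the requestor's currency at request time
    "priceForRequestor": lambda ann, ctx: round(ann["pricePerMinute"] * ctx["exchangeRate"], 4),
    # derive the delay time for the platform the service will actually run on
    "expectedDelayMs":   lambda ann, ctx: ann["delayTimeMs"][ctx["platform"]],
}

def situation_specific_annotation(annotation, rules, context):
    """Combine the static annotation with dynamically derived facts for one concrete request."""
    derived = {name: rule(annotation, context) for name, rule in rules.items()}
    return {**annotation, **derived}

context = {"exchangeRate": 1.08, "platform": "platformB"}  # only known at request time
print(situation_specific_annotation(static_annotation, dynamic_rules, context))
```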



3 Example: A Calling Service

The example demonstrates how component or service description is done according to our semantic modelling approach. We chose a service named CallingService, which is expected to provide its users with stable, reasonably priced phone connections. It is assumed to be accessible via a service interface. The CallingService belongs to the class of so-called least-cost routing services. The user plays the role of a service requestor. The CallingService requires as input either a phone number or the name of the person to be called. In addition, the user may optionally classify a call as "urgent", "cheapest", or "confidential", which represents non-functional requirements and has to match certain service qualities. The CallingService, for its part, provides additional information about the quality of the connection – low or high quality, and secure connection – and the type of the phone number which was dialled, that is, a home, office, or mobile number.

After getting a request from a user, the CallingService interoperates with an information service called PrefixTable to identify a low-price provider and its prefix numbers – this of course depends upon the locations of origin and destination of the call, and it also has to take into account whether a fixed network or a mobile phone is being used. On getting back a seemingly correct prefix number from PrefixTable, CallingService accesses the respective provider to ensure that the provider's actual offer still conforms to the advertisement retrieved from PrefixTable. The CallingService may also request additional information from the provider, for instance on connection quality, or it may ask for the trustworthiness of prefix number providers to be specified. Analysing the non-functional requirements of the user, CallingService tries to establish the connection with the help of the provider's prefix number. If this fails, or if not using the prefix number appears preferable due to a mismatch of price or non-functional requirements, then CallingService tries to establish the connection by repeating the whole procedure with a different PrefixTable service and different prefix numbers.

[Figure 4: class diagram of the ontology. The class Requirement is related via specifiedBy to CallType (instances: cheapestCall, confidentialCall, urgentCall); the class Connection is related via hasQuality to ConnectionQuality (lowQuality, highQuality, secureCall) and via callsTo to NumberType (officeNumber, homeNumber, mobileNumber).]

Fig. 4. Requirement and Connection Ontology.

So far, this is the procedure followed by the CallingService to provide the users with the cheapest and most reliable connections according to their requirements. The benefit of the service is obvious: it enables users to phone without worrying about price and connection quality. It can be used for phones in fixed networks or be embedded into mobile devices.
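The control flow just described can be summarised in pseudocode; the following sketch is merely our reading of the procedure, with hypothetical interfaces for the PrefixTable services, the providers and the matchmaking check (none of these interfaces are defined in the paper):

```python
# Hypothetical sketch of the CallingService procedure described above. The functions
# query_prefix_table, ask_provider, satisfies and try_connect stand for service calls
# and checks whose concrete interfaces are not defined here.

def calling_service(number, requirements, prefix_tables,
                    query_prefix_table, ask_provider, satisfies, try_connect):
    """Try prefix numbers from successive PrefixTable services until a connection
    satisfying the user's (non-)functional requirements can be established."""
    for table in prefix_tables:
        for offer in query_prefix_table(table, number):     # candidate low-price prefixes
            confirmed = ask_provider(offer)                  # re-check the advertised offer
            if confirmed is None:
                continue                                     # advertisement no longer valid
            if not satisfies(confirmed, requirements):       # price or quality mismatch
                continue
            connection = try_connect(confirmed, number)
            if connection is not None:
                return connection                            # success
    return None                                              # no suitable connection found
```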


The ontology shown in Fig. 4 is used to provide the major terminology and the definitions of these terms (for our example, we use the syntax of Description Logic [8]).

Requirement ⊆ ∃ specifiedBy(CallType) .                                        (S1)
Connection ⊆ ∃ hasQuality(ConnectionQuality) AND ∃ callsTo(NumberType) .       (S2)

Statement (S1) means that a requirement – represented by the class Requirement – has to be specified at least once by a call type, which is defined through the class CallType. Statement (S2) formulates that Connection, which is a response from the CallingService, has to be defined by indicating a connection quality and the type of the dialled number (that is, ConnectionQuality and NumberType, respectively). As Fig. 4 shows, the role names specifiedBy, hasQuality and callsTo determine the new classes ∃ specifiedBy(CallType), ∃ hasQuality(ConnectionQuality), and ∃ callsTo(NumberType), respectively. Requirement and Connection are defined as subclasses of these classes according to statements (S1) and (S2). The instances of the classes CallType, ConnectionQuality, and NumberType are given as follows:

{cheapestCall, confidentialCall, urgentCall} : CallType .                      (S3)
{lowQuality, highQuality, secureCall} : ConnectionQuality .                    (S4)
{officeNumber, homeNumber, mobileNumber} : NumberType .                        (S5)

Additionally, we specify that a secure connection cannot be of low quality, which is formalized by:

∃ hasQuality(lowQuality) AND ∃ hasQuality(secureCall) ⊆ FALSE .                (S6)

The problem here, however, is that the user request and the response from the CallingService are formulated in incongruent terms. The user requests a confidential or urgent connection, whereas CallingService talks in terms of secure, high and low quality connections. So what does "confidential", as specified in the input for the CallingService, actually mean? To solve this question, the composition system has to resort to the ontologies and the annotation semantics, which represent the semantic meta-models and the rules for matching. Requirements and connections are compatible if their conjunction as classes on the ontology level is satisfiable:

NOT (Requirement AND Connection ⊆ FALSE) .                                     (S7)

With rule-based methods using SWRL [16], we define the matching conditions that allow the mapping of the user requirements onto connections provided by the CallingService (please note that "?requirement", "?call", etc., are variables):

Requirement(?requirement) AND specifiedBy(?requirement, urgentCall) AND
Connection(?call) AND callsTo(?call, mobileNumber) AND
hasQuality(?call, highQuality)
→ matches(?call, ?requirement) .                                               (S8)

Requirement(?requirement) AND specifiedBy(?requirement, confidentialCall) AND
Connection(?call) AND callsTo(?call, (mobileNumber OR officeNumber)) AND
hasQuality(?call, secureCall)
→ matches(?call, ?requirement) .                                               (S9)
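To indicate how a matchmaker might evaluate such rules against concrete annotations, the following sketch re-expresses (S8) and (S9) in plain Python over simple fact dictionaries; it is a didactic stand-in for handing the SWRL rules to an inference engine, not an implementation of one. The discussion of the rules continues below.

```python
# Hypothetical sketch: rules (S8) and (S9) re-expressed as a Python predicate over
# simple fact dictionaries -- a didactic stand-in for an SWRL inference engine.

def matches(call, requirement):
    """True if the offered connection matches the user requirement (cf. S8/S9)."""
    # (S8): an "urgent" requirement matches a high-quality connection to a mobile number
    if (requirement["specifiedBy"] == "urgentCall"
            and call["callsTo"] == "mobileNumber"
            and call["hasQuality"] == "highQuality"):
        return True
    # (S9): a "confidential" requirement matches a secure connection to a mobile or office number
    if (requirement["specifiedBy"] == "confidentialCall"
            and call["callsTo"] in ("mobileNumber", "officeNumber")
            and call["hasQuality"] == "secureCall"):
        return True
    return False

print(matches({"callsTo": "officeNumber", "hasQuality": "secureCall"},
              {"specifiedBy": "confidentialCall"}))          # True
print(matches({"callsTo": "homeNumber", "hasQuality": "highQuality"},
              {"specifiedBy": "urgentCall"}))                # False
```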

Statements (S8) and (S9) provide explicit mapping rules that allow connection requirements to be compared with responses from the CallingService. Statement (S8) describes under which condition the requirement "urgent call" from a user and the properties of the connection provided by the CallingService match: there has to be a connection to a "mobile number" with "high connection quality". In the case of confidentiality, statement (S9) requires a secure connection to a mobile or office number. In the same manner, a matching rule for "cheapest call" can be defined; there, one would probably specify that a connection that is to be "cheapest" cannot be established with a mobile phone number, since calls to mobile phones are usually far more expensive than those in fixed networks.

There is a strong motivation to additionally use modal logic for semantic knowledge representation in our example: we need to add and clarify the notion of trust in responses coming from the PrefixTable and the prefix number service providers. The idea behind the concept of trust is, for instance, to reason about the propagation of trustworthiness among communicating services. If information from a non-trusted service is conveyed through a trusted service, is it then assumed to be trustworthy or not? The consequences of information from a non-trusted service might be quite different from those derived from information provided by a trusted one. A particular implication would be how to judge (with respect to trustworthiness) succeeding pieces of information: are they correct and to be trusted, or are they possibly not to be trusted at all?

To formalize this kind of knowledge and to present it in machine-processable form, we use classical modal logic with the necessity and possibility operators, denoted as □ and ◊, respectively. Thus, we express a form of doubt for our scenario on a meta-information level. First, we define the role "isTrusted" for PrefixTable, which allows us to express that this service might not be trusted by a certain user for some reason, for instance, because it has sent inconsistent information in the past. Second, we define the role "hasTrust" for the PrefixService class, expressing whether or not a particular user trusts the reliability of the PrefixService responses. For example, a certain user may not trust a particular PrefixService because he has already had the experience of low quality connections or incorrect price regulations.

PrefixTable(?table) AND isTrusted(?table, NO) AND
PrefixService(?prefix) AND isReceivedFrom(?prefix, ?table)
→ ◊ hasTrust(?prefix, NO) .                                                    (S10)

Statement (S10) means that the service behind a prefix number that was received from a non-trusted PrefixTable service is very probably not to be trusted either. This information can be used by the CallingService to reject this prefix number for establishing a connection. If, in addition, the prefix from the PrefixTable had previously been specified as trusted, then we have:

prefix = ◊ ∃ hasTrust.NO AND ∃ hasTrust.YES .                                  (S11)

The interpretation of statement (S11) is clear – we cannot trust that particular prefix and its service. It should be mentioned that modal logic has far wider uses for expressing qualities of services, for instance for the specification of delay times, pricing information and performance regulations. We are currently working on the proper use and adequate formalisation of such characteristics.

Above, we presented step by step how to annotate the CallingService and its supporting services semantically with the information necessary to allow automated composition. The annotations themselves are supported by Semantic Web techniques (ontologies), rule-based techniques and modal logic, using the adequate formalisms whenever needed. The example indicates the types of information and knowledge for which specific formalisms are needed. It also shows paradigmatically how the different formalisms may be presented and which reasoning steps are necessary to decide about the service composition.
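To round off the example, a hedged sketch of how the trust statements (S10) and (S11) might be operationalised: plain Python again serves as a didactic stand-in for a modal reasoner, and all data values are invented.

```python
# Hypothetical sketch of the trust reasoning behind (S10)/(S11): a prefix received
# from a non-trusted PrefixTable is possibly not trustworthy (S10); a prefix that is
# both asserted trusted and possibly not trusted is rejected (S11). A real system
# would leave this to a modal reasoner.

def possibly_not_trusted(prefix, received_from, table_is_trusted):
    """(S10): the prefix is possibly not trusted if its PrefixTable is not trusted."""
    return not table_is_trusted.get(received_from[prefix], False)

def reject_prefix(prefix, asserted_trusted, received_from, table_is_trusted):
    """(S11): reject a prefix asserted as trusted that is nevertheless possibly not trusted."""
    return (asserted_trusted.get(prefix, False)
            and possibly_not_trusted(prefix, received_from, table_is_trusted))

table_is_trusted = {"PrefixTableA": False}     # invented example data
received_from = {"010xx": "PrefixTableA"}
asserted_trusted = {"010xx": True}
print(reject_prefix("010xx", asserted_trusted, received_from, table_is_trusted))   # True
```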

4 Conclusion and Outlook

The central objective of the approach proposed above is to support the process of reusing software components in a plug-and-play fashion. The semantic modelling techniques shown in this paper can be applied to many scenarios in the realm of component and service composition. With these techniques, we have presented methods to overcome the problem of providing adequate and semantically sufficiently rich component and service descriptions that are appropriate for both human and machine interpretation and are thus ready to be used within an automated composition and utilization process. The limitation of current description techniques with respect to component and service semantics is still a major drawback for automated composition mechanisms – in the field of service-oriented architectures (SOA), where loose coupling and ad hoc selection and composition are the main problems, as well as in system family engineering (SFE), where a more static composition procedure may be feasible but frequent product configurations call for automation, and in Ambient Intelligence technology (AmI), with its need for flexible runtime configuration. In these and many more IT areas, automated composition and the provision of sufficient semantics become a must if products are to be produced cost efficiently and with short time-to-market.

Our approach proposes an annotation process and its semantic extension through knowledge-based techniques as the basis for semantic modelling. The Component Description Reference Model structures the annotation process and introduces flexibility with respect to the description mechanisms, which allows for a trade-off between expressivity and complexity and for the selection of the appropriate reasoning tools (section 2.1). It is based on the Logic-on-Demand concept, which aims at a proper compromise between existing semantic approaches and proposes a hybrid knowledge-based solution for annotating software components (section 2.2).


The Triple Semantic Model introduces the notion of dynamic annotation, which allows us to cope with semantic descriptions that change over time due to changing environments or time-dependent requirements; it thus provides a further abstraction layer on top of the common modelling principles for the properties and qualities of components and services (section 2.3).

There are, however, still open questions. We continue to work on the automatic mapping of different ontologies from heterogeneous environments and knowledge application domains, on the integration of different logic formalisms for component and service description, and on the mutual adaptation of problem solvers based on different logics and inference algorithms, to name but a few of the themes to be tackled in the future. We will also particularly focus on tool support for the proposed techniques in order to demonstrate the expected benefits, and we will later integrate the techniques into a software development environment.

References

1. Peter Graubmann, Evelyn Pfeuffer, Mikhail Roshchin: Web Services Annotation and Reasoning. Position paper, W3C Workshop on Frameworks for Semantics in Web Services, Innsbruck, June 2005.
2. Peter Graubmann: Describing Interactions between MSC Components – The MSC Connectors. In Reed (ed.): Computer Networks, Special Edition, ITU-T System Design Languages (SDL), 42(3):323-342, 2003.
3. Jack Greenfield, Keith Short: Software Factories. Wiley Publishing, 2004.
4. Munindar P. Singh, Michael N. Huhns: Service-Oriented Computing: Semantics, Processes, Agents. John Wiley & Sons, Ltd, 2005.
5. Claus Pahl: Ontologies for Semantic Web Components. ERCIM News No. 51, Oct. 2002. http://www.ercim.org/publication/Ercim_News/enw51/pahl.html
6. Semantic Web, http://www.w3.org/2001/sw/
7. OWL, http://www.w3.org/TR/owl-features/
8. Description Logic, http://dl.kr.org/
9. OWL-S, http://www.daml.org/services/owl-s/1.0/
10. Web Service Standards, http://www.w3.org/2002/ws/
11. Semantic Web Services Initiative (SWSI), http://www.swsi.org/
12. METEOR-S, http://lsdis.cs.uga.edu/Projects/METEOR-S/
13. WSMF, http://wsmf.org/
14. RDF(S), http://www.w3.org/TR/rdf-schema/
15. F-Logic, http://www.ontoprise.de/content/e5/e190/e191/tutorial_flogic_ger.pdf
16. SWRL, http://www.daml.org/2003/11/swrl/

