

Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, University of Dortmund, Germany
Madhu Sudan, Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos, New York University, NY, USA
Doug Tygar, University of California, Berkeley, CA, USA
Moshe Y. Vardi, Rice University, Houston, TX, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany

3100

James F. Peters, Andrzej Skowron, Jerzy W. Grzymała-Busse, Bożena Kostek, Roman W. Świniarski, Marcin S. Szczuka (Eds.)

Transactions on Rough Sets I


Editors-in-Chief

James F. Peters
University of Manitoba, Department of Electrical and Computer Engineering
Winnipeg, Manitoba R3T 5V6, Canada
E-mail: [email protected]

Andrzej Skowron
University of Warsaw, Institute of Mathematics
Banacha 2, 02-097 Warsaw, Poland
E-mail: [email protected]

Volume Editors

Jerzy W. Grzymała-Busse
University of Kansas, Department of Electrical Engineering and Computer Science
3014 Eaton Hall, 1520 W. 15th St., #2001, Lawrence, KS 66045-7621, USA
E-mail: [email protected]

Bożena Kostek
Gdansk University of Technology, Faculty of Electronics, Telecommunications and Informatics
Multimedia Systems Department, Narutowicza 11/12, 80-952 Gdansk, Poland
E-mail: [email protected]

Roman W. Świniarski
San Diego State University, Department of Computer Science
5500 Campanile Drive, San Diego, CA 92182-7720, USA
E-mail: [email protected]

Marcin S. Szczuka
Warsaw University, Institute of Mathematics
Banacha 2, 02-097 Warsaw, Poland
E-mail: [email protected]

Library of Congress Control Number: 2004108444
CR Subject Classification (1998): F.4.1, F.1, I.2, H.2.8, I.5.1, I.4
ISSN 0302-9743
ISBN 3-540-22374-6 Springer-Verlag Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

Springer-Verlag is a part of Springer Science+Business Media
springeronline.com

© Springer-Verlag Berlin Heidelberg 2004
Printed in Germany

Typesetting: Camera-ready by author, data conversion by Olgun Computergrafik
Printed on acid-free paper
SPIN: 11011873 06/3142 5 4 3 2 1 0

Preface

We would like to present, with great pleasure, the first volume of a new journal, Transactions on Rough Sets. This journal, part of the new journal subline in the Springer-Verlag series Lecture Notes in Computer Science, is devoted to the entire spectrum of rough set related issues, starting from logical and mathematical foundations of rough sets, through all aspects of rough set theory and its applications, data mining, knowledge discovery and intelligent information processing, to relations between rough sets and other approaches to uncertainty, vagueness, and incompleteness, such as fuzzy sets, theory of evidence, etc.

The first, pioneering papers on rough sets, written by the originator of the idea, Professor Zdzislaw Pawlak, were published in the early 1980s. We are proud to dedicate this volume to our mentor, Professor Zdzislaw Pawlak, who kindly enriched this volume with his contribution on the philosophical, logical, and mathematical foundations of rough set theory. In his paper Professor Pawlak presents once again the underlying ideas of rough set theory as well as its relations with Bayes' theorem, conflict analysis, flow graphs, decision networks, and decision rules.

After the overview and introductory article written by Professor Pawlak, the ten following papers represent and focus on rough set theory-related areas. Some papers provide an extension of rough set theory towards analysis of very large data sets, real data tables, data sets with missing values and rough non-deterministic information. Other theory-based papers deal with variable precision fuzzy-rough sets, consistency measures for conflict profiles, and layered learning for concept synthesis. In addition, a paper on generalization of rough sets and rule extraction provides two different interpretations of rough sets. The last paper of this group addresses a partition model of granular computing.

Other topics with a more application-oriented view are covered by the following eight articles of this first volume of Transactions on Rough Sets. They can be categorized into the following groups:
– music processing,
– rough set theory applied to software design models and inductive learning programming,
– environmental engineering models,
– medical data processing,
– pattern recognition and classification.

These papers exemplify analysis and exploration of complex data sets from various domains. They provide useful insight into the analyzed problems, showing for example how to compute decision rules from incomplete data. We believe that readers of this volume will better appreciate rough set theory-related trends after reading the case studies.


Many scientists and institutions have contributed to the creation and the success of the rough set community. We are very thankful to everybody within the International Rough Set Society who supported the idea of creating a new LNCS journal subline – the Transactions on Rough Sets. It would not have been possible without Professors Peters' and Skowron's invaluable initiative, and thus we are especially grateful to them. We believe that this very first issue will be followed by many others, reporting new developments in the rough set domain. This issue would not have been possible without the great efforts of many anonymously acting reviewers. Here, we would like to express our sincere thanks to all of them. Finally, we would like to express our gratitude to the LNCS editorial staff of Springer-Verlag, in particular Alfred Hofmann, Ursula Barth and Christine Günther, who supported us in a very professional way.

Throughout the preparation of this volume the Editors have been supported by various research programs and funds: Jerzy Grzymala-Busse has been supported by NSF award 9972843; Bożena Kostek has been supported by grant 4T11D01422 from the Polish Ministry for Scientific Research and Information Technology; Roman Świniarski has received support from the "Adaptive Data Mining and Knowledge Discovery Methods for Distributed Data" grant, awarded by Lockheed-Martin; and Marcin Szczuka and Roman Świniarski have been supported by grant 3T11C00226 from the Polish Ministry for Scientific Research and Information Technology.

April 2004

Jerzy W. Grzymala-Busse, Bożena Kostek, Roman Świniarski, Marcin Szczuka

LNCS Transactions on Rough Sets

This journal subline has as its principal aim the fostering of professional exchanges between scientists and practitioners who are interested in the foundations and applications of rough sets. Topics include foundations and applications of rough sets as well as foundations and applications of hybrid methods combining rough sets with other approaches important for the development of intelligent systems. The journal includes high-quality research articles accepted for publication on the basis of thorough peer reviews. Dissertations and monographs up to 250 pages that include new research results can also be considered as regular papers. Extended and revised versions of selected papers from conferences can also be included in regular or special issues of the journal.

Honorary Editor: Zdzislaw Pawlak

Editors-in-Chief: James F. Peters, Andrzej Skowron

Editorial Board
M. Beynon, G. Cattaneo, A. Czyżewski, J.S. Deogun, D. Dubois, I. Duentsch, S. Greco, J.W. Grzymala-Busse, M. Inuiguchi, J. Järvinen, D. Kim, J. Komorowski, C.J. Liau, T.Y. Lin, E. Menasalvas, M. Moshkov, T. Murai, M. do C. Nicoletti, H.S. Nguyen, S.K. Pal, L. Polkowski, H. Prade, S. Ramanna, R. Słowiński, J. Stepaniuk, R. Świniarski, Z. Suraj, M. Szczuka, S. Tsumoto, G. Wang, Y. Yao, N. Zhong, W. Ziarko

Table of Contents

Rough Sets – Introduction

Some Issues on Rough Sets ..... 1
Zdzislaw Pawlak

Rough Sets – Theory

Learning Rules from Very Large Databases Using Rough Multisets ..... 59
Chien-Chung Chan

Data with Missing Attribute Values: Generalization of Indiscernibility Relation and Rule Induction ..... 78
Jerzy W. Grzymala-Busse

Generalizations of Rough Sets and Rule Extraction ..... 96
Masahiro Inuiguchi

Towards Scalable Algorithms for Discovering Rough Set Reducts ..... 120
Marzena Kryszkiewicz and Katarzyna Cichoń

Variable Precision Fuzzy Rough Sets ..... 144
Alicja Mieszkowicz-Rolka and Leszek Rolka

Greedy Algorithm of Decision Tree Construction for Real Data Tables ..... 161
Mikhail Ju. Moshkov

Consistency Measures for Conflict Profiles ..... 169
Ngoc Thanh Nguyen and Michal Malowiecki

Layered Learning for Concept Synthesis ..... 187
Sinh Hoa Nguyen, Jan Bazan, Andrzej Skowron, and Hung Son Nguyen

Basic Algorithms and Tools for Rough Non-deterministic Information Analysis ..... 209
Hiroshi Sakai and Akimichi Okuma

A Partition Model of Granular Computing ..... 232
Yiyu Yao

Rough Sets – Applications

Musical Phrase Representation and Recognition by Means of Neural Networks and Rough Sets ..... 254
Andrzej Czyzewski, Marek Szczerba, and Bozena Kostek


Processing of Musical Metadata Employing Pawlak's Flow Graphs ..... 279
Bozena Kostek and Andrzej Czyzewski

Data Decomposition and Decision Rule Joining for Classification of Data with Missing Values ..... 299
Rafal Latkowski and Michal Mikolajczyk

Rough Sets and Relational Learning ..... 321
R.S. Milton, V. Uma Maheswari, and Arul Siromoney

Approximation Space for Software Models ..... 338
James F. Peters and Sheela Ramanna

Application of Rough Sets to Environmental Engineering Models ..... 356
Robert H. Warren, Julia A. Johnson, and Gordon H. Huang

Rough Set Theory and Decision Rules in Data Analysis of Breast Cancer Patients ..... 375
Jerzy Zaluski, Renata Szoszkiewicz, Jerzy Krysiński, and Jerzy Stefanowski

Independent Component Analysis, Principal Component Analysis and Rough Sets in Face Recognition ..... 392
Roman W. Świniarski and Andrzej Skowron

Author Index ..... 405

Some Issues on Rough Sets

Zdzislaw Pawlak¹,²

¹ Institute for Theoretical and Applied Informatics, Polish Academy of Sciences, ul. Baltycka 5, 44-100 Gliwice, Poland
² Warsaw School of Information Technology (former University of Information Technology and Management), ul. Newelska 6, 01-447 Warsaw, Poland
[email protected]

1 Introduction

The aim of this paper is to give the rudiments of rough set theory and to present some recent research directions proposed by the author.
Rough set theory is a new mathematical approach to imperfect knowledge. The problem of imperfect knowledge has been tackled for a long time by philosophers, logicians and mathematicians. Recently it has also become a crucial issue for computer scientists, particularly in the area of artificial intelligence. There are many approaches to the problem of how to understand and manipulate imperfect knowledge. The most successful one is, no doubt, the fuzzy set theory proposed by Lotfi Zadeh [1]. Rough set theory, proposed by the author in [2], presents still another attempt at this problem. This theory has attracted the attention of many researchers and practitioners all over the world, who have contributed essentially to its development and applications. Rough set theory overlaps with many other theories; however, we will refrain from discussing these connections here. Despite this, rough set theory may be considered as an independent discipline in its own right.
Rough set theory has found many interesting applications. The rough set approach seems to be of fundamental importance to AI and cognitive sciences, especially in the areas of machine learning, knowledge acquisition, decision analysis, knowledge discovery from databases, expert systems, inductive reasoning and pattern recognition. The main advantage of rough set theory in data analysis is that it does not need any preliminary or additional information about data – like probability in statistics, or basic probability assignment in Dempster-Shafer theory, grade of membership or the value of possibility in fuzzy set theory. One can observe the following about the rough set approach:

– introduction of efficient algorithms for finding hidden patterns in data,
– determination of minimal sets of data (data reduction),
– evaluation of the significance of data,
– generation of sets of decision rules from data,
– easy-to-understand formulation,
– straightforward interpretation of obtained results,
– suitability of many of its algorithms for parallel processing.



Rough set theory has been extended in many ways (see, e.g., [3–17]) but we will not discuss these issues in this paper. Basic ideas of rough set theory and its extensions, as well as many interesting applications, can be found in books (see, e.g., [18–27, 12, 28–30]), special issues of journals (see, e.g., [31–34, 34–38]), proceedings of international conferences (see, e.g., [39–49]), tutorials (e.g., [50–53]), and on the internet (see, e.g., www.roughsets.org, logic.mimuw.edu.pl, rsds.wsiz.rzeszow.pl).
The paper is organized as follows: Section 2 (Basic Concepts) contains a general formulation of the basic ideas of rough set theory together with a brief discussion of its place in classical set theory. Section 3 (Rough Sets and Reasoning from Data) presents the application of the rough set concept to reasoning from data (data mining). Section 4 (Rough Sets and Bayes' Theorem) gives a new look at Bayes' theorem and shows that Bayes' rule can be used differently to that offered by classical Bayesian reasoning methodology. Section 5 (Rough Sets and Conflict Analysis) discusses the application of the rough set concept to the study of conflict. In Section 6 (Data Analysis and Flow Graphs) we show that many problems in data analysis can be boiled down to flow analysis in a flow network.
This paper is a modified version of lectures delivered at the Tarragona University seminar on Formal Languages and Rough Sets in August 2003.

2 Rough Sets – Basic Concepts

2.1 Introduction

In this section we give some general remarks on the concept of a set and the place of rough sets in set theory.
The concept of a set is fundamental for the whole of mathematics. Modern set theory was formulated by Georg Cantor [54]. Bertrand Russell discovered that the intuitive notion of a set proposed by Cantor leads to antinomies [55]. Two kinds of remedy for this defect have been proposed: axiomatization of Cantorian set theory and alternative set theories.
Another issue discussed in connection with the notion of a set or a concept is vagueness (see, e.g., [56–61]). Mathematics requires that all mathematical notions (including set) must be exact (Gottlob Frege [62]). However, philosophers and recently computer scientists have become interested in vague concepts. In fuzzy set theory vagueness is defined by graduated membership. Rough set theory expresses vagueness not by means of membership but by employing a boundary region of a set. If the boundary region of a set is empty it means that the set is crisp, otherwise the set is rough (inexact). A nonempty boundary region of a set means that our knowledge about the set is not sufficient to define the set precisely.


A detailed analysis of sorites paradoxes for vague concepts using rough sets and fuzzy sets is presented in [63]. In this section the relationship between sets, fuzzy sets and rough sets will be outlined and briefly discussed.

2.2 Sets

The notion of a set is not only basic for mathematics but it also plays an important role in natural language. We often speak about sets (collections) of various objects of interest, e.g., collections of books, paintings, people, etc.
The intuitive meaning of a set according to some dictionaries is the following:
"A number of things of the same kind that belong or are used together." (Webster's Dictionary)
"Number of things of the same kind, that belong together because they are similar or complementary to each other." (The Oxford English Dictionary)
Thus a set is a collection of things which are somehow related to each other, but the nature of this relationship is not specified in these definitions. In fact these definitions echo the original definition given by Cantor [54], which reads as follows:
"Unter einer Mannigfaltigkeit oder Menge verstehe ich nämlich allgemein jedes Viele, welches sich als Eines denken lässt, d.h. jeden Inbegriff bestimmter Elemente, welcher durch ein Gesetz zu einem Ganzen verbunden werden kann."
Thus according to Cantor a set is a collection of any objects which, according to some law, can be considered as a whole. All mathematical objects, e.g., relations, functions, numbers, etc., are some kind of sets. In fact set theory is needed in mathematics to provide rigor.
Russell discovered that the Cantorian notion of a set leads to antinomies (contradictions). One of the best known antinomies, called the powerset antinomy, goes as follows: consider the (infinite) set X of all sets. Thus X is the greatest set. Let Y denote the set of all subsets of X. Obviously Y is greater than X, because the number of subsets of a set is always greater than the number of its elements. Hence X is not the greatest set as assumed and we arrive at a contradiction.
Thus the basic concept of mathematics, the concept of a set, is contradictory. This means that a set cannot be a collection of arbitrary elements, as was stipulated by Cantor. As a remedy for this defect several improvements of set theory have been proposed. For example,
– Axiomatic set theory (Zermelo and Fraenkel, 1904).
– Theory of types (Whitehead and Russell, 1910).
– Theory of classes (v. Neumann, 1920).
All these improvements consist in restrictions put on the objects which can form a set. The restrictions are expressed by properly chosen axioms, which say how the set can be built.


They are called, in contrast to Cantor's intuitive set theory, axiomatic set theories.
Instead of improving Cantor's set theory by its axiomatization, some mathematicians proposed an escape from classical set theory by creating a completely new idea of a set, which would free the theory from antinomies. Some of them are listed below.
– Mereology (Leśniewski, 1915).
– Alternative set theory (Vopěnka, 1970).
– "Penumbral" set theory (Apostoli and Kanda, 1999).
No doubt the most interesting proposal was given by Stanisław Leśniewski [64], who proposed, instead of the membership relation between elements and sets employed in classical set theory, the relation of "being a part". In his set theory, called mereology, this relation is a fundamental one.
None of the three "new" set theories mentioned above were accepted by mathematicians; however, Leśniewski's mereology has attracted some attention of philosophers and recently also of computer scientists (e.g., Lech Polkowski and Andrzej Skowron [6]).
In classical set theory a set is uniquely determined by its elements. In other words, this means that every element must be uniquely classified as belonging to the set or not. That is to say, the notion of a set is a crisp (precise) one. For example, the set of odd numbers is crisp because every number is either odd or even. In contrast, the notion of a beautiful painting is vague, because we are unable to classify uniquely all paintings into two classes: beautiful and not beautiful. Thus beauty is not a precise but a vague concept.
In mathematics we have to use crisp notions, otherwise precise reasoning would be impossible. However, philosophers for many years have also been interested in vague (imprecise) notions. Almost all concepts we are using in natural language are vague. Therefore common sense reasoning based on natural language must be based on vague concepts and not on classical logic. This is why vagueness is important for philosophers and recently also for computer scientists.
Vagueness is usually associated with the boundary region approach (i.e., the existence of objects which cannot be uniquely classified to the set or its complement), which was first formulated in 1893 by the father of modern logic, Gottlob Frege [62], who wrote:
"Der Begriff muss scharf begrenzt sein. Einem unscharf begrenzten Begriffe würde ein Bezirk entsprechen, der nicht überall eine scharfe Grenzlinie hätte, sondern stellenweise ganz verschwimmend in die Umgebung überginge. Das wäre eigentlich gar kein Bezirk; und so wird ein unscharf definirter Begriff mit Unrecht Begriff genannt. Solche begriffsartige Bildungen kann die Logik nicht als Begriffe anerkennen; es ist unmöglich, von ihnen genaue Gesetze aufzustellen. Das Gesetz des ausgeschlossenen Dritten ist ja eigentlich nur in anderer Form die Forderung, dass der Begriff scharf begrenzt sei. Ein beliebiger Gegenstand x fällt entweder unter den Begriff y, oder er fällt nicht unter ihn: tertium non datur."


Thus according to Frege "The concept must have a sharp boundary. To the concept without a sharp boundary there would correspond an area that had not a sharp boundary-line all around." That is, mathematics must use crisp, not vague concepts, otherwise it would be impossible to reason precisely.
Summing up, vagueness is
– Not allowed in mathematics.
– Interesting for philosophy.
– Necessary for computer science.

2.3 Fuzzy Sets

Zadeh proposed a completely new, elegant approach to vagueness called fuzzy set theory [1]. In his approach an element can belong to a set to a degree k (0 ≤ k ≤ 1), in contrast to classical set theory, where an element must definitely belong or not belong to a set. For example, in the language of classical set theory we can state that one is definitely ill or healthy, whereas in fuzzy set theory we can say that someone is ill (or healthy) in 60 percent (i.e., to the degree 0.6). Of course, at once the question arises where we get the value of the degree from. This issue raised a lot of discussion, but we will refrain from considering this problem here.
Thus the fuzzy membership function can be presented as µ_X(x) ∈ [0, 1], where X is a set and x is an element.
Let us observe that the definition of a fuzzy set involves more advanced mathematical concepts, real numbers and functions, whereas in classical set theory the notion of a set is used as a fundamental notion of the whole of mathematics and is used to derive any other mathematical concepts, e.g., numbers and functions. Consequently fuzzy set theory cannot replace classical set theory, because, in fact, the theory is needed to define fuzzy sets.
The fuzzy membership function has the following properties:

µ_{U−X}(x) = 1 − µ_X(x) for any x ∈ U,
µ_{X∪Y}(x) = max(µ_X(x), µ_Y(x)) for any x ∈ U,          (1)
µ_{X∩Y}(x) = min(µ_X(x), µ_Y(x)) for any x ∈ U.

This means that the membership of an element in the union and intersection of sets is uniquely determined by its membership in the constituent sets. This is a very nice property and allows very simple operations on fuzzy sets, which is a very important feature both theoretically and practically.
Fuzzy set theory and its applications have developed very extensively over recent years and have attracted the attention of practitioners, logicians and philosophers worldwide.
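As a minimal illustration of equation (1), the max/min calculus can be coded directly. The following Python sketch is not from the paper and the membership values in it are invented for the example:

```python
# Zadeh's operations on fuzzy membership values; the values are invented
# solely for this example.

def complement(mu_x: float) -> float:
    return 1.0 - mu_x            # membership in U - X

def union(mu_x: float, mu_y: float) -> float:
    return max(mu_x, mu_y)       # membership in X ∪ Y

def intersection(mu_x: float, mu_y: float) -> float:
    return min(mu_x, mu_y)       # membership in X ∩ Y

mu_ill, mu_tired = 0.6, 0.3      # one person's membership in "ill" and "tired"
print(complement(mu_ill))              # 0.4 -> "not ill"
print(union(mu_ill, mu_tired))         # 0.6 -> "ill or tired"
print(intersection(mu_ill, mu_tired))  # 0.3 -> "ill and tired"
```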


2.4 Rough Sets

Rough set theory [2, 18] is still another approach to vagueness. Similarly to fuzzy set theory it is not an alternative to classical set theory but it is embedded in it. Rough set theory can be viewed as a specific implementation of Frege's idea of vagueness, i.e., imprecision in this approach is expressed by a boundary region of a set, and not by a partial membership, as in fuzzy set theory.
The rough set concept can be defined quite generally by means of topological operations, interior and closure, called approximations. Let us describe this problem more precisely.
Suppose we are given a set of objects U called the universe and an indiscernibility relation R ⊆ U × U, representing our lack of knowledge about elements of U. For the sake of simplicity we assume that R is an equivalence relation. Let X be a subset of U. We want to characterize the set X with respect to R. To this end we will need the basic concepts of rough set theory given below.
– The lower approximation of a set X (with respect to R) is the set of all objects which can be for certain classified as X with respect to R (are certainly X with respect to R).
– The upper approximation of a set X (with respect to R) is the set of all objects which can be possibly classified as X with respect to R (are possibly X with respect to R).
– The boundary region of a set X (with respect to R) is the set of all objects which can be classified neither as X nor as not-X with respect to R.
Now we are ready to give the definition of rough sets.
– Set X is crisp (exact with respect to R), if the boundary region of X is empty.
– Set X is rough (inexact with respect to R), if the boundary region of X is nonempty.
Thus a set is rough (imprecise) if it has a nonempty boundary region; otherwise the set is crisp (precise). This is exactly the idea of vagueness proposed by Frege.
The approximations and the boundary region can be defined more precisely. To this end we need some additional notation. The equivalence class of R determined by element x will be denoted by R(x). The indiscernibility relation in a certain sense describes our lack of knowledge about the universe. Equivalence classes of the indiscernibility relation, called granules generated by R, represent elementary portions of knowledge we are able to perceive due to R. Thus in view of the indiscernibility relation, in general, we are unable to observe individual objects but we are forced to reason only about the accessible granules of knowledge.
Formal definitions of approximations and the boundary region are as follows:
R-lower approximation of X:

R_*(X) = ⋃_{x∈U} {R(x) : R(x) ⊆ X},          (2)

R-upper approximation of X:

R^*(X) = ⋃_{x∈U} {R(x) : R(x) ∩ X ≠ ∅},       (3)

R-boundary region of X:

BN_R(X) = R^*(X) − R_*(X).                    (4)

As we can see from the definitions, approximations are expressed in terms of granules of knowledge. The lower approximation of a set is the union of all granules which are entirely included in the set; the upper approximation is the union of all granules which have a non-empty intersection with the set; the boundary region of the set is the difference between the upper and the lower approximation. In other words, due to the granularity of knowledge, rough sets cannot be characterized by using available knowledge. Therefore with every rough set we associate two crisp sets, called its lower and upper approximation. Intuitively, the lower approximation of a set consists of all elements that surely belong to the set, whereas the upper approximation of the set consists of all elements that possibly belong to the set, and the boundary region of the set consists of all elements that cannot be classified uniquely to the set or its complement, by employing available knowledge. Thus any rough set, in contrast to a crisp set, has a non-empty boundary region. The approximation definitions are depicted in Figure 1.

Fig. 1. A rough set
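The granule-based definitions (2)–(4) translate directly into a few lines of code. The sketch below is only an illustration written for these notes (the universe, the partition standing for the relation R, and the set X are invented); it computes the lower approximation, the upper approximation and the boundary region:

```python
# Lower/upper approximation and boundary region, computed from the partition
# of U into equivalence classes (granules) of R.  Illustration only.

def approximations(partition, X):
    """partition: list of disjoint sets covering U; X: subset of U."""
    lower = set().union(*(g for g in partition if g <= X))   # granules inside X
    upper = set().union(*(g for g in partition if g & X))    # granules meeting X
    return lower, upper, upper - lower                       # boundary region

partition = [{1, 2}, {3, 4, 5}, {6}, {7, 8}]   # granules of knowledge
X = {1, 2, 3, 6}

lower, upper, boundary = approximations(partition, X)
print(lower)     # {1, 2, 6}
print(upper)     # {1, 2, 3, 4, 5, 6}
print(boundary)  # {3, 4, 5}  -- nonempty, so X is rough with respect to R
```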


Approximations have the following properties:

R_*(X) ⊆ X ⊆ R^*(X),
R_*(∅) = R^*(∅) = ∅;  R_*(U) = R^*(U) = U,
R^*(X ∪ Y) = R^*(X) ∪ R^*(Y),
R_*(X ∩ Y) = R_*(X) ∩ R_*(Y),
R_*(X ∪ Y) ⊇ R_*(X) ∪ R_*(Y),                 (5)
R^*(X ∩ Y) ⊆ R^*(X) ∩ R^*(Y),
X ⊆ Y → R_*(X) ⊆ R_*(Y) & R^*(X) ⊆ R^*(Y),
R_*(−X) = −R^*(X),
R^*(−X) = −R_*(X),
R_*(R_*(X)) = R^*(R_*(X)) = R_*(X),
R^*(R^*(X)) = R_*(R^*(X)) = R^*(X).

It is easily seen that the approximations are in fact interior and closure operations in a topology generated by the indiscernibility relation. Thus fuzzy set theory and rough set theory require completely different mathematical settings.
Rough sets can also be defined by employing, instead of approximations, the rough membership function [65]

µ_X^R : U → [0, 1],                                        (6)

where

µ_X^R(x) = card(X ∩ R(x)) / card(R(x)),                    (7)

and card(X) denotes the cardinality of X.
The rough membership function expresses the conditional probability that x belongs to X given R and can be interpreted as a degree that x belongs to X in view of information about x expressed by R. The meaning of the rough membership function is depicted in Figure 2.
The rough membership function can be used to define the approximations and the boundary region of a set, as shown below:

R_*(X) = {x ∈ U : µ_X^R(x) = 1},
R^*(X) = {x ∈ U : µ_X^R(x) > 0},                           (8)
BN_R(X) = {x ∈ U : 0 < µ_X^R(x) < 1}.

It can be shown that the membership function has the following properties [65]:

µ_X^R(x) = 1 iff x ∈ R_*(X),
µ_X^R(x) = 0 iff x ∈ U − R^*(X),
0 < µ_X^R(x) < 1 iff x ∈ BN_R(X),                          (9)
µ_{U−X}^R(x) = 1 − µ_X^R(x) for any x ∈ U,
µ_{X∪Y}^R(x) ≥ max(µ_X^R(x), µ_Y^R(x)) for any x ∈ U,
µ_{X∩Y}^R(x) ≤ min(µ_X^R(x), µ_Y^R(x)) for any x ∈ U.


Fig. 2. Rough membership function

From these properties it follows that the rough membership differs essentially from the fuzzy membership, because the memberships for the union and intersection of sets, in general, cannot be computed from the memberships of their constituents, as is the case for fuzzy sets. Thus formally the rough membership is a generalization of the fuzzy membership. Besides, the rough membership function, in contrast to the fuzzy membership function, has a probabilistic flavour.
Now we can give two definitions of rough sets.
Set X is rough with respect to R if R_*(X) ≠ R^*(X).
Set X is rough with respect to R if for some x, 0 < µ_X^R(x) < 1.
It is interesting to observe that the above definitions are not equivalent [65], but we will not discuss this issue here.
One can define the following four basic classes of rough sets, i.e., four categories of vagueness:

R_*(X) ≠ ∅ and R^*(X) ≠ U,  iff X is roughly R-definable,
R_*(X) = ∅ and R^*(X) ≠ U,  iff X is internally R-indefinable,     (10)
R_*(X) ≠ ∅ and R^*(X) = U,  iff X is externally R-indefinable,
R_*(X) = ∅ and R^*(X) = U,  iff X is totally R-indefinable.

The intuitive meaning of this classification is the following.
If X is roughly R-definable, this means that we are able to decide for some elements of U whether they belong to X or −X, using R.


If X is internally R-indefinable, this means that we are able to decide whether some elements of U belong to −X, but we are unable to decide, for any element of U, whether it belongs to X or not, using R.
If X is externally R-indefinable, this means that we are able to decide for some elements of U whether they belong to X, but we are unable to decide, for any element of U, whether it belongs to −X or not, using R.
If X is totally R-indefinable, we are unable to decide for any element of U whether it belongs to X or −X, using R.
A rough set can also be characterized numerically by the following coefficient:

α_R(X) = card(R_*(X)) / card(R^*(X)),                      (11)

called the accuracy of approximation.

Obviously 0 ≤ α_R(X) ≤ 1. If α_R(X) = 1, X is crisp with respect to R (X is precise with respect to R); otherwise, if α_R(X) < 1, X is rough with respect to R (X is vague with respect to R).
It is interesting to compare the definitions of classical sets, fuzzy sets and rough sets. A classical set is a primitive notion and is defined intuitively or axiomatically. Fuzzy sets are defined by employing the fuzzy membership function, which involves advanced mathematical structures, numbers and functions. Rough sets are defined by approximations. Thus this definition also requires advanced mathematical concepts.
Let us also mention that rough set theory clearly distinguishes two very important concepts, vagueness and uncertainty, which are very often confused in the AI literature. Vagueness is the property of sets and can be described by approximations, whereas uncertainty is the property of elements of a set and can be expressed by the rough membership function.
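A short sketch (again an illustration with invented data, reusing the partition-based representation from the previous sketch) shows how the rough membership function (7) and the accuracy of approximation (11) reduce to simple counting:

```python
from fractions import Fraction

# Rough membership (7) and accuracy of approximation (11); illustration only.

def equivalence_class(partition, x):
    return next(g for g in partition if x in g)              # R(x)

def rough_membership(partition, X, x):
    g = equivalence_class(partition, x)
    return Fraction(len(g & X), len(g))                      # card(X ∩ R(x))/card(R(x))

def accuracy(partition, X):
    lower = set().union(*(g for g in partition if g <= X))
    upper = set().union(*(g for g in partition if g & X))
    return Fraction(len(lower), len(upper))                  # card(R_*(X))/card(R^*(X))

partition = [{1, 2}, {3, 4, 5}, {6}, {7, 8}]
X = {1, 2, 3, 6}

print(rough_membership(partition, X, 1))   # 1   -> 1 certainly belongs to X
print(rough_membership(partition, X, 3))   # 1/3 -> 3 lies in the boundary region
print(accuracy(partition, X))              # 1/2 -> X is rough (accuracy < 1)
```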

3 Rough Sets and Reasoning from Data

3.1 Introduction

In this section we define the basic concepts of rough set theory in terms of data, in contrast to the general formulation presented in Section 2. This is necessary if we want to apply rough sets to reason from data.
In what follows we assume that, in contrast to classical set theory, we have some additional data (information, knowledge) about the elements of a universe of discourse. Elements that exhibit the same features are indiscernible (similar) and form blocks that can be understood as elementary granules (concepts) of knowledge about the universe. For example, patients suffering from a certain disease and displaying the same symptoms are indiscernible and may be thought of as representing a granule (disease unit) of medical knowledge. These granules can be considered as elementary building blocks of knowledge. Elementary concepts can be combined into compound concepts, i.e., concepts that are uniquely determined in terms of elementary concepts. Any union of elementary sets is called a crisp set, and any other sets are referred to as rough (vague, imprecise).


3.2 An Example

Before we formulate the above ideas more precisely, let us consider a simple tutorial example.
Data are often presented as a table, columns of which are labeled by attributes, rows by objects of interest, and entries of the table are attribute values. For example, in a table containing information about patients suffering from a certain disease, objects are patients (strictly speaking their IDs), attributes can be, for example, blood pressure, body temperature, etc., whereas the entry corresponding to object Smith and the attribute blood pressure can be normal. Such tables are known as information tables, attribute-value tables or information systems. We will use here the term information system. Below an example of an information system is presented. Suppose we are given data about 6 patients, as shown in Table 1.

Table 1. Exemplary information system

Patient  Headache  Muscle-pain  Temperature  Flu
p1       no        yes          high         yes
p2       yes       no           high         yes
p3       yes       yes          very high    yes
p4       no        yes          normal       no
p5       yes       no           high         no
p6       no        yes          very high    yes

Columns of the table are labeled by attributes (symptoms) and rows by objects (patients), whereas entries of the table are attribute values. Thus each row of the table can be seen as information about a specific patient. For example, patient p2 is characterized in the table by the following attribute-value set: (Headache, yes), (Muscle-pain, no), (Temperature, high), (Flu, yes), which forms the information about the patient.
In the table patients p2, p3 and p5 are indiscernible with respect to the attribute Headache, patients p3 and p6 are indiscernible with respect to the attributes Muscle-pain and Flu, and patients p2 and p5 are indiscernible with respect to the attributes Headache, Muscle-pain and Temperature. Hence, for example, the attribute Headache generates two elementary sets {p2, p3, p5} and {p1, p4, p6}, whereas the attributes Headache and Muscle-pain form the following elementary sets: {p1, p4, p6}, {p2, p5} and {p3}. Similarly one can define elementary sets generated by any subset of attributes.
Patient p2 has flu, whereas patient p5 does not, and they are indiscernible with respect to the attributes Headache, Muscle-pain and Temperature; hence flu cannot be characterized in terms of the attributes Headache, Muscle-pain and Temperature.


Hence p2 and p5 are the boundary-line cases, which cannot be properly classified in view of the available knowledge. The remaining patients p1, p3 and p6 display symptoms which enable us to classify them with certainty as having flu, patients p2 and p5 cannot be excluded as having flu, and patient p4 for sure does not have flu, in view of the displayed symptoms. Thus the lower approximation of the set of patients having flu is the set {p1, p3, p6} and the upper approximation of this set is the set {p1, p2, p3, p5, p6}, whereas the boundary-line cases are patients p2 and p5. Similarly, p4 does not have flu and p2, p5 cannot be excluded as having flu; thus the lower approximation of this concept is the set {p4}, whereas the upper approximation is the set {p2, p4, p5} and the boundary region of the concept "not flu" is the set {p2, p5}, the same as in the previous case.
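These computations are easy to mechanize. The following sketch (written for these notes, not code from the paper) encodes Table 1, builds the elementary sets for the attributes Headache, Muscle-pain and Temperature, and recovers the lower approximation {p1, p3, p6}, the upper approximation {p1, p2, p3, p5, p6} and the boundary {p2, p5} of the concept "flu":

```python
from collections import defaultdict

# Table 1 as an information system; approximations of the concept "flu".
table = {
    "p1": {"Headache": "no",  "Muscle-pain": "yes", "Temperature": "high",      "Flu": "yes"},
    "p2": {"Headache": "yes", "Muscle-pain": "no",  "Temperature": "high",      "Flu": "yes"},
    "p3": {"Headache": "yes", "Muscle-pain": "yes", "Temperature": "very high", "Flu": "yes"},
    "p4": {"Headache": "no",  "Muscle-pain": "yes", "Temperature": "normal",    "Flu": "no"},
    "p5": {"Headache": "yes", "Muscle-pain": "no",  "Temperature": "high",      "Flu": "no"},
    "p6": {"Headache": "no",  "Muscle-pain": "yes", "Temperature": "very high", "Flu": "yes"},
}

def elementary_sets(table, attributes):
    """B-elementary sets (equivalence classes of I(B)) for B = attributes."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attributes)].add(obj)
    return list(classes.values())

B = ["Headache", "Muscle-pain", "Temperature"]
flu = {obj for obj, row in table.items() if row["Flu"] == "yes"}

partition = elementary_sets(table, B)
lower = set().union(*(g for g in partition if g <= flu))
upper = set().union(*(g for g in partition if g & flu))

print(sorted(lower))          # ['p1', 'p3', 'p6']
print(sorted(upper))          # ['p1', 'p2', 'p3', 'p5', 'p6']
print(sorted(upper - lower))  # ['p2', 'p5']
```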

3.3 Information Systems

Now we are ready to formulate the basic concepts of rough set theory using data.
Suppose we are given two finite, non-empty sets U and A, where U is the universe and A is a set of attributes. The pair S = (U, A) will be called an information system. With every attribute a ∈ A we associate a set V_a of its values, called the domain of a. Any subset B of A determines a binary relation I(B) on U, which will be called an indiscernibility relation, and is defined as follows:

x I(B) y if and only if a(x) = a(y) for every a ∈ B,       (12)

where a(x) denotes the value of attribute a for element x.
Obviously I(B) is an equivalence relation. The family of all equivalence classes of I(B), i.e., the partition determined by B, will be denoted by U/I(B), or simply U/B; an equivalence class of I(B), i.e., a block of the partition U/B, containing x will be denoted by B(x). If (x, y) belongs to I(B) we will say that x and y are B-indiscernible. Equivalence classes of the relation I(B) (or blocks of the partition U/B) are referred to as B-elementary sets. In the rough set approach the elementary sets are the basic building blocks (concepts) of our knowledge about reality.
Now the approximations can be defined as follows:

B_*(X) = {x ∈ U : B(x) ⊆ X},                               (13)

B^*(X) = {x ∈ U : B(x) ∩ X ≠ ∅},                           (14)

called the B-lower and the B-upper approximation of X, respectively. The set

BN_B(X) = B^*(X) − B_*(X),                                 (15)

will be referred to as the B-boundary region of X.
If the boundary region of X is the empty set, i.e., BN_B(X) = ∅, then the set X is crisp (exact) with respect to B; in the opposite case, i.e., if BN_B(X) ≠ ∅, the set X is referred to as rough (inexact) with respect to B.


The properties of the approximations can now be presented as:

B_*(X) ⊆ X ⊆ B^*(X),
B_*(∅) = B^*(∅) = ∅,  B_*(U) = B^*(U) = U,
B^*(X ∪ Y) = B^*(X) ∪ B^*(Y),
B_*(X ∩ Y) = B_*(X) ∩ B_*(Y),
X ⊆ Y implies B_*(X) ⊆ B_*(Y) and B^*(X) ⊆ B^*(Y),         (16)
B_*(X ∪ Y) ⊇ B_*(X) ∪ B_*(Y),
B^*(X ∩ Y) ⊆ B^*(X) ∩ B^*(Y),
B_*(−X) = −B^*(X),
B^*(−X) = −B_*(X),
B_*(B_*(X)) = B^*(B_*(X)) = B_*(X),
B^*(B^*(X)) = B_*(B^*(X)) = B^*(X).

3.4 Decision Tables

An information system in which we distinguish two classes of attributes, called condition and decision (action) attributes, is called a decision table. The condition and decision attributes define partitions of the decision table universe. We aim at approximating the partition defined by the decision attributes by means of the partition defined by the condition attributes. For example, in Table 1 the attributes Headache, Muscle-pain and Temperature can be considered as condition attributes, whereas the attribute Flu can be considered as a decision attribute.
A decision table with condition attributes C and decision attributes D will be denoted by S = (U, C, D).
Each row of a decision table determines a decision rule, which specifies the decisions (actions) that should be taken when the conditions pointed out by the condition attributes are satisfied. For example, in Table 1 the condition (Headache, no), (Muscle-pain, yes), (Temperature, high) determines uniquely the decision (Flu, yes). Objects in a decision table are used as labels of decision rules.
Decision rules 2) and 5) in Table 1 have the same conditions but different decisions. Such rules are called inconsistent (nondeterministic, conflicting); otherwise the rules are referred to as consistent (certain, deterministic, nonconflicting). Sometimes consistent decision rules are called sure rules, and inconsistent rules are called possible rules. Decision tables containing inconsistent decision rules are called inconsistent (nondeterministic, conflicting); otherwise the table is consistent (deterministic, non-conflicting).
The ratio of the number of consistent rules to all rules in a decision table can be used as a consistency factor of the decision table, and will be denoted by γ(C, D), where C and D are condition and decision attributes respectively. Thus if γ(C, D) = 1 the decision table is consistent and if γ(C, D) ≠ 1 the decision table is inconsistent. For example, for Table 1 we have γ(C, D) = 4/6.


Decision rules are often presented in a form called if... then... rules. For example, rule 1) in Table 1 can be presented as follows:

if (Headache, no) and (Muscle-pain, yes) and (Temperature, high) then (Flu, yes).

A set of decision rules is called a decision algorithm. Thus with each decision table we can associate a decision algorithm consisting of all decision rules occurring in the decision table.
We must, however, make a distinction between decision tables and decision algorithms. A decision table is a collection of data, whereas a decision algorithm is a collection of rules, e.g., logical expressions. To deal with data we use various mathematical methods, e.g., statistics, but to analyze rules we must employ logical tools. Thus these two approaches are not equivalent; however, for simplicity we will often present decision rules here in the form of implications, without referring deeper to their logical nature, as is often practiced in AI.

3.5 Dependency of Attributes

Another important issue in data analysis is discovering dependencies between attributes. Intuitively, a set of attributes D depends totally on a set of attributes C, denoted C ⇒ D, if all values of attributes from D are uniquely determined by values of attributes from C. In other words, D depends totally on C if there exists a functional dependency between the values of D and C.
For example, in Table 1 there are no total dependencies whatsoever. If in Table 1 the value of the attribute Temperature for patient p5 were "no" instead of "high", there would be a total dependency {Temperature} ⇒ {Flu}, because to each value of the attribute Temperature there would correspond a unique value of the attribute Flu.
We also need a more general concept of dependency of attributes, called a partial dependency of attributes. Let us depict the idea by an example, referring to Table 1. In this table, for example, the attribute Temperature determines uniquely only some values of the attribute Flu. That is, (Temperature, very high) implies (Flu, yes), and similarly (Temperature, normal) implies (Flu, no), but (Temperature, high) does not always imply (Flu, yes). Thus partial dependency means that only some values of D are determined by values of C.
Formally, dependency can be defined in the following way. Let D and C be subsets of A. We will say that D depends on C in a degree k (0 ≤ k ≤ 1), denoted C ⇒_k D, if k = γ(C, D). If k = 1 we say that D depends totally on C, and if k < 1, we say that D depends partially (in a degree k) on C. The coefficient k expresses the ratio of all elements of the universe which can be properly classified to blocks of the partition U/D, employing the attributes C. Thus the concept of dependency of attributes is strictly connected with that of consistency of the decision table.


For example, for the dependency {Headache, Muscle-pain, Temperature} ⇒ {Flu} we get k = 4/6 = 2/3, because four out of six patients can be uniquely classified as having flu or not, employing the attributes Headache, Muscle-pain and Temperature.
If we were interested in how exactly patients can be diagnosed using only the attribute Temperature, that is, in the degree of the dependency {Temperature} ⇒ {Flu}, we would get k = 3/6 = 1/2, since in this case only three patients, p3, p4 and p6, out of six can be uniquely classified as having flu or not. In contrast to the previous case, patient p1 cannot be classified now as having flu or not. Hence the single attribute Temperature offers a worse classification than the whole set of attributes Headache, Muscle-pain and Temperature. It is interesting to observe that neither Headache nor Muscle-pain can be used to recognize flu, because for both dependencies {Headache} ⇒ {Flu} and {Muscle-pain} ⇒ {Flu} we have k = 0.
It can be easily seen that if D depends totally on C then I(C) ⊆ I(D). That means that the partition generated by C is finer than the partition generated by D. Observe that the concept of dependency discussed above corresponds to that considered in relational databases.
If D depends in degree k, 0 ≤ k ≤ 1, on C, then

γ(C, D) = card(POS_C(D)) / card(U),                        (17)

where

POS_C(D) = ⋃_{X ∈ U/I(D)} C_*(X).                          (18)

The expression POS_C(D), called the positive region of the partition U/D with respect to C, is the set of all elements of U that can be uniquely classified to blocks of the partition U/D by means of C.
Summing up: D is totally (partially) dependent on C if all (some) elements of the universe U can be uniquely classified to blocks of the partition U/D, employing C.
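As a sketch (illustration only; it re-encodes Table 1 compactly so that the snippet is self-contained), the degree of dependency k = γ(C, D) of (17)–(18) is obtained by collecting the positive region: an object belongs to POS_C(D) exactly when its whole C-elementary set falls into a single block of U/D.

```python
from collections import defaultdict
from fractions import Fraction

# Degree of dependency gamma(C, D) for the data of Table 1; illustration only.
table = {
    "p1": ("no",  "yes", "high",      "yes"),
    "p2": ("yes", "no",  "high",      "yes"),
    "p3": ("yes", "yes", "very high", "yes"),
    "p4": ("no",  "yes", "normal",    "no"),
    "p5": ("yes", "no",  "high",      "no"),
    "p6": ("no",  "yes", "very high", "yes"),
}
ATTR = {"Headache": 0, "Muscle-pain": 1, "Temperature": 2, "Flu": 3}

def elementary_sets(attrs):
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[ATTR[a]] for a in attrs)].add(obj)
    return list(classes.values())

def gamma(C, D):
    blocks = elementary_sets(D)
    pos = set().union(*(g for g in elementary_sets(C)
                        if any(g <= b for b in blocks)))     # POS_C(D)
    return Fraction(len(pos), len(table))                    # card(POS_C(D))/card(U)

print(gamma(["Headache", "Muscle-pain", "Temperature"], ["Flu"]))  # 2/3 (= 4/6)
print(gamma(["Temperature"], ["Flu"]))                             # 1/2 (= 3/6)
print(gamma(["Headache"], ["Flu"]))                                # 0
print(gamma(["Muscle-pain"], ["Flu"]))                             # 0
```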

3.6 Reduction of Attributes

We often face the question whether we can remove some data from a data table while preserving its basic properties, that is, whether a table contains some superfluous data.
For example, it is easily seen that if we drop in Table 1 either the attribute Headache or Muscle-pain we get a data set which is equivalent to the original one with regard to approximations and dependencies. That is, we get in this case the same accuracy of approximation and degree of dependencies as in the original table, but using a smaller set of attributes.
In order to express the above idea more precisely we need some auxiliary notions. Let B be a subset of A and let a belong to B.


– We say that a is dispensable in B if I(B) = I(B − {a}); otherwise a is indispensable in B.
– Set B is independent if all its attributes are indispensable.
– A subset B' of B is a reduct of B if B' is independent and I(B') = I(B).
Thus a reduct is a set of attributes that preserves the partition. This means that a reduct is a minimal subset of attributes that enables the same classification of elements of the universe as the whole set of attributes. In other words, attributes that do not belong to a reduct are superfluous with regard to classification of elements of the universe.
Reducts have several important properties. In what follows we will present two of them.
First, we define the notion of a core of attributes. Let B be a subset of A. The core of B is the set of all indispensable attributes of B.
The following is an important property connecting the notion of the core and reducts:

Core(B) = ⋂ Red(B),                                        (19)

where Red(B) is the set of all reducts of B.
Because the core is the intersection of all reducts, it is included in every reduct, i.e., each element of the core belongs to some reduct. Thus, in a sense, the core is the most important subset of attributes, for none of its elements can be removed without affecting the classification power of attributes.
To simplify an information table further, we can eliminate some values of attributes from the table in such a way that we are still able to discern objects in the table as in the original one. To this end we can apply a procedure similar to that used to eliminate superfluous attributes, which is defined next.
– We will say that the value of attribute a ∈ B is dispensable for x if B(x) = B^a(x), where B^a = B − {a}; otherwise the value of attribute a is indispensable for x.
– If for every attribute a ∈ B the value of a is indispensable for x, then B will be called orthogonal for x.
– A subset B' ⊆ B is a value reduct of B for x iff B' is orthogonal for x and B(x) = B'(x).
The set of all indispensable values of attributes in B for x will be called the value core of B for x, and will be denoted CORE^x(B).
Also in this case we have

CORE^x(B) = ⋂ Red^x(B),                                    (20)

where Red^x(B) is the family of all reducts of B for x.
Suppose we are given a dependency C ⇒ D. It may happen that the set D depends not on the whole set C but on its subset C', and therefore we might be interested in finding this subset. In order to solve this problem we need the notion of a relative reduct, which will be defined and discussed next.


Let C, D ⊆ A. Obviously if C' ⊆ C is a D-reduct of C, then C' is a minimal subset of C such that

γ(C, D) = γ(C', D).                                        (21)

– We will say that attribute a ∈ C is D-dispensable in C if POS_C(D) = POS_{C−{a}}(D); otherwise the attribute a is D-indispensable in C.
– If all attributes a ∈ C are D-indispensable in C, then C will be called D-independent.
– A subset C' ⊆ C is a D-reduct of C iff C' is D-independent and POS_C(D) = POS_{C'}(D).
The set of all D-indispensable attributes in C will be called the D-core of C, and will be denoted by CORE_D(C). In this case we also have the property

CORE_D(C) = ⋂ Red_D(C),                                    (22)

where Red_D(C) is the family of all D-reducts of C. If D = C we get the previous definitions.
For example, in Table 1 there are two relative reducts with respect to Flu, {Headache, Temperature} and {Muscle-pain, Temperature}, of the set of condition attributes Headache, Muscle-pain, Temperature. That means that either the attribute Headache or Muscle-pain can be eliminated from the table, and consequently instead of Table 1 we can use either Table 2 or Table 3. For Table 1 the relative core with respect to the set {Headache, Muscle-pain, Temperature} is the attribute Temperature. This confirms our previous considerations showing that Temperature is the only symptom that enables, at least, partial diagnosis of patients.

Table 2. Data table obtained from Table 1 by dropping the attribute Muscle-pain

Patient  Headache  Temperature  Flu
p1       no        high         yes
p2       yes       high         yes
p3       yes       very high    yes
p4       no        normal       no
p5       yes       high         no
p6       no        very high    yes

Table 3. Data table obtained from Table 1 by dropping the attribute Headache

Patient  Muscle-pain  Temperature  Flu
p1       yes          high         yes
p2       no           high         yes
p3       yes          very high    yes
p4       yes          normal       no
p5       no           high         no
p6       yes          very high    yes


We will also need the concepts of a value reduct and value core. Suppose we are given a dependency C ⇒ D where C is a relative D-reduct of C. To investigate this dependency further, we might be interested to know exactly how values of attributes from D depend on values of attributes from C. To this end we need a procedure for eliminating values of attributes from C which do not influence the values of attributes from D.
– We say that the value of attribute a ∈ C is D-dispensable for x ∈ U if C(x) ⊆ D(x) implies C^a(x) ⊆ D(x); otherwise the value of attribute a is D-indispensable for x.
– If for every attribute a ∈ C the value of a is D-indispensable for x, then C will be called D-independent (orthogonal) for x.
– A subset C' ⊆ C is a D-reduct of C for x (a value reduct) iff C' is D-independent for x and C(x) ⊆ D(x) implies C'(x) ⊆ D(x).
The set of all values of attributes in C that are D-indispensable for x will be called the D-core of C for x (the value core), and will be denoted CORE^x_D(C).
We also have the following property:

CORE^x_D(C) = ⋂ Red^x_D(C),                                (23)

where Red^x_D(C) is the family of all D-reducts of C for x.

Using the concept of a value reduct, Table 2 and Table 3 can be simplified and we obtain Table 4 and Table 5, respectively. For Table 4 we get its representation by means of the rules:

if (Headache, no) and (Temperature, high) then (Flu, yes),
if (Headache, yes) and (Temperature, high) then (Flu, yes),
if (Temperature, very high) then (Flu, yes),
if (Temperature, normal) then (Flu, no),
if (Headache, yes) and (Temperature, high) then (Flu, no),
if (Temperature, very high) then (Flu, yes).

For Table 5 we have:

if (Muscle-pain, yes) and (Temperature, high) then (Flu, yes),
if (Muscle-pain, no) and (Temperature, high) then (Flu, yes),
if (Temperature, very high) then (Flu, yes),
if (Temperature, normal) then (Flu, no),
if (Muscle-pain, no) and (Temperature, high) then (Flu, no),
if (Temperature, very high) then (Flu, yes).


Table 4. Simplified Table 2

Patient  Headache  Temperature  Flu
p1       no        high         yes
p2       yes       high         yes
p3       –         very high    yes
p4       –         normal       no
p5       yes       high         no
p6       –         very high    yes

Table 5. Simplified Table 3

Patient  Muscle-pain  Temperature  Flu
p1       yes          high         yes
p2       no           high         yes
p3       –            very high    yes
p4       –            normal       no
p5       no           high         no
p6       –            very high    yes

The following important property connects reducts and dependency:
a) B' ⇒ B − B', where B' is a reduct of B.
Besides, we have:
b) If B ⇒ C, then B ⇒ C', for every C' ⊆ C; in particular
c) If B ⇒ C, then B ⇒ {a}, for every a ∈ C.
Moreover, we have:
d) If B' is a reduct of B, then neither {a} ⇒ {b} nor {b} ⇒ {a} holds for every a, b ∈ B', i.e., all attributes in a reduct are pairwise independent.

3.7 Indiscernibility Matrices and Functions

To compute reducts and the core easily we will use the discernibility matrix [66], which is defined next.
By a discernibility matrix of B ⊆ A, denoted M(B), we will mean the n × n matrix with entries defined by:

c_ij = {a ∈ B : a(x_i) ≠ a(x_j)} for i, j = 1, 2, . . . , n.   (24)

Thus entry c_ij is the set of all attributes which discern objects x_i and x_j.
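As a quick cross-check of (24), the sketch below (an illustration only, with Headache, Muscle-pain and Temperature abbreviated to H, M, T) computes the entries c_ij of M(B) for the condition attributes of Table 1. Note that this is the plain discernibility matrix; the relative matrix of Table 6 later in this section additionally takes the decision attribute Flu into account, so its entries differ.

```python
from itertools import combinations

# Entries c_ij = {a in B : a(x_i) != a(x_j)} of the discernibility matrix M(B)
# for the condition attributes of Table 1; illustration only.
rows = {
    "p1": {"H": "no",  "M": "yes", "T": "high"},
    "p2": {"H": "yes", "M": "no",  "T": "high"},
    "p3": {"H": "yes", "M": "yes", "T": "very high"},
    "p4": {"H": "no",  "M": "yes", "T": "normal"},
    "p5": {"H": "yes", "M": "no",  "T": "high"},
    "p6": {"H": "no",  "M": "yes", "T": "very high"},
}

matrix = {
    (i, j): {a for a in "HMT" if rows[i][a] != rows[j][a]}
    for i, j in combinations(sorted(rows), 2)
}

for (i, j), entry in sorted(matrix.items()):
    print(i, j, entry or "-")     # e.g. "p2 p5 -": p2 and p5 are indiscernible
```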


The discernibility matrix M(B) assigns to each pair of objects x and y a subset of attributes δ(x, y) ⊆ B with the following properties:

δ(x, x) = ∅,
δ(x, y) = δ(y, x),                                         (25)
δ(x, z) ⊆ δ(x, y) ∪ δ(y, z).

These properties resemble the properties of a semi-distance, and therefore the function δ may be regarded as a qualitative semi-metric and δ(x, y) as a qualitative semi-distance. Thus the discernibility matrix can be seen as a semi-distance (qualitative) matrix.
Let us also note that for every x, y, z ∈ U we have

card(δ(x, x)) = 0,
card(δ(x, y)) = card(δ(y, x)),                             (26)
card(δ(x, z)) ≤ card(δ(x, y)) + card(δ(y, z)).

(27)

Obviously B ⊆ B is a reduct of B, if B is the minimal (with respect to inclusion) subset of B such that B ∩ c = ∅ for any nonempty entry c (c = ∅) in M (B).

(28)

In other words reduct is the minimal subset of attributes that discerns all objects discernible by the whole set of attributes. Every discernibility matrix M (B) deﬁnes uniquely a discernibility (boolean) function f (B) deﬁned as follows. Let us assign to each attribute a ∈ B a binary boolean variable a, and let Σδ(x, y) denote Boolean sum of all Boolean variables assigned to the set of attributes δ(x, y). Then the discernibility function can be deﬁned by the formula {Σδ(x, y) : (x, y) ∈ U 2 and δ(x, y) = ∅}. (29) f (B) = (x,y)∈U 2

The following property establishes the relationship between disjunctive normal form of the function f (B) and the set of all reducts of B. All constituents in the minimal disjunctive normal form of the function f (B) are all reducts of B. In order to compute the value core and value reducts for x we can also use the discernibility matrix as deﬁned before and the discernibility function, which must be slightly modiﬁed: {Σδ(x, y) : y ∈ U and δ(x, y) = ∅}. (30) f x (B) = y∈U

Some Issues on Rough Sets

21

Relative reducts and core can be computed also using discernibility matrix, which needs slight modiﬁcation cij = {a ∈ C : a(xi ) = a(xj ) and w(xi , xj )},

(31)

where w(xi , xj ) ≡ xi ∈ P OSC (D) and xj ∈ P OSC (D) or xi ∈ P OSC (D) and xj ∈ P OSC (D) or xi , xj ∈ P OSC (D) and (xj , xj ) ∈ I(D), for i, j = 1, 2, . . . , n. If the partition deﬁned by D is deﬁnable by C then the condition w(xi , xj ) in the above deﬁnition can be reduced to (xi , xj ) ∈ I(D). Thus entry cij is the set of all attributes which discern objects xi and xj that do not belong to the same equivalence class of the relation I(D). The remaining deﬁnitions need little changes. The D-core is the set of all single element entries of the discernibility matrix MD (C), i.e., CORED (C) = {a ∈ C : cij = (a), for some i, j}.

(32)

Set C ⊆ C is the D-reduct of C, if C is the minimal (with respect to inclusion) subset of C such that C ∩ c = ∅ for any nonempty entry c, (c = ∅) in MD (C).

(33)

Thus D-reduct is the minimal subset of attributes that discerns all equivalence classes of the relation I(D). Every discernibility matrix MD (C) deﬁnes uniquely a discernibility (Boolean) function fD (C) which is deﬁned as before. We have also the following property: All constituents in the disjunctive normal form of the function fD (C) are all D-reducts of C. For computing value reducts and the value core for relative reducts we use as a starting point the discernibility matrix MD (C) and discernibility function will have the form: x fD (C) = {Σδ(x, y) : y ∈ U and δ(x, y) = ∅}. (34) y∈U

Let us illustrate the above considerations by computing relative reducts for the set of attributes {Headache, Muscle-pain, Temperature} with respect to Flu. The corresponding discernibility matrix is shown in Table 6. In Table 6 H, M, T denote Headache, Muscle-pain and Temperature, respectively. The discernibility function for this table is T (H + M )(H + M + T )(M + T ),

22

Zdzislaw Pawlak Table 6. Discernibility matrix 1 2 3 1 2 3 4 T H, M, T 5 H, M M, T 6

4

5

T

H, M, T

6

where + denotes the boolean sum and the boolean multiplication is omitted in the formula. After simplication the discernibility function using laws of Boolean algebra we obtain the following expression T H + T H, which says that there are two reducts T H and T M in the data table and T is the core. 3.8

Significance of Attributes and Approximate Reducts

As it follows from considerations concerning reduction of attributes, they cannot be equally important, and some of them can be eliminated from an information table without losing information contained in the table. The idea of attribute reduction can be generalized by introducing a concept of significance of attributes, which enables us evaluation of attributes not only by two-valued scale, dispensable – indispensable, but by assigning to an attribute a real number from the closed interval [0,1], expressing how important is an attribute in an information table. Signiﬁcance of an attribute can be evaluated by measuring eﬀect of removing the attribute from an information table on classiﬁcation deﬁned by the table. Let us ﬁrst start our consideration with decision tables. Let C and D be sets of condition and decision attributes respectively and let a be a condition attribute, i.e., a ∈ A. As shown previously the number γ(C, D) expresses a degree of consistency of the decision table, or the degree of dependency between attributes C and D, or accuracy of approximation of U/D by C. We can ask how the coeﬃcient γ(C, D) changes when removing the attribute a, i.e., what is the diﬀerence between γ(C, D) and γ(C − {a}, D). We can normalize the diﬀerence and deﬁne the signiﬁcance of the attribute a as σ(C,D) (a) =

(γ(C, D) − γ(C − {a}, D)) γ(C − {a}, D) =1− , γ(C, D) γ(C, D)

(35)

and denoted simple by σ(a), when C and D are understood. Obviously 0 ≤ σ(a) ≤ 1. The more important is the attribute a the greater is the number σ(a). For example for condition attributes in Table 1 we have the following results:

Some Issues on Rough Sets

23

σ(Headache) = 0, σ(Muscle-pain) = 0, σ(Temperature) = 0.75. Because the signiﬁcance of the attribute Temperature or Muscle-pain is zero, removing either of the attributes from condition attributes does not eﬀect the set of consistent decision rules, whatsoever. Hence the attribute Temperature is the most signiﬁcant one in the table. That means that by removing the attribute Temperature, 75% (three out of four) of consistent decision rules will disappear from the table, thus lack of the attribute essentially eﬀects the ”decisive power” of the decision table. For a reduct of condition attributes, e.g., {Headache, Temperature}, we get σ(Headache) = 0.25, σ(Temperature) = 1.00. In this case, removing the attribute Headache from the reduct, i.e., using only the attribute Temperature, 25% (one out of four) of consistent decision rules will be lost, and dropping the attribute Temperature, i.e., using only the attribute Headache 100% (all) consistent decision rules will be lost. That means that in this case making decisions is impossible at all, whereas by employing only the attribute Temperature some decision can be made. Thus the coeﬃcient σ(a) can be understood as an error which occurs when attribute a is dropped. The signiﬁcance coeﬃcient can be extended to set of attributes as follows: ε(C,D) (B) =

γ(C − B, D) (γ(C, D) − γ(C − B, D)) =1− , γ(C, D) γ(C, D)

(36)

denoted by ε(B), if C and D are understood, where B is a subset of C. If B is a reduct of C, then ε(B) = 1, i.e., removing any reduct from a set of decision rules unables to make sure decisions, whatsoever. Any subset B of C will be called an approximate reduct of C, and the number ε(C,D) (B) =

γ(B, D) (γ(C, D) − γ(B, D)) =1− , γ(C, D) γ(C, D)

(37)

denoted simple as ε(B), will be called an error of reduct approximation. It expresses how exactly the set of attributes B approximates the set of condition attributes C. Obviously ε(B) = 1 − σ(B) and ε(B) = 1 − ε(C − B). For any subset B of C we have ε(B) ≤ ε(C). If B is a reduct of C, then ε(B) = 0. For example, either of attributes Headache and Temperature can be considered as approximate reducts of {Headache, Temperature}, and ε(Headache) = 1, ε(Temperature) = 0.25.

24

Zdzislaw Pawlak

But for the whole set of condition attributes {Headache, Muscle-pain, Temperature} we have also the following approximate reduct ε(Headache, Muscle-pain) = 0.75. The concept of an approximate reduct is a generalization of the concept of a reduct considered previously. The minimal subset B of condition attributes C, such that γ(C, D) = γ(B, D), or ε(C,D) (B) = 0 is a reduct in the previous sense. The idea of an approximate reduct can be useful in cases when a smaller number of condition attributes is preferred over accuracy of classiﬁcation.

4 4.1

Rough Sets and Bayes’ Theorem Introduction

Bayes’ theorem is the essence of statistical inference. “The result of the Bayesian data analysis process is the posterior distribution that represents a revision of the prior distribution on the light of the evidence provided by the data” [67]. “Opinion as to the values of Bayes’ theorem as a basic for statistical inference has swung between acceptance and rejection since its publication on 1763” [68]. Rough set theory oﬀers new insight into Bayes’ theorem [69–71]. The look on Bayes’ theorem presented here is completely diﬀerent to that studied so far using the rough set approach (see, e.g., [72–85]) and in the Bayesian data analysis philosophy (see, e.g., [67, 86, 68, 87]). It does not refer either to prior or posterior probabilities, inherently associated with Bayesian reasoning, but it reveals some probabilistic structure of the data being analyzed. It states that any data set (decision table) satisﬁes total probability theorem and Bayes’ theorem. This property can be used directly to draw conclusions from data without referring to prior knowledge and its revision if new evidence is available. Thus in the presented approach the only source of knowledge is the data and there is no need to assume that there is any prior knowledge besides the data. We simple look what the data are telling us. Consequently we do not refer to any prior knowledge which is updated after receiving some data. Moreover, the presented approach to Bayes’ theorem shows close relationship between logic of implications and probability, which was ﬁrst studied by Jan L ukasiewicz [88] (see also [89]). Bayes’ theorem in this context can be used to “invert” implications, i.e., to give reasons for decisions. This is a very important feature of utmost importance to data mining and decision analysis, for it extends the class of problem which can be considered in this domains. Besides, we propose a new form of Bayes’ theorem where basic role plays strength of decision rules (implications) derived from the data. The strength of decision rules is computed from the data or it can be also a subjective assessment. This formulation gives new look on Bayesian method of inference and also simpliﬁes essentially computations.

Some Issues on Rough Sets

4.2

25

Bayes’ Theorem

“In its simplest form, if H denotes an hypothesis and D denotes data, the theorem says that P (H | D) = P (D | H) × P (H)/P (D). (38) With P (H) regarded as a probabilistic statement of belief about H before obtaining data D, the left-hand side P (H | D) becomes an probabilistic statement of belief about H after obtaining D. Having speciﬁed P (D | H) and P (D), the mechanism of the theorem provides a solution to the problem of how to learn from data. In this expression, P (H), which tells us what is known about H without knowing of the data, is called the prior distribution of H, or the distribution of H priori. Correspondingly, P (H | D), which tells us what is known about H given knowledge of the data, is called the posterior distribution of H given D, or the distribution of H a posteriori [87]. “A prior distribution, which is supposed to represent what is known about unknown parameters before the data is available, plays an important role in Bayesian analysis. Such a distribution can be used to represent prior knowledge or relative ignorance” [68]. 4.3

Decision Tables and Bayes’ Theorem

In this section we will show that decision tables satisfy Bayes’ theorem but the meaning of this theorem diﬀers essentially from the classical Bayesian methodology. Every decision table describes decisions (actions, results etc.) determined, when some conditions are satisﬁed. In other words each row of the decision table speciﬁes a decision rule which determines decisions in terms of conditions. In what follows we will describe decision rules more exactly. Let S = (U, C, D) be a decision table. Every x ∈ U determines a sequence c1 (x), . . . , cn (x), d1 (x), . . . , dm (x) where {c1 , . . . , cn } = C and {d1 , . . . , dm } = D The sequence will be called a decision rule induced by x (in S) and denoted by c1 (x), . . . , cn (x) → d1 (x), . . . , dm (x) or in short C →x D. The number suppx (C, D) = card(C(x) ∩ D(x)) will be called a support of the decision rule C →x D and the number σx (C, D) =

suppx(C, D) , card(U )

(39)

will be referred to as the strength of the decision rule C →x D. With every decision rule C →x D we associate a certainty factor of the decision rule, denoted cerx (C, D) and deﬁned as follows: cerx (C, D) = where π(C(X)) =

suppx(C, D) σx (C, D) card(C(x) ∩ D(x)) = = , card(C(x)) card(C(x)) π(C(x))

card(C(x)) card(U) .

(40)

26

Zdzislaw Pawlak

The certainty factor may be interpreted as a conditional probability that y belongs to D(x) given y belongs to C(x), symbolically πx (D | C). If cerx (C, D) = 1, then C →x D will be called a certain decision rule in S; if 0 < cerx (C, D) < 1 the decision rule will be referred to as an uncertain decision rule in S. Besides, we will also use a coverage factor of the decision rule, denoted covx (C, D) deﬁned as covx (C, D) = where π(D(X)) = Similarly

suppx (C, D) σx (C, D) card(C(x) ∩ D(x)) = = , card(D(x)) card(D(x)) π(D(x))

(41)

card(D(x)) card(U) .

covx (C, D) = πx (C | D).

(42)

The certainty and coverage coeﬃcients have been widely used for years by data mining and rough set communities. However, L ukasiewicz [88] (see also [89]) was ﬁrst who used this idea to estimate the probability of implications. If C →x D is a decision rule then C →x D will be called an inverse decision rule. The inverse decision rules can be used to give explanations (reasons) for a decision. Let us observe that C (x) and covx (C, D). cerx (C, D) = πD(x)

(43)

That means that the certainty factor expresses the degree of membership of x to the decision class D(x), given C, whereas the coverage factor expresses the degree of membership of x to condition class C(x), given D. Decision tables have important probabilistic properties which are discussed next. Let C →x D be a decision rule in S and let Γ = C(x) and ∆ = D(x). Then the following properties are valid: cery (C, D) = 1, (44) y∈Γ

covy (C, D) = 1,

(45)

y∈Γ

π(D(x)) =

cery (C, D) · π(C(y)) =

y∈Γ

π(C(x)) =

σy (C, D),

(46)

σy (C, D),

(47)

y∈Γ

covy (C, D) · π(D(y)) =

y∈∆

y∈∆

σx (C, D) σx (C, D) covx (C, D) · π(D(x)) cerx (C, D) = = = , (48) covy (C, D) · π(D(y)) σy (C, D) π(C(x)) y∈Γ

y∈∆

Some Issues on Rough Sets

27

σx (C, D) σx (C, D) cerx (C, D) · π(C(x)) = = . (49) covx (C, D) = cery (C, D) · π(C(y)) σx (C, D) π(D(x)) y∈Γ

y∈Γ

That is, any decision table, satisﬁes (44)-(49). Observe that (46) and (47) refer to the well known total probability theorem, whereas (48) and (49) refer to Bayes’ theorem. Thus in order to compute the certainty and coverage factors of decision rules according to formula (48) and (49) it is enough to know the strength (support) of all decision rules only. The strength of decision rules can be computed from data or can be a subjective assessment. 4.4

Decision Language and Decision Algorithms

It is often useful to describe decision tables in logical terms. To this end we deﬁne a formal language called a decision language. Let S = (U, A) be an information system. With every B ⊆ A we associate a formal language, i.e., a set of formulas F or(B). Formulas of F or(B) are built up from attribute-value pairs (a, v) where a ∈ B and v ∈ Va by means of logical connectives ∧(and), ∨(or), ∼ (not) in the standard way. For any Φ ∈ F or(B) by Φ S we denote the set of all objects x ∈ U satisfying Φ in S and refer to as the meaning of Φ in S. The meaning Φ S of Φ in S is deﬁned inductively as follows: (a, v) S = {x ∈ U : a(v) = x} for all a ∈ B and v ∈ Va , Φ ∧ Ψ S = Φ S ∪ Ψ S , Φ ∧ Ψ S = Φ S ∩ Ψ S , ∼ Φ S = U − Φ S . If S = (U, C, D) is a decision table then with every row of the decision table we associate a decision rule, which is deﬁned next. A decision rule in S is an expression Φ →S Ψ or simply Φ → Ψ if S is understood, read if Φ then Ψ , where Φ ∈ F or(C), Ψ ∈ F or(D) and C, D are condition and decision attributes, respectively; Φ and Ψ are referred to as conditions part and decisions part of the rule, respectively. The number suppS (Φ, Ψ ) = card(( Φ ∧ Ψ S )) will be called the support of the rule Φ → Ψ in S. We consider a probability distribution pU (x) = 1/card(U ) for x ∈ U where U is the (non-empty) universe of objects of S; we have pU (X) = card(X)/card(U ) for X ⊆ U . For any formula Φ we associate its probability in S deﬁned by (50) πS (Φ) = pU ( Φ S ). With every decision rule Φ → Ψ we associate a conditional probability πS (Ψ | Φ) = pU ( Ψ S | Φ S )

(51)

called the certainty factor of the decision rule, denoted cerS (Φ, Ψ ). We have cerS (Φ, Ψ ) = πS (Ψ | Φ) = where Φ S = ∅.

card( Φ ∧ Ψ S ) , card( Φ S )

(52)

28

Zdzislaw Pawlak

If πS (Ψ | Φ) = 1, then Φ → Ψ will be called a certain decision rule; if 0 < πS (Ψ | Φ) < 1 the decision rule will be referred to as a uncertain decision rule. There is an interesting relationship between decision rules and their approximations: certain decision rules correspond to the lower approximation, whereas the uncertain decision rules correspond to the boundary region. Besides, we will also use a coverage factor of the decision rule, denoted covS (Φ, Ψ ) deﬁned by πS (Φ | Ψ ) = pU ( Φ S | Ψ S ).

(53)

Obviously we have covS (Φ, Ψ ) = πS (Φ | Ψ ) =

card( Φ ∧ Ψ S ) . card( Ψ S )

(54)

There are three possibilities to interpret the certainty and the coverage factors: statistical (frequency), logical (degree of truth) and mereological (degree of inclusion). We will use here mainly the statistical interpretation, i.e., the certainty factors will be interpreted as the frequency of objects having the property Ψ in the set of objects having the property Φ and the coverage factor – as the frequency of objects having the property Φ in the set of objects having the property Ψ . Let us observe that the factors are not assumed arbitrarily but are computed from the data. The number σS (Φ, Ψ ) =

suppS (Φ, Ψ ) = πS (Ψ | Φ) · πS (Φ), card(U )

(55)

will be called the strength of the decision rule Φ → Ψ in S. We will need also the notion of an equivalence of formulas. Let Φ, Ψ be formulas in F or(A) where A is the set of attributes in S = (U, A). We say that Φ and Ψ are equivalent in S, or simply, equivalent if S is understood, in symbols Φ ≡ Ψ , if and only if Φ → Ψ and Ψ → Φ. This means that Φ ≡ if and only if Φ S = Ψ S . We need also approximate equivalence of formulas which is deﬁned as follows: Φ ≡S Ψ if and only if cer(Φ, Ψ ) = cov(Φ, Ψ ) = k.

(56)

Besides, we deﬁne also approximate equivalence of formulas with the accuracy ε (0 ≤ ε ≤ 1, which is deﬁned as follows: Φ ≡k,ε Ψ if and only if

k = min{(cer(Φ, Ψ ), cov(Φ, Ψ )}

(57)

and |cer(Φ, Ψ ) − cov(Φ, Ψ )| ≤ ε. Now, we deﬁne the notion of a decision algorithm, which is a logical counterpart of a decision table. Let Dec(S) = {Φi → Ψ }m i=1 , m ≥ 2, be a set of decision rules in a decision table S = (U, C, D).

Some Issues on Rough Sets

29

1) If for every Φ → Ψ , Φ → Ψ ∈ Dec(S) we have Φ = Φ or Φ ∧ Φ S = ∅, and Ψ = Ψ or Ψ ∧ Ψ S = ∅, then we will say that Dec(S) is the set of pairwise mutually exclusive (independent) decision rules in S. m m 2) If Φi S = U and Ψi S = U we will say that the set of decision i=1

i=1

rules Dec(S) covers U. 3) If Φ → Ψ ∈ Dec(S) and suppS (Φ, Ψ ) = 0 we will say that the decision rule Φ → Ψ is admissible in S. C∗ (X) = Φ S , where Dec+ (S) is the set of all 4) If X∈U/D

Φ→Ψ ∈Dec+ (S)

certain decision rules from Dec(S), we will say that the set of decision rules Dec(S) preserves the consistency part of the decision table S = (U, C, D). The set of decision rules Dec(S) that satisﬁes 1), 2) 3) and 4), i.e., is independent, covers U , preserves the consistency of S and all decision rules Φ → Ψ ∈ Dec(S) are admissible in S – will be called a decision algorithm in S. Hence, if Dec(S) is a decision algorithm in S then the conditions of rules from Dec(S) deﬁne in S a partition of U. Moreover, the positive region of D with respect to C, i.e., the set C∗ (X), (58) X∈U/D

is partitioned by the conditions of some of these rules, which are certain in S. If Φ → Ψ is a decision rule then the decision rule Ψ → Ψ will be called an inverse decision rule of Φ → Ψ . Let Dec∗ (S) denote the set of all inverse decision rules of Dec(S). It can be shown that Dec∗ (S) satisﬁes 1), 2), 3) and 4), i.e., it is a decision algorithm in S. If Dec(S) is a decision algorithm then Dec∗ (S) will be called an inverse decision algorithm of Dec(S). The inverse decision algorithm gives reasons (explanations) for decisions pointed out by the decision algorithms. A decision algorithm is a description of a decision table in the decision language. Generation of decision algorithms from decision tables is a complex task and we will not discuss this issue here, for it does not lie in the scope of this paper. The interested reader is advised to consult the references (see, e.g., [18, 66, 90–97, 50, 98–104] and the bibliography in these articles). 4.5

An Example

Let us now consider an example of decision table, shown in Table 7. Attributes Disease, Age and Sex are condition attributes, whereas test is the decision attribute. We want to explain the test result in terms of patients state, i.e., to describe attribute Test in terms of attributes Disease, Age and Sex.

30

Zdzislaw Pawlak Table 7. Exemplary decision table Fact Disease Age Sex 1 yes old man 2 yes middle woman 3 no old man 4 yes old man 5 no young woman 6 yes middle woman

Test Support + 400 + 80 − 100 − 40 − 220 − 60

Table 8. Certainty and coverage factors for decision table shown in Table 7 Fact Strength Certaint Coverage 1 0.44 0.92 0.83 2 0.09 0.56 0.17 3 0.11 1.00 0.24 4 0.04 0.08 0.10 5 0.24 1.00 0.52 6 0.07 0.44 0.14

The strength, certainty and coverage factors for decision table are shown in Table 8. Below a decision algorithm associated with Table 7 is presented. 1) 2) 3) 4) 5)

if if if if if

(Disease, (Disease, (Disease, (Disease, (Disease,

yes) and yes) and no) then yes) and yes) and

(Age, old) then (Test, +); (Age, middle) then (Test, +); (Test, −); (Age, old) then (Test, −); (Age, middle) then (Test, −).

The certainty and coverage factors for the above algorithm are given in Table 9. Table 9. Certainty and coverage factors for the decision algorithm Rule Strength Certaint Coverage 1 0.44 0.92 0.83 2 0.09 0.56 0.17 3 0.36 1.00 0.76 4 0.04 0.08 0.10 5 0.24 0.44 0.14

The certainty factors of the decision rules lead the following conclusions: – – – – –

92% ill and old patients have positive test result, 56% ill and middle age patients more positive test result, all healthy patients have negative test result, 8% ill and old patients have negative test result, 44% ill and old patients have negative test result.

Some Issues on Rough Sets

31

In other words: – ill and old patients most probably have positive test result (probability = 0.92), – ill and middle age patients most probably have positive test result (probability = 0.56), – healthy patients have certainly negative test result (probability = 1.00). Now let us examine the inverse decision algorithm, which is given below: 1’) 2’) 3’) 4’) 5’)

if if if if if

(Test, (Test, (Test, (Test, (Test,

+) +) −) −) −)

then then then then then

(Disease, (Disease, (Disease, (Disease, (Disease,

yes) yes) no); yes) yes)

and (Age, old); and (Age, middle); and (Age, old); and (Age, middle).

Employing the inverse decision algorithm and the coverage factor we get the following explanation of test results: – reason for positive test results are most probably patients disease and old age (probability = 0.83), – reason for negative test result is most probably lack of the disease (probability = 0.76). It follows from Table 7 that there are two interesting approximate equivalences of test results and the disease. According to rule 1) the disease and old age are approximately equivalent to positive test result (k = 0.83, ε = 0.11), and lack of the disease according to rule 3) is approximately equivalent to negative test result (k = 0.76, ε = 0.24).

5 5.1

Rough Sets and Conflict Analysis Introduction

Knowledge discovery in databases considered in the previous sections boiled down to searching for functional dependencies in the data set. In this section we will discuss another kind of relationship in the data – not dependencies, but conﬂicts. Formally, the conﬂict relation can be seen as a negation (not necessarily, classical) of indiscernibility relation which was used as a basis of rough set theory. Thus dependencies and conﬂict are closely related from logical point of view. It turns out that the conﬂict relation can be used to the conﬂict analysis study. Conﬂict analysis and resolution play an important role in business, governmental, political and lawsuits disputes, labor-management negotiations, military operations and others. To this end many mathematical formal models of conﬂict situations have been proposed and studied, e.g., [105–110].

32

Zdzislaw Pawlak

Various mathematical tools, e.g., graph theory, topology, diﬀerential equations and others, have been used to that purpose. Needless to say that game theory can be also considered as a mathematical model of conﬂict situations. In fact there is no, as yet, “universal” theory of conﬂicts and mathematical models of conﬂict situations are strongly domain dependent. We are going to present in this paper still another approach to conﬂict analysis, based on some ideas of rough set theory – along the lines proposed in [110]. We will illustrate the proposed approach by means of a simple tutorial example of voting analysis in conﬂict situations. The considered model is simple enough for easy computer implementation and seems adequate for many real life applications but to this end more research is needed. 5.2

Basic Concepts of Conflict Theory

In this section we give after [110] deﬁnitions of basic concepts of the proposed approach. Let us assume that we are given a ﬁnite, non-empty set U called the universe. Elements of U will be referred to as agents. Let a function v : U → {−1, 0, 1}, or in short {−, 0, +}, be given assigning to every agent the number −1, 0 or 1, representing his opinion, view, voting result, etc. about some discussed issue, and meaning against, neutral and favorable, respectively. The pair S = (U, v) will be called a conflict situation. In order to express relations between agents we deﬁne three basic binary relations on the universe: conflict, neutrality and alliance. To this end we ﬁrst deﬁne the following auxiliary function: 1, if v(x)v(y) = 1 or x = y (59) φv (x, y) = 0, if v(x)v(y) = 0 and x = y −1, if v(x)v(y) = −1. This means that, if φv (x, y) = 1, agents x and y have the same opinion about issue v (are allied) on v); if φv (x, y) = 0 means that at least one agent x or y has neutral approach to issue a (is neutral on a), and if φv (x, y) = −1, means that both agents have diﬀerent opinions about issue v (are in conflict on v). In what follows we will deﬁne three basic relations Rv+ ,Rv0 and Rv− on U 2 called alliance, neutrality and conflict relations respectively, and deﬁned as follows: Rv+ (x, y) iﬀ φv (x, y) = 1,

(60)

Rv0 (x, y) iﬀ φv (x, y) = 0, Rv− (x, y) iﬀ φv (x, y) = −1. It is easily seen that the alliance relation has the following properties: Rv+ (x, x), Rv+ (x, y) implies Rv+ (y, x), Rv+ (x, y) and Rv+ (y, z) implies Rv+ (x, z),

(61)

Some Issues on Rough Sets

33

i.e., Rv+ is an equivalence relation. Each equivalence class of alliance relation will be called coalition with respect to v. Let us note that the last condition in (61) can be expressed as “a friend of my friend is my friend”. For the conﬂict relation we have the following properties: not Rv− (x, x),

Rv− (x, y) Rv− (x, y) Rv− (x, y)

(62) Rv− (y, x),

implies and Rv− (y, z) implies Rv+ (x, z), and Rv+ (y, z) implies Rv− (x, z).

The last two conditions in (62) refer to well known sayings “an enemy of my enemy is my friend” and “a friend of my enemy is my enemy”. For the neutrality relation we have: not Rv0 (x, x), Rv0 (x, y) = Rv0 (y, x).

(63)

Let us observe that in the conﬂict and neutrality relations there are no coalitions. The following property holds: Rv+ ∪ Rv0 ∪ Rv− = U 2 because if (x, y) ∈ U 2 then Φv (x, y) = 1 or Φv (x, y) = 0 or Φv (x, y) = −1 so (x, y) ∈ Rv+ or (x, y) ∈ Rv− or (x, y) ∈ Rv− . All the three relations Rv+ , Rv0 , Rv− are pairwise disjoint, i.e., every pair of objects (x, y) belongs to exactly one of the above deﬁned relations (is in conﬂict, is allied or is neutral). With every conﬂict situation we will associate a conflict graph GS = (Rv+ , Rv0 , Rv− ).

(64)

An example of a conﬂict graph is shown in Figure 3. Solid lines are denoting conﬂicts, doted line – alliance, and neutrality, for simplicity, is not shown explicitly in the graph. Of course, B, C, and D form a coalition.

Fig. 3. Exemplary conﬂict graph

34

5.3

Zdzislaw Pawlak

An Example

In this section we will illustrate the above presented ideas by means of a very simple tutorial example using concepts presented in the previous. Table 10 presents a decision table in which the only condition attribute is Party, whereas the decision attribute is Voting. The table describes voting results in a parliament containing 500 members grouped in four political parties denoted A, B, C and D. Suppose the parliament discussed certain issue (e.g., membership of the country in European Union) and the voting result is presented in column Voting, where +, 0 and − denoted yes, abstention and no respectively. The column support contains the number of voters for each option. Table 10. Decision table with one condition attribute Party and the decision Voting Fact Party Voting Support 1 A + 200 2 A 0 30 3 A − 10 4 B + 15 5 B − 25 6 C 0 20 7 C − 40 8 D + 25 9 D 0 35 10 D − 100 Table 11. Certainty and the coverage factors for Table 10 Fact Strength Certainty Coverage 1 0.40 0.83 0.83 2 0.06 0.13 0.35 3 0.02 0.04 0.06 4 0.03 0.36 0.06 5 0.05 0.63 0.14 6 0.04 0.33 0.23 7 0.08 0.67 0.23 8 0.05 0.16 0.10 9 0.07 0.22 0.41 10 0.20 0.63 0.57

The strength, certainty and the coverage factors for Table 10 are given in Table 11. From the certainty factors we can conclude, for example, that: – 83.3% of party A voted yes, – 12.5% of party A abstained, – 4.2% of party A voted no.

Some Issues on Rough Sets

35

From the coverage factors we can get, for example, the following explanation of voting results: – 83.3% yes votes came from party A, – 6.3% yes votes came from party B, – 10.4% yes votes came from party C.

6 6.1

Data Analysis and Flow Graphs Introduction

Pursuit for data patterns considered so far referred to data tables. In this section we will consider data represented not in a form of data table but by means of graphs. We will show that this method od data representation leads to a new look on knowledge discovery, new eﬃcient algorithms, and vide spectrum of novel applications. The idea presented here are based on some concepts given by L ukasiewicz [88]. In [88] L ukasiewicz proposed to use logic as mathematical foundation of probability. He claims that probability is “purely logical concept” and that his approach frees probability from its obscure philosophical connotation. He recommends to replace the concept of probability by truth values of indefinite propositions, which are in fact propositional functions. Let us explain this idea more closely. Let U be a non empty ﬁnite set, and let Φ(x) be a propositional function. The meaning of Φ(x) in U , denoted by Φ(x), is the set of all elements of U , that satisﬁes Φ(x) in U. The truth value of Φ(x) is deﬁned by card(Φ(x))/card(U ). For example, if U = {1, 2, 3, 4, 5, 6} and Φ(x) is the propositional function x > 4, then the truth value of Φ(x) = 2/6 = 1/3. If the truth value of Φ(x) is 1, then the propositional function is true, and if it is 0, then the function is false. Thus the truth value of any propositional function is a number between 0 and 1. Further, it is shown that the truth values can be treated as probability and that all laws of probability can be obtained by means of logical calculus. In this paper we show that the idea of L ukasiewicz can be also expressed diﬀerently. Instead of using truth values in place of probability, stipulated by L ukasiewicz, we propose, in this paper, using of deterministic ﬂow analysis in ﬂow networks (graphs). In the proposed setting, ﬂow is governed by some probabilistic rules (e.g., Bayes’ rule), or by the corresponding logical calculus proposed by L ukasiewicz, though, the formulas have entirely deterministic meaning, and need neither probabilistic nor logical interpretation. They simply describe ﬂow distribution in ﬂow graphs. However, ﬂow graphs introduced here are diﬀerent from those proposed by Ford and Fulkerson [111] for optimal ﬂow analysis, because they model rather, e.g., ﬂow distribution in a plumbing network, than the optimal ﬂow. The ﬂow graphs considered in this paper are basically meant not to physical media (e.g., water) ﬂow analysis, but to information ﬂow examination in decision algorithms. To this end branches of a ﬂow graph are interpreted as decision

36

Zdzislaw Pawlak

rules. With every decision rule (i.e. branch) three coeﬃcients are associated, the strength, certainty and coverage factors. In classical decision algorithms language they have probabilistic interpretation. Using L ukasiewicz’s approach we can understand them as truth values. However, in the proposed setting they can be interpreted simply as ﬂow distribution ratios between branches of the ﬂow graph, without referring to their probabilistic or logical nature. This interpretation, in particular, leads to a new look on Bayes’ theorem, which in this setting, has entirely deterministic explanation (see also [86]). The presented idea can be used, among others, as a new tool for data analysis, and knowledge representation. We start our considerations giving fundamental deﬁnitions of a ﬂow graph and related notions. Next, basic properties of ﬂow graphs are deﬁned and investigated. Further, the relationship between ﬂow graphs and decision algorithms is discussed. Finally, a simple tutorial example is used to illustrate the consideration. 6.2

Flow Graphs

A ﬂow graph is a directed, acyclic, ﬁnite graph G = (N, B, φ), where N is a set of nodes, B ⊆ N × N is a set of directed branches, φ : B → R+ is a flow function and R+ is the set of non-negative reals. If (x, y) ∈ B then x is an input of y and y is an output of x. If x ∈ N then I(x) is the set of all inputs of x and O(x) is the set of all outputs of x. Input and output of a graph G are deﬁned I(G) = {x ∈ N : I(x) = ∅}, O(G) = {x ∈ N : O(x) = ∅}. Inputs and outputs of G are external nodes of G; other nodes are internal nodes of G. If (x, y) ∈ B then φ(x, y) is a troughflow from x to y. We will assume in what follows that φ(x, y) = 0 for every (x, y) ∈ B. With every node x of a ﬂow graph G we associate its inflow φ(y, x), (65) φ+ (x) = y∈I(x)

and outflow φ− (x) =

φ(x, y).

(66)

y∈O(x)

Similarly, we deﬁne an inﬂow and an outﬂow for the whole ﬂow graph G, which are deﬁned as φ− (x), (67) φ+ (G) = x∈I(G)

φ− (G) =

x∈O(G)

φ+ (x).

(68)

Some Issues on Rough Sets

37

We assume that for any internal node x, φ+ (x) = φ− (x) = φ(x), where is a troughflow of node x. Obviously,φ+ (G) = φ− (G) = φ(G) , where φ(G) is a troughflow of graph G. The above formulas can be considered as flow conservation equations [111]. We will deﬁne now a normalized flow graph. A normalized ﬂow graph is a directed, acyclic, finite graph G = (N, B, σ), where N is a set of nodes, B ⊆ N × N is a set of directed branches and σ : B →< 0, 1 > is a normalized flow of (x, y) and σ(x, y) =

σ(x, y) , σ(G)

(69)

is strength of (x, y). Obviously, 0 ≤ σ(x, y) ≤ 1. The strength of the branch expresses simply the percentage of a total ﬂow through the branch. In what follows we will use normalized ﬂow graphs only, therefore by a ﬂow graphs we will understand normalized ﬂow graphs, unless stated otherwise. With every node x of a ﬂow graph G we associate its normalized inflow and outflow deﬁned as σ+ (x) =

φ+ (x) = σ(y, x), φ(G)

(70)

φ− (x) = σ(y, x). φ(G)

(71)

y∈I(x)

σ− (x) =

y∈O(x)

Obviously for any internal node x, we have σ+ (X) = σ− = σ(x), where σ(x) is a normalized troughflow of x. Moreover, let σ+ (G) =

φ+ (G) = σ− (x), φ(G)

(72)

φ− (G) = σ+ (x). φ(G)

(73)

x∈I(G)

σ− (G) =

x∈O(G)

Obviously, σ+ (G) = σ− (G) = σ(G) = 1. 6.3

Certainty and Coverage Factors

With every branch (x, y) of a ﬂow graph G we associate the certainty and the coverage factors. The certainty and the coverage of are deﬁned as cer(x, y) =

σ(x, y) , σ(x)

(74)

38

Zdzislaw Pawlak

and cov(x, y) =

σ(x, y) . σ(y)

(75)

respectively, where σ(x) = 0 and σ(y) = 0. Below some properties, which are immediate consequences of deﬁnitions given above are presented: cer(x, y) = 1, (76) y∈O(x)

cov(x, y) = 1,

(77)

y∈I(y)

σ(x) =

cer(x, y)σ(x) =

y∈O(x)

σ(y) =

σ(x, y),

(78)

σ(x, y),

(79)

y∈O(x)

cov(x, y)σ(y) =

x∈I(y)

x∈I(y)

cer(x, y) =

cov(x, y)σ(y) , σ(x)

(80)

cov(x, y) =

cer(x, y)σ(x) . σ(y)

(81)

Obviously the above properties have a probabilistic ﬂavor, e.g., equations (78) and (79) have a form of total probability theorem, whereas formulas (80) and (81) are Bayes’ rules. However, these properties in our approach are interpreted in a deterministic way and they describe ﬂow distribution among branches in the network. A (directed) path from x to y, x = y in G is a sequence of nodes x1 , . . . , xn such that x1 = x, xn = y and (xi , xi+1 ) ∈ B for every i, 1 ≤ i ≤ n − 1. A path from x to y is denoted by [x . . . y]. The certainty, the coverage and the strength of the path [x1 . . . xn ] are deﬁned as cer[x1 . . . xn ] =

n−1

cer(xi , xi+1 ),

(82)

cov(xi , xi+1 ),

(83)

i=1

cov[x1 . . . xn ] =

n−1 i=1

σ[x . . . y] = σ(x)cer[x . . . y] = σ(y)cov[x . . . y],

(84)

Some Issues on Rough Sets

39

respectively. The set of all paths from x to y(x = y) in G denoted < x, y >, will be called a connection from x to y in G. In other words, connection < x, y > is a sub-graph of G determined by nodes x and y. For every connection < x, y > we deﬁne its certainty, coverage and strength as shown below: cer < x, y >= cer[x . . . y], (85) [x...y]∈<x,y>

the coverage of the connection < x, y > is cov < x, y >=

cov[x . . . y],

(86)

[x...y]∈<x,y>

and the strength of the connection < x, y > is σ[x . . . y] = σ(x)cer < x, y >= σ(y)cov < x, y > .(87) σ < x, y >= [x...y]∈<x,y>

Let [x . . . y] be a path such that x and y are input and output of the graph G, respectively. Such a path will be referred to as complete. The set of all complete paths from x to y will be called a complete connection from x to y in G. In what follows we will consider complete paths and connections only, unless stated otherwise. Let x and y be an input and output of a graph G respectively. If we substitute for every complete connection < x, y > in G a single branch (x, y) such σ(x, y) = σ < x, y >, cer(x, y) = cer < x, y >, cov(x, y) = cov < x, y > then we obtain a new ﬂow graph G such that σ(G) = σ(G ). The new ﬂow graph will be called a combined ﬂow graph. The combined ﬂow graph for a given ﬂow graph represents a relationship between its inputs and outputs. 6.4

Dependencies in Flow Graphs

Let (x, y) ∈ B. Nodes x and y are independent on each other if σ(x, y) = σ(x)σ(y).

(88)

σ(x, y) = cer(x, y) = σ(y), σ(x)

(89)

σ(x, y) = cov(x, y) = σ(x). σ(y)

(90)

Consequently

and

This idea refers to some concepts proposed by L ukasiewicz [88] in connection with statistical independence of logical formulas.

40

Zdzislaw Pawlak

If cer(x, y) > σ(y),

(91)

cov(x, y) > σ(x),

(92)

or

then x and y depend positively on each other. Similarly, if cer(x, y) < σ(y),

(93)

cov(x, y) < σ(x),

(94)

or

then x and y depend negatively on each other. Let us observe that relations of independency and dependencies are symmetric ones, and are analogous to that used in statistics. For every (x, y) ∈ B we deﬁne a dependency factor η(x, y) deﬁned as η(x, y) =

cer(x, y) − σ(y) cov(x, y) − σ(x) = . cer(x, y) + σ(y) cov(x, y) + σ(x)

(95)

It is easy to check that if η(x, y) = 0, then x and y are independent on each other, if −1 < η(x, y) < 0, then x and y are negatively dependent and if 0 < η(x, y) < 1 then x and y are positively dependent on each other. Thus the dependency factor expresses a degree of dependency, and can be seen as a counterpart of correlation coeﬃcient used in statistics (see also [112]). 6.5

An Example

Now we will illustrate ideas introduced in the previous sections by means of a simple example concerning votes distribution of various age groups and social classes of voters between political parties. Consider three disjoint age groups of voters y1 (old), y2 (middle aged) and y3 (young) – belonging to three social classes x1 (high), x2 (middle) and x3 (low). The voters voted for four political parties z1 (Conservatives), z2 (Labor), z3 (Liberal Democrats) and z4 (others). Social class and age group votes distribution is shown in Figure 4. First we want to ﬁnd votes distribution with respect to age group. The result is shown in Figure 5. From the ﬂow graph presented in Figure 5 we can see that, e.g., party z1 obtained 19% of total votes, all of them from age group y1 ; party z2 – 44% votes, which 82% are from age group y2 and 18% – from age group y3 , etc. If we want to know how votes are distributed between parties with respects to social classes we have to eliminate age groups from the ﬂow graph. Employing the algorithm presented in Section 6.3 we get results shown in Figure 6.

Some Issues on Rough Sets

41

Fig. 4. Social class and age group votes distribution

From the ﬂow graph presented in Figure 6 we can see that party z1 obtained 22% votes from social class x1 and 78% – from social class x2 , etc. We can also present the obtained results employing decision rules. For simplicity we present only some decision rules of the decision algorithm. For example, from Figure 5 we obtain decision rules: If Party (z1 ) then Age group (y1 )(0.19); If Party (z2 ) then Age group (y2 (0.36); If Party (z2 ) then Age group (y3 )(0.08), etc. The number at the end of each decision rule denotes strength of the rule. Similarly, from Figure 6 we get: If Party (z1 ) then Soc. class (x1 )(0.04); If Party (z1 ) then Soc. class (x2 )(0.14), etc.

Fig. 5. Votes distribution with respect to the age group

42

Zdzislaw Pawlak

Fig. 6. Votes distribution between parties with respects to the social classes

From Figure 6 we have: If Soc. class (x1 ) then Party (z1 )(0.04); If Soc. class (x1 ) then Party (z2 )(0.02); If Soc. class (x1 ) then Party (z3 )(0.04), etc. Dependencies between Social class and Parties are shown in Figure 6. 6.6

An Example

In this section we continue the example from Section 5.3. The ﬂow graph associated with Table 11 is shown in Figure 7. Branches of the ﬂow graph represent decision rules together with their certainty and coverage factors. For example, the decision rule A → 0 has the certainty and coverage factors 0.13 and 0.35, respectively. The ﬂow graph gives a clear insight into the voting structure of all parties. For many applications exact values of certainty of coverage factors of decision rules are not necessary. To this end we introduce “approximate” decision rules, denoted C D and read C mostly implies D. C D if and only if cer(C, D) > 0.5. Thus, we can replace ﬂow graph shown in Figure 7 by approximate ﬂow graph presented in Figure 8. From this graph we can see that parties B, C and D form a coalition, which is in conﬂict with party A, i.e., every member of the coalition is in conﬂict with party A. The corresponding conﬂict graph is shown in Figure 9. Moreover, from the ﬂow graph shown in Figure 7 we can obtain an “inverse” approximate ﬂow graph which is shown in Figure 10. This ﬂow graph contains all inverse decision rules with certainty factor greater than 0.5. From this graph we can see that yes votes were obtained mostly from party A and no votes – mostly from party D.

Some Issues on Rough Sets

43

Fig. 7. Flow graph for Table 11

Fig. 8. “Approximate” ﬂow graph

Fig. 9. Conﬂict graph

We can also compute dependencies between parties and voting results the results are shown in Figure 11. 6.7

Decision Networks

Ideas given in the previous sections can be also presented in logical terms, as shown in what follows.

44

Zdzislaw Pawlak

Fig. 10. An “inverse” approximate ﬂow graph

Fig. 11. Dependencies between parties and voting results

The main problem in data mining consists in discovering patterns in data. The patterns are usually expressed in form of decision rules, which are logical expressions in the form if Φ then Ψ , where Φ and Ψ are logical formulas (propositional functions) used to express properties of objects of interest. Any set of decision rules is called a decision algorithm. Thus knowledge discovery from data consists in representing hidden relationships between data in a form of decision algorithms. However, for some applications, it is not enough to give only set of decision rules describing relationships in the database. Sometimes also knowledge of relationship between decision rules is necessary in order to understand better data structures. To this end we propose to employ a decision algorithm in which also relationship between decision rules is pointed out, called a decision network. The decision network is a ﬁnite, directed acyclic graph, nodes of which represent logical formulas, whereas branches – are interpreted as decision rules. Thus

Some Issues on Rough Sets

45

every path in the graph represents a chain of decisions rules, which will be used to describe compound decisions. Some properties of decision networks will be given and a simple example will be used to illustrate the presented ideas and show possible applications. Let U be a non empty ﬁnite set, called the universe and let Φ , Ψ be logical formulas. The meaning of Φ in U , denoted by Φ, is the set of all elements of U , that satisﬁes Φ in U. The truth value of Φ denoted val(Φ) is deﬁned as card(Φ)/card(U ), where card(X) denotes cardinality of X and F is a set of formulas. By decision network over S = (U, F ) we mean a pair N = (F , R), where R ⊆ F × F is a binary relation, called a consequence relation and F is a set of logical formulas. Any pair (Φ, Ψ ) ∈ R, Φ = Ψ is referred to as a decision rule (in N ). We assume that S is known and we will not refer to it in what follows. A decision rule (Φ, Ψ ) will be also presented as an expression Φ → Ψ , read if Φ then Ψ , where Φ and Ψ are referred to as predecessor (conditions) and successor (decisions) of the rule, respectively. The number supp(Φ, Ψ ) = card(Φ ∧ Ψ ) will be called a support of the rule Φ → Ψ . We will consider nonvoid decision rules only, i.e., rules such that supp(Φ, Ψ ) = 0. With every decision rule Φ → Ψ we associate its strength deﬁned as str(Φ, Ψ ) =

supp(Φ, Ψ ) . card(U )

(96)

Moreover, with every decision rule Φ → Ψ we associate the certainty factor deﬁned as str(Φ, Ψ ) cer(Φ, Ψ ) = , (97) val(Φ) and the coverage factor of Φ → Ψ cov(Φ, Ψ ) =

str(Φ, Ψ ) , val(Ψ )

(98)

where val(Φ) = 0 and val(Ψ ) = 0. The coeﬃcients can be computed from data or can be a subjective assessment. We assume that val(Φ) = str(Φ, Ψ ) (99) Ψ ∈Suc(Φ)

and val(Ψ ) =

str(Φ, Ψ ),

(100)

Φ∈P re(Ψ )

where Suc(Φ) and P re(Ψ ) are sets of all successors and predecessors of the corresponding formulas, respectively. Consequently we have cer(φ, Ψ ) = cov(Φ, Ψ ) = 1. (101) Suc(Φ)

P re(Ψ )

46

Zdzislaw Pawlak

If a decision rule Φ → Ψ uniquely determines decisions in terms of conditions, i.e., if cer(Φ, Ψ ) = 1, then the rule is certain, otherwise the rule is uncertain. If a decision rule Φ → Ψ covers all decisions, i.e., if cov(Φ, Ψ ) = 1 then the decision rule is total, otherwise the decision rule is partial. Immediate consequences of (97) and (98) are: cer(Φ, Ψ ) =

cov(Φ, Ψ )val(Ψ ) , val(Φ)

(102)

cov(Φ, Ψ ) =

cer(Φ, Ψ )val(Φ) . val(Ψ )

(103)

Note, that (102) and (103) are Bayes’ formulas. This relationship, as mentioned previously, ﬁrst was observed by L ukasiewicz [88]. Any sequence of formulas Φ1 , . . . , Φn , Φi ∈ F and for every i, 1 ≤ i ≤ n − 1, (Φi , Φi+1 ) ∈ R will be called a path from Φ1 to Φn and will be denoted by [Φ1 . . . Φn ]. We deﬁne n−1 cer[Φi , Φi+1 ], (104) cer[Φ1 . . . Φn ] = i=1

cov[Φ1 . . . Φn ] =

n−1

cov[Φi , Φi+1 ],

(105)

i=1

str[Φ1 . . . Φn ] = val(Φ1 )cer[Φ1 . . . Φn ] = val(Φn )cov[Φ1 . . . Φn ].

(106)

The set of all paths form Φ to Ψ , denoted < Φ, Ψ >, will be called a connection from Φ to Ψ. For connection we have cer[Φ . . . Ψ ], (107) cer < Φ, Ψ >= [Φ...Ψ ]∈

cov < Φ, Ψ >=

cov[Φ . . . Ψ ],

(108)

[Φ...Ψ ]∈

str < Φ, Ψ > =

str[Φ . . . Ψ ] =

[Φ...Ψ ]∈

= val(Φ)cer < Φ, Ψ >= val(Ψ )cov < Φ, Ψ > .

(109)

With every decision network we can associate a ﬂow graph [70, 71]. Formulas of the network are interpreted as nodes of the graph, and decision rules – as directed branches of the ﬂow graph, whereas strength of a decision rule is interpreted as ﬂow of the corresponding branch.

Some Issues on Rough Sets

47

Let Φ → Ψ be a decision rule. Formulas Φ and Ψ are independent on each other if str(Φ, Ψ ) = val(Φ)val(Ψ ).

(110)

str(Φ, Ψ ) = cer(Φ, Ψ ) = val(Ψ ), val(Φ)

(111)

str(Φ, Ψ ) = cov(Φ, Ψ ) = val(Φ). val(Ψ )

(112)

cer(Φ, Ψ ) > val(Ψ ),

(113)

cov(Φ, Ψ ) > val(Φ),

(114)

Consequently

and

If

or

then Φ and Ψ depend positively on each other. Similarly, if cer(Φ, Ψ ) < val(Ψ ),

(115)

cov(Φ, Ψ ) < val(Φ),

(116)

or

then Φ and Ψ depend negatively on each other. For every decision rule Φ → Ψ we deﬁne a dependency factor η(Φ, Ψ ) deﬁned as η(Φ, Ψ ) =

cov(Φ, Ψ ) − val(Φ) cer(Φ, Ψ ) − val(Ψ ) = . cer(Φ, Ψ ) + val(Ψ ) cov(Φ, Ψ ) + val(Φ)

(117)

It is easy to check that if η(Φ, Ψ ) = 0, then Φ and Ψ are independent on each other, if −1 < η(Φ, Ψ ) < 0, then Φ and Ψ are negatively dependent and if 0 < η(Φ, Ψ ) < 1 then Φ and Ψ are positively dependent on each other. 6.8

An Example

Flow graphs given in Figures 4–6 can be now presented as shown in Figures 12– 14, respectively. These ﬂow graphs show clearly the relational structure between formulas involved in the voting process.

48

Zdzislaw Pawlak

Fig. 12. Decision network for ﬂow graph from Figure 4

Fig. 13. Decision network for ﬂow graph from Figure 5

6.9

Inference Rules and Decision Rules

In this section we are going to show relationship between previously discussed concepts and reasoning schemes used in logical inference. Basic rules of inference used in classical logic are Modus Ponens (MP) and Modus Tollens (MT). These two reasoning patterns start from some general knowledge about reality, expressed by true implication, ”if Φ then Ψ ”. Then basing on true premise Φ we arrive at true conclusion Ψ (MP), or if negation of conclusion Ψ is true we infer that negation of premise Φ is true (MT). In reasoning from data (data mining) we also use rules if Φ then Ψ , called decision rules, to express our knowledge about reality, but the meaning of decision rules is diﬀerent. It does not express general knowledge but refers to partial facts. Therefore decision rules are not true or false but probable (possible) only.

Some Issues on Rough Sets

49

Fig. 14. Decision network for ﬂow graph from Figure 6

In this paper we compare inference rules and decision rules in the context of decision networks, proposed by the author as a new approach to analyze reasoning patterns in data. Decision network is a set of logical formulas F together with a binary relation over the set R ⊆ F × F of formulas, called a consequence relation. Elements of the relation are called decision rules. The decision network can be perceived as a directed graph, nodes of which are formulas and branches – are decision rules. Thus the decision network can be seen as a knowledge representation system, revealing data structure of a data base. Discovering patterns in the database represented by a decision network boils down to discovering some patterns in the network. Analogy to the modus ponens and modus tollens inference rules will be shown and discussed. Classical rules of inference used in logic are Modus Ponens and Modus Tollens, which have the form if Φ → Ψ is true and Φ is true then Ψ is true and if Φ → Ψ is true and ∼ Ψ is true then ∼ Φ is true respectively.

50

Zdzislaw Pawlak

Modus Ponens allows us to obtain true consequences from true premises, whereas Modus Tollens yields true negation of premise from true negation of conclusion. In reasoning about data (data analysis) the situation is diﬀerent. Instead of true propositions we consider propositional functions, which are true to a “degree”, i.e., they assume truth values which lie between 0 and 1, in other words, they are probable, not true. Besides, instead of true inference rules we have now decision rules, which are neither true nor false. They are characterized by three coeﬃcients, strength, certainty and coverage factors. Strength of a decision rule can be understood as a counterpart of truth value of the inference rule, and it represents frequency of the decision rule in a database. Thus employing decision rules to discovering patterns in data boils down to computation probability of conclusion in terms of probability of the premise and strength of the decision rule, or – the probability of the premise from the probability of the conclusion and strength of the decision rule. Hence, the role of decision rules in data analysis is somehow similar to classical inference patterns, as shown by the schemes below. Two basic rules of inference for data analysis are as follows: if Φ→Ψ and Φ then Ψ

has cer(Φ, Ψ ) and cov(Φ, Ψ ) is true with the probability val(Φ) is true with the probability val(Ψ ) = αval(Φ).

Similarly if and then

Φ→Ψ Ψ Φ

has cer(Φ, Ψ ) and cov(Φ, Ψ ) is true with the probability val(Ψ ) is true with the probability val(Φ) = α−1 val(Φ).

The above inference rules can be considered as counterparts of Modus Ponens and Modus Tollens for data analysis and will be called Rough Modus Ponens (RMP) and Rough Modus Tollens (RMT), respectively. There are however essential diﬀerences between MP (MT) and RMP (RMT). First, instead of truth values associated with inference rules we consider certainly and coverage factors (conditional probabilities) assigned to decision rules. Second, in the case of decision rules, in contrast to inference rules, truth value of a conclusion (RMP) depends not only on a single premise but in fact depends on truth values of premises of all decision rules having the same conclusions. Similarly, for RMT. Let us also notice that inference rules are transitive, i.e., if Φ → Ψ and Ψ → Θ then Φ → Θ and decision rules are not. If Φ → Ψ and Ψ → Θ, then we have to compute the certainty, coverage and strength of the rule Φ → Θ, employing formulas (104),(105),(107),(108). This shows clearly the diﬀerence between reasoning patterns using classical inference rules in logical reasoning and using decision rules in reasoning about data.

Some Issues on Rough Sets

6.10

51

An Example

Suppose that three models of cars Φ1 , Φ2 and Φ3 are sold to three disjoint groups of customers Θ1 , Θ2 and Θ3 through four dealers Ψ1 , Ψ2 , Ψ3 and Ψ4 . Moreover, let us assume that car models and dealers are distributed as shown in Figure 15. Applying RMP to data shown in Figure 15 we get results shown in Figure 16. In order to ﬁnd how car models are distributed among customer

Fig. 15. Distributions of car models and dealers

Fig. 16. The result of application of RMP to data from Figure 15

52

Zdzislaw Pawlak

Fig. 17. Distribution of car models among customer groups

groups we have to compute all connections among cars models and consumers groups, i.e., to apply RMP to data given in Figure 16. The results are shown in Figure 17. For example, we can see from the decision network that consumer group Θ2 bought 21% of car model Φ1 , 35% of car model Φ2 and 44% of car model Φ3 . Conversely, for example, car model Φ1 is distributed among customer groups as follows: 31% cars bought group Θ1 , 57% group Θ2 and 12% group Θ3 .

7

Summary

Basic concept of mathematics, the set, leads to antinomies, i.e., it is contradictory. This deﬁciency of sets, has rather philosophical than practical meaning, for sets used in mathematics are free from the above discussed faults. Antinomies are associated with very “artiﬁcial” sets constructed in logic but not found in sets used in mathematics. That is why we can use mathematics safely. Philosophically, fuzzy set theory and rough set theory are two diﬀerent approaches to vagueness and are not remedy for classical set theory diﬃculties. Both theories represent two diﬀerent approaches to vagueness. Fuzzy set theory addresses gradualness of knowledge, expressed by the fuzzy membership whereas rough set theory addresses granularity of knowledge, expressed by the indiscernibility relation. Practically, rough set theory can be viewed as a new method of intelligent data analysis. Rough set theory has found many applications in medical data analysis, ﬁnance, voice recognition, image processing, and others. However the approach presented in this paper is too simple to many real-life applications and was extended in many ways by various authors. The detailed discussion of the above issues can be found in be found in books (see, e.g., [18–27, 12, 28–30]), special issues of journals (see, e.g., [31–34, 34–38]), proceedings of international conferences (see, e.g., [39–49] ), tutorials (e.g., [50–53]), and on the internet (see, e.g., www.roughsets.org, logic.mimuw.edu.pl,rsds.wsiz.rzeszow.pl).


Besides, rough set theory has inspired a new look at Bayes’ theorem. Bayesian inference consists in updating prior probabilities, by means of data, to posterior probabilities. In the rough set approach, Bayes’ theorem reveals data patterns, which are then used to draw conclusions from data in the form of decision rules. Moreover, we have shown a new mathematical model of flow networks, which can be used for the analysis of decision algorithms. In particular, it has been revealed that the flow in the flow network is governed by Bayes’ rule, which here has an entirely deterministic meaning. Also, a new look at dependencies in databases, based on Łukasiewicz’s ideas on the independence of logical formulas, is presented.

Acknowledgment. I would like to thank Prof. Andrzej Skowron for useful discussions and help in the preparation of this paper.

References 1. Zadeh, L.A.: Fuzzy sets. Information and Control 8 (1965) 338–353 2. Pawlak, Z.: Rough sets. International Journal of Computer and Information Sciences 11 (1982) 341–356 3. Ziarko, W.: Variable precision rough set model. Journal of Computer and System Sciences 46 (1993) 39–59 ˙ 4. Polkowski, L., Skowron, A., Zytkow, J.: Rough foundations for rough sets. In [40] 55–58 5. Skowron, A., Stepaniuk, J.: Tolerance approximation spaces. Fundamenta Informaticae 27 (1996) 245–253 6. Polkowski, L., Skowron, A.: Rough mereology: A new paradigm for approximate reasoning. International Journal of Approximate Reasoning 15 (1996) 333–365 7. Slowi´ nski, R., Vanderpooten, D.: Similarity relation as a basis for rough approximations. In Wang, P.P., ed.: Machine Intelligence & Soft-Computing, Vol. IV. Bookwrights, Raleigh, NC (1997) 17–33 8. Slowi´ nski, R., Vanderpooten, D.: A generalized deﬁnition of rough approximations based on similarity. IEEE Transactions on Data and Knowledge Engineering 12(2) (2000) 331–336 9. Stepaniuk, J.: Knowledge discovery by application of rough set models. In [26] 137–233 10. Skowron, A.: Toward intelligent systems: Calculi of information granules. Bulletin of the International Rough Set Society 5 (2001) 9–30 11. Greco, A., Matarazzo, B., Slowi´ nski, R.: Rough approximation by dominance relations. International Journal of Intelligent Systems 17 (2002) 153–171 12. Polkowski, L., ed.: Rough Sets: Mathematical Foundations. Advances in Soft Computing. Physica-Verlag, Heidelberg (2002) 13. Skowron, A., Stepaniuk, J.: Information granules and rough-neural computing. In [30] 43–84 14. Skowron, A.: Approximation spaces in rough neurocomputing. In [29] 13–22 15. Wr´ oblewski, J.: Adaptive aspects of combining approximation spaces. In [30] 139–156


16. Yao, Y.Y.: Informaton granulation and approximation in a decision-theoretical model of rough sets. In [30] 491–520 17. Skowron, A., Swiniarski, R., Synak, P.: Approximation spaces and information granulation (submitted). In: Fourth International Conference on Rough Sets and Current Trends in Computing (RSCTC’04), Uppsala, Sweden, June 1-5, 2004. Lecture Notes in Computer Science. Springer-Verlag, Heidelberg, Germany (2004) 18. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Volume 9 of System Theory, Knowledge Engineering and Problem Solving. Kluwer Academic Publishers, Dordrecht, The Netherlands (1991) 19. Slowi´ nski, R., ed.: Intelligent Decision Support - Handbook of Applications and Advances of the Rough Sets Theory. Volume 11 of System Theory, Knowledge Engineering and Problem Solving. Kluwer Academic Publishers, Dordrecht, The Netherlands (1992) 20. Lin, T.Y., Cercone, N., eds.: Rough Sets and Data Mining - Analysis of Imperfect Data. Kluwer Academic Publishers, Boston, USA (1997) 21. Orlowska, E., ed.: Incomplete Information: Rough Set Analysis. Volume 13 of Studies in Fuzziness and Soft Computing. Springer-Verlag/Physica-Verlag, Heidelberg, Germany (1997) 22. Polkowski, L., Skowron, A., eds.: Rough Sets in Knowledge Discovery 1: Methodology and Applications. Volume 18 of Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, Germany (1998) 23. Polkowski, L., Skowron, A., eds.: Rough Sets in Knowledge Discovery 2: Applications, Case Studies and Software Systems. Volume 19 of Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, Germany (1998) 24. Pal, S.K., Skowron, A., eds.: Rough Fuzzy Hybridization: A New Trend in Decision-Making. Springer-Verlag, Singapore (1999) 25. Duentsch, I., Gediga, G.: Rough set data analysis: A road to non-invasive knowledge discovery. Methodos Publishers, Bangor, UK (2000) 26. Polkowski, L., Lin, T.Y., Tsumoto, S., eds.: Rough Set Methods and Applications: New Developments in Knowledge Discovery in Information Systems. Volume 56 of Studies in Fuzziness and Soft Computing. Springer-Verlag/Physica-Verlag, Heidelberg, Germany (2000) 27. Lin, T.Y., Yao, Y.Y., Zadeh, L.A., eds.: Rough Sets, Granular Computing and Data Mining. Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg (2001) 28. Demri, S., Orlowska, E., eds.: Incomplete Information: Structure, Inference, Complexity. Monographs in Theoretical Cpmputer Sience. Springer-Verlag, Heidelberg, Germany (2002) 29. Inuiguchi, M., Hirano, S., Tsumoto, S., eds.: Rough Set Theory and Granular Computing. Volume 125 of Studies in Fuzziness and Soft Computing. SpringerVerlag, Heidelberg (2003) 30. Pal, S.K., Polkowski, L., Skowron, A., eds.: Rough-Neural Computing: Techniques for Computing with Words. Cognitive Technologies. Springer-Verlag, Heidelberg, Germany (2003) 31. Slowi´ nski, R., Stefanowski, J., eds.: Special issue: Proceedings of the First International Workshop on Rough Sets: State of the Art and Perspectives, Kiekrz, Pozna´ n, Poland, September 2–4 (1992). Volume 18(3-4) of Foundations of Computing and Decision Sciences. (1993) 32. Ziarko, W., ed.: Special issue. Volume 11(2) of Computational Intelligence: An International Journal. (1995)


33. Ziarko, W., ed.: Special issue. Volume 27(2-3) of Fundamenta Informaticae. (1996) 34. Lin, T.Y., ed.: Special issue. Volume 2(2) of Journal of the Intelligent Automation and Soft Computing. (1996) 35. Peters, J., Skowron, A., eds.: Special issue on a rough set approach to reasoning about data. Volume 16(1) of International Journal of Intelligent Systems. (2001) 36. Cercone, N., Skowron, A., Zhong, N., eds.: (Special issue). Volume 17(3) of Computational Intelligence. (2001) 37. Pal, S.K., Pedrycz, W., Skowron, A., Swiniarski, R., eds.: Special volume: Roughneuro computing. Volume 36 of Neurocomputing. (2001) 38. Skowron, A., Pal, S.K., eds.: Special volume: Rough sets, pattern recognition and data mining. Volume 24(6) of Pattern Recognition Letters. (2003) 39. Ziarko, W., ed.: Rough Sets, Fuzzy Sets and Knowledge Discovery: Proceedings of the Second International Workshop on Rough Sets and Knowledge Discovery (RSKD’93), Banﬀ, Alberta, Canada, October 12–15 (1993). Workshops in Computing. Springer–Verlag & British Computer Society, London, Berlin (1994) 40. Lin, T.Y., Wildberger, A.M., eds.: Soft Computing: Rough Sets, Fuzzy Logic, Neural Networks, Uncertainty Management, Knowledge Discovery. Simulation Councils, Inc., San Diego, CA, USA (1995) 41. Tsumoto, S., Kobayashi, S., Yokomori, T., Tanaka, H., Nakamura, A., eds.: Proceedings of the The Fourth Internal Workshop on Rough Sets, Fuzzy Sets and Machine Discovery, November 6-8, University of Tokyo , Japan. The University of Tokyo, Tokyo (1996) 42. Polkowski, L., Skowron, A., eds.: First International Conference on Rough Sets and Soft Computing (RSCTC’98), Warsaw, Poland, June 22-26, 1998. Volume 1424 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (1998) 43. Zhong, N., Skowron, A., Ohsuga, S., eds.: Proceedings of the 7-th International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing (RSFDGrC’99), Yamaguchi, November 9-11, 1999. Volume 1711 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (1999) 44. Ziarko, W., Yao, Y., eds.: Proceedings of the 2-nd International Conference on Rough Sets and Current Trends in Computing (RSCTC’2000), Banﬀ, Canada, October 16-19, 2000. Volume 2005 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (2001) 45. Hirano, S., Inuiguchi, M., Tsumoto, S., eds.: Proceedings of International Workshop on Rough Set Theory and Granular Computing (RSTGC-2001), Matsue, Shimane, Japan, May 20-22, 2001. Volume 5(1-2) of Bulletin of the International Rough Set Society. International Rough Set Society, Matsue, Shimane (2001) 46. Terano, T., Nishida, T., Namatame, A., Tsumoto, S., Ohsawa, Y., Washio, T., eds.: New Frontiers in Artiﬁcial Intelligence, Joint JSAI’01 Workshop PostProceedings. Volume 2253 of Lecture Notes in Artiﬁcial Intelligence. SpringerVerlag, Heidelberg (2001) 47. Alpigini, J.J., Peters, J.F., Skowron, A., Zhong, N., eds.: Third International Conference on Rough Sets and Current Trends in Computing (RSCTC’02), Malvern, PA, October 14-16, 2002. Volume 2475 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (2002) 48. Skowron, A., Szczuka, M., eds.: Proceedings of the Workshop on Rough Sets in Knowledge Discovery and Soft Computing at ETAPS 2003 (RSKD’03), April 12-13, 2003. Volume 82(4) of Electronic Notes in Computer Science. Elsevier, Amsterdam, Netherlands (2003)


49. Wang, G., Liu, Q., Yao, Y., Skowron, A., eds.: Proceedings of the 9-th International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing (RSFDGrC’03), Chongqing, China, May 26-29, 2003. Volume 2639 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (2003) 50. Komorowski, J., , Pawlak, Z., Polkowski, L., Skowron, A.: Rough sets: a tutorial. In [24] 3–98 51. Pawlak, Z., Polkowski, L., Skowron, A.: Rough sets and rough logic: A KDD perspective. In [26] 583–646 52. Skowron, A., Pawlak, Z., Komorowski, J., Polkowski, L.: A rough set perspective ˙ on data and knowledge. In Kloesgen, W., Zytkow, J., eds.: Handbook of KDD. Oxford University Press, Oxford (2002) 134–149 53. Pawlak, Z., Polkowski, L., Skowron, A.: Rough set theory. In Wah, B., ed.: EncyClopedia Of Computer Science and Engineering. Wiley, New York, USA (2004) 54. Cantor, G.: Grundlagen einer allgemeinen Mannigfaltigkeitslehre, Leipzig, Germany (1883) 55. Russell, B.: The Principles of Mathematics. George Allen & Unwin Ltd., London, Great Britain (1903) 56. Russell, B.: Vagueness. The Australasian Journal of Psychology and Philosophy 1 (1923) 84–92 57. Black, M.: Vagueness: An exercise in logical analysis. Philosophy of Science 4(4) (1937) 427–455 58. Hempel, C.G.: Vagueness and logic. Philosophy of Science 6 (1939) 163–180 59. Fine, K.: Vagueness, truth and logic. Synthese 30 (1975) 265–300 60. Keefe, R., Smith, P.: Vagueness: A Reader. MIT Press, Cambridge, MA (1999) 61. Keefe, R.: Theories of Vagueness. Cambridge University Press, Cambridge, U.K. (2000) 62. Frege, G.: Grundgesetzen der Arithmetik, 2. Verlag von Herman Pohle, Jena, Germany (1903) 63. Read, S.: Thinking about Logic - An Introduction to Philosophy of Logic. Oxford University Press, Oxford (1995) 64. Le´sniewski, S.: Grungz¨ uge eines neuen systems der grundlagen der mathematik. Fundamenta Matematicae 14 (1929) 1–81 65. Pawlak, Z., Skowron, A.: Rough membership functions. In Yager, R., Fedrizzi, M., Kacprzyk, J., eds.: Advances in the Dempster-Shafer Theory of Evidence, New York, NY, John Wiley & Sons (1994) 251–271 66. Skowron, A., Rauszer, C.: The discernibility matrices and functions in information systems. In [19] 331–362 67. Berthold, M., Hand, D.J.: Intelligent Data Analysis. An Introduction. SpringerVerlag, Berlin, Heidelberg, New York (1999) 68. Box, G.E.P., Tiao, G.C.: Bayesian Inference in Statistical Analysis. John Wiley and Sons, Inc., New York, Chichester, Brisbane, Toronto, Singapore (1992) 69. Pawlak, Z.: Rough sets and decision algorithms. In [44] 30–45 70. Pawlak, Z.: In pursuit of patterns in data reasoning from data – the rough set way. In [47] 1–9 71. Pawlak, Z.: Probability, truth and ﬂow graphs. In [48] 1–9 72. Wong, S., Ziarko, W.: Algebraic versus probabilistic independence in decision theory. In Ras, Z.W., Zemankova, M., eds.: Proceedings of the ACM SIGART First International Symposium on Methodologies for Intelligent Systems Knoxville (ISMIS’86), Tennessee, USA, October 22-24, 1986. ACM SIGART, USA (1986) 207–212


73. Wong, S., Ziarko, W.: On learning and evaluation of decision rules in the context of rough sets. In Ras, Z.W., Zemankova, M., eds.: Proceedings of the ACM SIGART First International Symposium on Methodologies for Intelligent Systems Knoxville (ISMIS’86), Tennessee, USA, October 22-24, 1986. ACM SIGART, USA (1986) 308–324 74. Pawlak, Z., Wong, S.K.M., Ziarko, W.: Rough sets: Probabilistic versus deterministic approach. International Journal of Man-Machine Studies 29(1) (1988) 81–95 75. Yamauchi, Y., Mukaidono, M.: Probabilistic inference and bayeasian theorem based on logical implication. In [43] 334–342 76. Intan, R., an Y. Y. Yao, M.M.: Generalization of rough sets with alpha-coverings of the universe induced by conditional probability relations. In [46] 311–315 ´ ezak, D.: Approximate decision reducts (in Polish). PhD thesis, Warsaw Uni77. Sl¸ versity, Warsaw, Poland (2002) ´ ezak, D.: Approximate bayesian networks. In Bouchon-Meunier, B., Gutierrez78. Sl¸ Rios, J., Magdalena, L., Yager, R., eds.: Technologies for Constructing Intelligent Systems 2: Tools. Volume 90 of Studies in Fuzziness and Soft Computing. Springer-Verlag, Heidelberg, Germany (2002) 313–326 ´ ezak, D., Wr´ 79. Sl¸ oblewski, J.: Approximate bayesian network classiﬁers. In [47] 365–372 80. Yao, Y.Y.: Information granulation and approximation. In [30] 491–516 ´ ezak, D.: Approximate markov boundaries and bayesian networks: Rough set 81. Sl¸ approach. In [29] 109–121 ´ ezak, D., Ziarko, W.: Attribute reduction in the bayesian version of variable 82. Sl¸ precision rough set model. In [48] ´ ezak, D., Ziarko, W.: Variable precision bayesian rough set model. In [49] 83. Sl¸ 312–315 84. Wong, S.K.M., Wu, D.: A common framework for rough sets, databases, and bayesian networks. In [49] 99–103 ´ ezak, D.: The rough bayesian model for distributed decision systems (submit85. Sl¸ ted). In: Fourth International Conference on Rough Sets and Current Trends in Computing (RSCTC’04), Uppsala, Sweden, June 1-5, 2004. Lecture Notes in Computer Science. Springer-Verlag, Heidelberg, Germany (2004) 86. Swinburne, R.: Bayes Theorem. Volume 113 of Proceedings of the British Academy. Oxford University Press, Oxford, UK (2003) 87. Bernardo, J.M., Smith, A.F.M.: Bayesian Theory. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, Chichester, New York, Brisbane, Toronto, Singapore (1994) 88. L ukasiewicz, J.: Die logischen grundlagen der wahrscheinilchkeitsrechnung, Krak´ ow 1913. In Borkowski, L., ed.: Jan L ukasiewicz - Selected Works. North Holland Publishing Company, Amstardam, London, Polish Scientiﬁc Publishers, Warsaw (1970) 89. Adams, E.W.: The Logic of Conditionals. An Application of Probability to Deductive Logic. D. Reidel Publishing Company, Dordrecht, Boston (1975) 90. Grzymala-Busse, J.W.: LERS - a system for learning from examples based on rough sets. In [19] 3–18 91. Skowron, A.: Boolean reasoning for decision rules generation. In Komorowski, J., Ra´s, Z.W., eds.: Seventh International Symposium for Methodologies for Intelligent Systems (ISMIS’93), Trondheim, Norway, June 15-18. Volume 689 of Lecture Notes in Artiﬁcial Intelligence., Heidelberg, Springer-Verlag (1993) 295–305


92. Pawlak, Z., Skowron, A.: A rough set approach for decision rules generation. In: Thirteenth International Joint Conference on Artiﬁcial Intelligence (IJCAI’93), Chamb´ery, France, Morgan Kaufmann (1993) 114–119 93. Shan, N., Ziarko, W.: An incremental learning algorithm for constructing decision rules. In Ziarko, W., ed.: Rough Sets, Fuzzy Sets and Knowledge Discovery, Berlin, Germany, Springer Verlag (1994) 326–334 94. Nguyen, H.S.: Discretization of Real Value Attributes, Boolean Reasoning Approach. PhD thesis, Warsaw University, Warsaw, Poland (1997) 95. Slowi´ nski, R., Stefanowski, J.: Rough family – software implementation of the rough set theory. In [23] 581–586 96. Nguyen, H.S., Nguyen, S.H.: Pattern extraction from data. Fundamenta Informaticae 34 (1998) 129–144 97. Nguyen, H.S., Nguyen, S.H.: Discretization methods for data mining. In [22] 451–482 98. Skowron, A.: Rough sets in KDD - plenary talk. In Shi, Z., Faltings, B., Musen, M., eds.: 16-th World Computer Congress (IFIP’00): Proceedings of Conference on Intelligent Information Processing (IIP’00). Publishing House of Electronic Industry, Beijing (2002) 1–14 99. Bazan, J., Nguyen, H.S., Nguyen, S.H., Synak, P., Wr´ oblewski, J.: Rough set algorithms in classiﬁcation problems. In [26] 49–88 100. Grzymala-Busse, J.W., Shah, P.: A comparison of rule matching methods used in aq15 and lers. In: Proceedings of the Twelfth International Symposium on Methodologies for Intelligent Systems (ISMIS’00), Charlotte, NC, October 11-14, 2000. Volume 1932 of Lecture Nites in Artiﬁcial Intelligence., Berlin, Germany, Springer-Verlag (2000) 148–156 101. Grzymala-Busse, J., Hu, M.: A comparison of several approaches to missing attribute values in data mining. In [44] 340 – 347 102. Greco, S., Matarazzo, B., Slowi´ nski, R., Stefanowski, J.: An algorithm for induction of decision rules consistent with dominance principle. In [44] 304–313 103. Skowron, A.: Rough sets and boolean reasoning. In Pedrycz, W., ed.: Granular Computing: an Emerging Paradigm. Volume 70 of Studies in Fuzziness and Soft Computing. Springer-Verlag/Physica-Verlag, Heidelberg, Germany (2001) 95–124 104. Greco, S., Matarazzo, B., Slowi´ nski, R.: Rough sets theory for multicriteria decision analysis. European J. of Operational Research 129(1) (2001) 1–47 105. Casti, J.L.: Alternate Realities: Mathematical Models of Nature and Man. John Wiley and Sons, Inc., New York, Chichester, Brisbane, Toronto, Singapore (1989) 106. Coombs, C.H., Avruin, G.S.: The Structure of Conﬂicts. Lawrence Erlbaum, London (1988) 107. Deja, R.: Conﬂict analysis, rough set methods and applications. In [26] 491–520 108. Maeda, Y., Senoo, K., Tanaka, H.: Interval density function in conﬂict analysis. In [43] 382–389 109. Nakamura, A.: Conﬂict logic with degrees. In [24] 136–150 110. Pawlak, Z.: An inquiry into anatomy of conﬂicts. Journal of Information Sciences 109 (1998) 65–68 111. Ford, L.R., Fulkerson, D.R.: Flows in Networks. Princeton University Press, Princeton, New Jersey (1973) 112. Slowi´ nski, R., Greco, S.: A note on dependency factor. (2004) (manuscript).

Learning Rules from Very Large Databases Using Rough Multisets

Chien-Chung Chan
Department of Computer Science, University of Akron, Akron, OH 44325-4003
[email protected]

Abstract. This paper presents a mechanism called LERS-M for learning production rules from very large databases. It can be implemented using object-relational database systems, it can be used for distributed data mining, and it has a structure that matches well with parallel processing. LERS-M is based on rough multisets and is formulated using relational operations with the objective of being tightly coupled with database systems. The underlying representation used by LERS-M is multiset decision tables, which are derived from information multisystems. In addition, it is shown that multiset decision tables provide a simple way to compute Dempster-Shafer basic probability assignment functions from data sets.

1 Introduction

The development of computer technologies has provided many useful and efficient tools to produce, disseminate, store, and retrieve data in electronic form. As a consequence, ever-increasing streams of data are recorded in all types of databases. For example, in automated business activities, even simple transactions such as telephone calls, credit card charges, items in shopping carts, etc. are typically recorded in databases. These data are potentially beneficial to enterprises, because they may be used for designing effective marketing and sales plans based on consumers' shopping patterns and preferences collectively recorded in the databases. From databases of credit card charges, patterns of fraudulent charges may be detected and, hence, preventive actions may be taken. The raw data stored in databases are potentially lodes of useful information. In order to extract the ore, effective mining tools must be developed. The task of extracting useful information from data is not a new one. It has been a common interest in research areas such as statistical data analysis, machine learning, and pattern recognition. Traditional techniques developed in these areas are fundamental to the task, but they have limitations. For example, these tools usually assume that the collection of data in the databases is small enough to fit into the memory of a computer system so that it can be processed. This condition no longer holds in very large databases. Another limitation is that these tools are usually applicable only to static data sets. However, most databases are updated frequently by large streams of data. It is also typical that the databases of an enterprise are distributed over different locations. Issues and techniques related to finding useful information in distributed data need to be studied and developed.


There are three classical data mining problems: market basket analysis, clustering, and classification. Traditional machine learning systems are usually developed independently of database technology. One of the recent trends is to develop learning systems that are tightly coupled with relational or object-relational database systems for mining association rules and for mining tree classifiers [1–4]. Due to the maturity of database technology, these systems are more portable and scalable than traditional systems, and they are easier to integrate with OLAP (On Line Analytical Processing) and data warehousing systems. Another trend is that more and more data are stored in distributed databases. Some distributed data mining systems have been developed [5]. However, not many have been tightly coupled with database system technology. In this paper, we introduce a mechanism called LERS-M for learning production rules from very large databases. It can be implemented using object-relational database systems, it can be used for distributed data mining, and it has a structure that matches well with parallel processing. LERS-M is similar to the LERS family of learning programs [6], which is based on rough set theory [7–9]. The main differences are that LERS-M is based on rough multisets [10] and that it is formulated using relational operations with the objective of being tightly coupled with database systems. The underlying representation used by LERS-M is multiset decision tables [11], which are derived from information multisystems [10]. In addition to facilitating the learning of rules, multiset decision tables can also be used to compute Dempster-Shafer belief functions from data [12], [14]. The methodology developed here can be used to design learning systems for knowledge discovery from distributed databases and to develop distributed rule-based expert systems and decision support systems. The paper is organized as follows. The problem addressed by this paper is formulated in Section 2. In Section 3, we review some related concepts. The concept of multiset decision tables and its properties are presented in Section 4. In Section 5, we present the LERS-M learning algorithm with an example and discussion. Conclusions are given in Section 6.

2 Problem Statements

In this paper we consider the problem of learning production rules from very large databases. For simplicity, a very large database is considered as a very large data table U defined by a finite nonempty set A of attributes. We assume that a very large data table can be stored in a single database or distributed over several databases. By distributed databases, we mean that the data table U is divided into N smaller tables with sizes manageable by a database management system. In this abstraction, we do not consider the communication mechanisms used by a distributed database system, nor do we consider the costs of transferring data from one site to another. Briefly speaking, the problem of inductive learning of production rules from examples is to generate descriptions or rules that characterize the logical implication C → D from a collection U of examples, where C and D are sets of attributes used to describe the examples. The attributes in C are called condition attributes, and the attributes in D are called decision attributes. Usually, D is a singleton set, and the sets C and D do not overlap. The objective of learning is to find rules that predict the logical implication as accurately as possible when applied to new examples.


The objective of this paper is to develop a mechanism for generating production rules that takes into account the following issues: (1) the implication C → D may be uncertain; (2) if the set U of examples is divided into N smaller sets, how to determine the implication C → D; and (3) the result should be implementable using object-relational database technology.

3 Related Concepts

In the following, we review the concepts of rough sets, information systems, decision tables, rough multisets, information multisystems, and the partition of boundary sets.

3.1 Rough Sets, Information Systems, and Decision Tables

The fundamental assumption of rough set theory is that objects from the domain are perceived only through the accessible information about them, that is, the values of attributes that can be evaluated on these objects. Objects with the same information are indiscernible. Consequently, the classification of objects is based on the accessible information about them, not on the objects themselves. The notion of information systems was introduced by Pawlak [8] to represent knowledge about objects in a domain. In this paper, we use a special case of information systems called decision tables or data tables to represent data sets. In a decision table there is a designated attribute called the decision attribute, and the remaining attributes are called condition attributes. A decision attribute can be interpreted as a classification of objects in the domain given by an expert. Given a decision table, the values of the decision attribute determine a partition on U. The problem of learning rules from examples is to find a set of classification rules using condition attributes that produces the partition generated by the decision attribute. An example of a decision table adapted from [13] is shown in Table 1, where the universe U consists of 28 objects or examples. The set of condition attributes is {A, B, C, E, F}, and D is the decision attribute with values 1, 2, and 3. The partition on U determined by the decision attribute D is X1 = [1, 2, 4, 8, 10, 15, 22, 25], X2 = [3, 5, 11, 12, 16, 18, 19, 21, 23, 24, 27], X3 = [6, 7, 9, 13, 14, 17, 20, 26, 28], where Xi is the set of objects whose value of attribute D is i, for i = 1, 2, and 3. Note that Table 1 is an inconsistent decision table. Both objects 8 and 12 have the same condition values (1, 1, 1, 1, 1), but their decision values are different: object 8 has decision value 1, whereas object 12 has decision value 2. Inconsistent data sets are also called noisy data sets. Such data sets are quite common in real-world situations, and they are an issue that must be addressed by machine learning algorithms. In the rough set approach, inconsistency is represented by the concepts of lower and upper approximations.

Table 1. Example of a decision table.

U   A B C E F   D
1   0 0 1 0 0   1
2   1 1 1 0 0   1
3   0 1 0 0 0   2
4   1 0 0 0 1   1
5   0 1 0 0 0   2
6   1 0 0 0 1   3
7   0 0 0 1 1   3
8   1 1 1 1 1   1
9   0 0 0 1 1   3
10  0 0 1 0 0   1
11  1 1 1 0 0   2
12  1 1 1 1 1   2
13  1 1 0 1 1   3
14  1 1 0 0 1   3
15  0 0 1 1 1   1
16  1 1 0 1 1   2
17  0 0 0 1 1   3
18  0 0 0 0 0   2
19  0 0 0 0 0   2
20  1 1 1 0 0   3
21  1 1 0 0 1   2
22  0 0 1 0 1   1
23  1 1 1 0 0   2
24  0 0 1 1 1   2
25  1 0 1 0 1   1
26  1 0 1 0 1   3
27  1 0 1 0 1   2
28  1 1 1 1 0   3

Let A = (U, R) be an approximation space, where U is a nonempty set of objects and R is an equivalence relation defined on U. Let X be a nonempty subset of U. Then, the lower approximation of X by R in A is defined as

RX = {e ∈ U | [e] ⊆ X} and the upper approximation of X by R in A is defined as R̄X = {e ∈ U | [e] ∩ X ≠ ∅},

where [e] denotes the equivalence class containing e. The difference R̄X – RX is called the boundary set of X in A. A subset X of U is said to be R-definable in A if and only if RX = R̄X. The pair (RX, R̄X) defines a rough set in A, which is the family of subsets of U with the same lower and upper approximations RX and R̄X. In terms of decision tables, the pair (U, A) defines an approximation space. When a decision class Xi ⊆ U is inconsistent, it means that Xi is not A-definable. In this case, we can find classification rules from AXi and ĀXi. These rules are called certain rules and possible rules, respectively [16]. Thus, the rough set approach can be used to learn rules from both consistent and inconsistent examples [17], [18].

3.2 Rough Multisets and Information Multisystems

The concepts of rough multisets and information multisystems were introduced by Grzymala-Busse [10]. The basic idea is to represent an information system using multisets [15]. Object identifiers, which are represented explicitly in an information system, are not


represented in an information multisystem. Thus, the resulting data tables are more compact. More precisely, an information multisystem is a triple S = (Q, V, Q~), where Q is a set of attributes, V is the union of the domains of the attributes in Q, and Q~ is a multirelation on the Cartesian product ×q∈Q Vq. In addition, the concepts of lower and upper approximations in

rough sets are extended to multisets. Let M be a multiset, and let e be an element of M whose number of occurrences in M is w. The sub-multiset {w⋅e} will be denoted by [e]M. Thus M may be represented as the union of all [e]M's, where e is in M. A multiset [e]M is called an elementary multiset in M. The empty multiset is elementary. A finite union of elementary multisets is called a definable multiset in M. Let X be a sub-multiset of M. Then, the lower approximation of X in M is the multiset defined as X̲ = {e ∈ M | [e]M ⊆ X} and the upper approximation of X in M is the multiset defined as X̄ = {e ∈ M | [e]M ∩ X ≠ ∅}, where the operations are multiset operations. Therefore, a rough multiset in M is the family of all sub-multisets of M having the same lower and upper approximations in M. Let P be a subset of Q; the projection of Q~ onto P is defined as the multirelation P~ obtained by deleting the columns corresponding to the attributes in Q – P. Note that Q~ and P~ have the same cardinality. Let X be a sub-multiset of P~. The P-lower approximation of X in S is the lower approximation X̲ of X in P~. The P-upper approximation of X in S is the upper approximation X̄ of X in P~. A multiset X in P~ is P-definable in S iff P̲X = P̄X. A multipartition χ on a multiset X is a multiset {X1, X2, …, Xn} of sub-multisets of X such that

X1 + X2 + … + Xn = X,

where the sum of two multisets X and Y, denoted X + Y, is the multiset of all elements that are members of X or Y, and the number of occurrences of each element e in X + Y is the sum of the number of occurrences of e in X and the number of occurrences of e in Y. Following [9], classifications are multipartitions on information multisystems generated with respect to subsets of attributes. Specifically, let S = (Q, V, Q~) be an information multisystem. Let A and B be subsets of Q with |A| = i and |B| = j. Let A~ be the projection of Q~ onto A. The subset B generates a multipartition BA on A~ defined as follows: two i-tuples determined by A are in the same multiset X in BA if and only if their associated j-tuples, determined by B, are equal. The multipartition BA is called a classification on A~ generated by B. Table 2 shows a multirelation representation of the data table given in Table 1, where the number of occurrences of each row is denoted by the integers in the W column. The projection of the multirelation onto the set P of attributes {A, B, C, E, F} is shown in Table 3.

Table 2. An information multisystem S.

A B C E F   D   W
0 0 0 0 0   2   2
0 0 0 1 1   3   3
0 0 1 0 0   1   2
0 0 1 0 1   1   1
0 0 1 1 1   1   1
0 0 1 1 1   2   1
0 1 0 0 0   2   2
1 0 0 0 1   1   1
1 0 0 0 1   3   1
1 0 1 0 1   1   1
1 0 1 0 1   2   1
1 0 1 0 1   3   1
1 1 0 0 1   2   1
1 1 0 0 1   3   1
1 1 0 1 1   2   1
1 1 0 1 1   3   1
1 1 1 0 0   1   1
1 1 1 0 0   2   2
1 1 1 0 0   3   1
1 1 1 1 0   3   1
1 1 1 1 1   1   1
1 1 1 1 1   2   1

Table 3. An information multisystem P~.

A B C E F   W
0 0 0 0 0   2
0 0 0 1 1   3
0 0 1 0 0   2
0 0 1 0 1   1
0 0 1 1 1   2
0 1 0 0 0   2
1 0 0 0 1   2
1 0 1 0 1   3
1 1 0 0 1   2
1 1 0 1 1   2
1 1 1 0 0   4
1 1 1 1 0   1
1 1 1 1 1   2

Let X be a sub-multiset of P~ with elements shown in Table 4.

Table 4. A sub-multiset X of P~.

A B C E F   W
0 0 1 0 0   2
1 1 1 0 0   1
1 0 0 0 1   1
1 1 1 1 1   1
0 0 1 1 1   1
0 0 1 0 1   1
1 0 1 0 1   1


Table 5. P-lower approximation of X.

A B C E F   W
0 0 1 0 0   2
0 0 1 0 1   1

Table 6. P-upper approximation of X.

A B C E F   W
0 0 1 0 0   2
1 1 1 0 0   4
1 0 0 0 1   2
1 1 1 1 1   2
0 0 1 1 1   2
0 0 1 0 1   1
1 0 1 0 1   3

The P-lower and P-upper approximations of X in P~ are shown in Tables 5 and 6. The classification of P~ generated by attribute D in S consists of three sub-multisets, given in Tables 7, 8, and 9, which correspond to the cases D = 1, D = 2, and D = 3, respectively.

Table 7. Sub-multiset of the multipartition DP with D = 1.

A B C E F   W
0 0 1 0 0   2
0 0 1 0 1   1
0 0 1 1 1   1
1 0 0 0 1   1
1 0 1 0 1   1
1 1 1 0 0   1
1 1 1 1 1   1

Table 8. Sub-multiset of the multipartition DP with D = 2.

A B C E F   W
0 0 0 0 0   2
0 0 1 1 1   1
0 1 0 0 0   2
1 0 1 0 1   1
1 1 0 0 1   1
1 1 0 1 1   1
1 1 1 0 0   2
1 1 1 1 1   1

Table 9. Sub-multiset of the multipartition DP with D = 3.

A B C E F   W
0 0 0 1 1   3
1 0 0 0 1   1
1 0 1 0 1   1
1 1 0 0 1   1
1 1 0 1 1   1
1 1 1 0 0   1
1 1 1 1 0   1
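To make the preceding definitions concrete, the following small Python sketch recomputes the P-lower and P-upper approximations of the sub-multiset X of Tables 4–6 from the weighted rows of Table 3. It is only an illustrative sketch of the definitions, not part of the LERS-M implementation; multisets are represented as dictionaries mapping a tuple of attribute values to its number of occurrences.

# Table 3: the multirelation P~, and Table 4: the sub-multiset X (tuple -> occurrences).
P = {(0,0,0,0,0): 2, (0,0,0,1,1): 3, (0,0,1,0,0): 2, (0,0,1,0,1): 1,
     (0,0,1,1,1): 2, (0,1,0,0,0): 2, (1,0,0,0,1): 2, (1,0,1,0,1): 3,
     (1,1,0,0,1): 2, (1,1,0,1,1): 2, (1,1,1,0,0): 4, (1,1,1,1,0): 1,
     (1,1,1,1,1): 2}
X = {(0,0,1,0,0): 2, (1,1,1,0,0): 1, (1,0,0,0,1): 1, (1,1,1,1,1): 1,
     (0,0,1,1,1): 1, (0,0,1,0,1): 1, (1,0,1,0,1): 1}

# P-lower approximation: elementary multisets [e]_P entirely contained in X.
lower = {e: w for e, w in P.items() if X.get(e, 0) >= w}
# P-upper approximation: elementary multisets [e]_P that intersect X.
upper = {e: w for e, w in P.items() if X.get(e, 0) > 0}

print(lower)   # reproduces Table 5
print(upper)   # reproduces Table 6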


3.3 Partition of Boundary Sets

The relationship between rough set theory and Dempster-Shafer's theory of evidence was first shown in [14] and further developed in [13]. The concept of the partition of boundary sets was introduced in [13]. The basic idea is to represent an expert's classification of a set of objects in terms of lower approximations and a partition of the boundary set. In information multisystems, the concept of a boundary set is represented by a boundary multiset, which is defined as the difference of the upper and lower approximations of a multiset. Thus, the partition of a boundary set can be extended to a multipartition of a boundary multiset. The computation of this multipartition will be discussed in the next section.

4 Multiset Decision Tables

4.1 Basic Concepts

The idea of multiset decision tables (MDT) was first informally introduced in [11]. We will formalize the concept in the following. Let S = (Q = C ∪ D, V, Q~) be an information multisystem, where C are condition attributes and D is a decision attribute. A multiset decision table is an ordered pair A = (C~, CD), where C~ is the projection of Q~ onto C and CD is the multipartition on D~ generated by C in A. We will call C~ the LHS (Left Hand Side) and CD the RHS (Right Hand Side). Each sub-multiset in CD is represented by two vectors: a Boolean bit-vector and an integer vector. A similar representational scheme has been used in [19], [20], [21]. The size of each vector is the number of values in the domain VD of the decision attribute D. The Boolean bit-vector, labeled by the Di's, denotes that a decision value Di is in a sub-multiset of CD iff Di = 1, and its number of occurrences is given in the integer vector entry labeled wi. The information multisystem of Table 2 is represented as a multiset decision table in Table 10 with C = {A, B, C, E, F} and decision attribute D. The Boolean vector is denoted by [D1, D2, D3], and the integer vector is denoted by [w1, w2, w3]. Note that W = w1 + w2 + w3 on each row.

Table 10. Example of MDT.

A B C E F   W   D1 D2 D3   w1 w2 w3
0 0 0 0 0   2   0  1  0    0  2  0
0 0 0 1 1   3   0  0  1    0  0  3
0 0 1 0 0   2   1  0  0    2  0  0
0 0 1 0 1   1   1  0  0    1  0  0
0 0 1 1 1   2   1  1  0    1  1  0
0 1 0 0 0   2   0  1  0    0  2  0
1 0 0 0 1   2   1  0  1    1  0  1
1 0 1 0 1   3   1  1  1    1  1  1
1 1 0 0 1   2   0  1  1    0  1  1
1 1 0 1 1   2   0  1  1    0  1  1
1 1 1 0 0   4   1  1  1    1  2  1
1 1 1 1 0   1   0  0  1    0  0  1
1 1 1 1 1   2   1  1  0    1  1  0
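The construction of an MDT amounts to grouping the rows of the original table by their condition values and aggregating the decision values of each group, which is exactly the kind of GROUP BY aggregation a relational database performs. The following is a minimal Python sketch of this step, assuming the input is a list of (condition tuple, decision value) pairs; it is an illustration of the idea, not the SQL-based implementation described later.

from collections import defaultdict

def build_mdt(rows, decision_values):
    """rows: list of (condition_tuple, decision_value); returns one MDT row per distinct condition tuple."""
    counts = defaultdict(lambda: defaultdict(int))
    for cond, d in rows:
        counts[cond][d] += 1
    mdt = []
    for cond, by_d in counts.items():
        w_vec = [by_d.get(d, 0) for d in decision_values]    # integer vector [w1, w2, ...]
        bit_vec = [1 if w > 0 else 0 for w in w_vec]          # Boolean bit-vector [D1, D2, ...]
        mdt.append((cond, sum(w_vec), bit_vec, w_vec))        # (LHS, W, [Di], [wi])
    return mdt

# A small excerpt in the spirit of Table 1: two objects with condition values (1, 1, 1, 1, 1)
# and decision values 1 and 2, plus one object with condition values (0, 0, 0, 0, 0) and decision 2.
rows = [((1, 1, 1, 1, 1), 1), ((1, 1, 1, 1, 1), 2), ((0, 0, 0, 0, 0), 2)]
for entry in build_mdt(rows, decision_values=[1, 2, 3]):
    print(entry)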


4.2 Properties of Multiset Decision Tables

Based on the multiset decision table representation, we can use relational operations on the table to compute the rough set concepts reviewed in Section 3. Let A be a multiset decision table. We will show how to determine the lower and upper approximations of decision classes and the partitions of boundary multisets from A. The lower approximation of Di in terms of the LHS columns is defined as the multiset where Di = 1 and W = wi, and the upper approximation of Di is defined as the multiset where Di = 1 and W >= wi, or simply Di = 1. The boundary multiset of Di is defined as the multiset where Di = 1 and W > wi. The multipartition of boundary multisets can be identified by an equivalence multirelation defined over the Boolean vector given by the decision-value columns D1, D2, and D3. It is clear that a row of a multiset decision table is in some boundary multiset if and only if the sum of D1, D2, and D3 for that row is greater than 1. Therefore, to compute the multipartition of boundary multisets, we first identify the rows with D1 + D2 + D3 > 1; the rows of the multirelation over D1, D2, and D3 then define the blocks of the multipartition of the boundary multisets. The above computations are shown in the following example.

Example: Consider the decision class D1 in Table 10. The C-lower approximation of D1 is the multiset that satisfies D1 = 1 and W = w1; in table form we have:

Table 11. C-lower approximation of D1.

A B C E F   W
0 0 1 0 0   2
0 0 1 0 1   1

The C-upper approximation of D1 is the multiset that satisfies D1 = 1; in table form we have:

Table 12. C-upper approximation of D1.

A B C E F   W
0 0 1 0 0   2
0 0 1 0 1   1
0 0 1 1 1   2
1 0 0 0 1   2
1 0 1 0 1   3
1 1 1 0 0   4
1 1 1 1 1   2

To determine the partition of boundary multisets, we use the following two steps.

Step 1. Identify rows with D1 + D2 + D3 > 1; we have the following multiset in table form:

Table 13. Elements in the boundary sets.

A B C E F   W   D1 D2 D3
0 0 1 1 1   2   1  1  0
1 0 0 0 1   2   1  0  1
1 0 1 0 1   3   1  1  1
1 1 0 0 1   2   0  1  1
1 1 0 1 1   2   0  1  1
1 1 1 0 0   4   1  1  1
1 1 1 1 1   2   1  1  0


Step 2. Grouping the above table in terms of D1, D2, and D3, we have the following blocks in the partition. Table 14 shows the block where D1 = 1 and D2 = 1 and D3 = 0, i.e., (1 1 0):

Table 14. The block denotes D = {1, 2}.

A B C E F   W   D1 D2 D3
0 0 1 1 1   2   1  1  0
1 1 1 1 1   2   1  1  0

Table 15 shows the block where D1 = 1 and D2 = 0 and D3 = 1, i.e., (1 0 1):

Table 15. The block denotes D = {1, 3}.

A B C E F   W   D1 D2 D3
1 0 0 0 1   2   1  0  1

Table 16 shows the block where D1 = 0 and D2 = 1 and D3 = 1, i.e., (0 1 1):

Table 16. The block denotes D = {2, 3}.

A B C E F   W   D1 D2 D3
1 1 0 0 1   2   0  1  1
1 1 0 1 1   2   0  1  1

Table 17 shows the block where D1 = 1 and D2 = 1 and D3 = 1, i.e., (1 1 1):

Table 17. The block denotes D = {1, 2, 3}.

A B C E F   W   D1 D2 D3
1 0 1 0 1   3   1  1  1
1 1 1 0 0   4   1  1  1
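The row filters used in Tables 11–17 translate directly into simple selections over the MDT. The following Python sketch, which assumes the MDT is held as a list of row dictionaries with keys W, D1..D3 and w1..w3, reproduces the computations for a decision class Di; it is meant only to illustrate the relational conditions stated above.

def approximations(mdt, i):
    """Lower/upper approximations and boundary of decision class Di over an MDT."""
    lower = [r for r in mdt if r["D%d" % i] == 1 and r["W"] == r["w%d" % i]]
    upper = [r for r in mdt if r["D%d" % i] == 1]
    boundary = [r for r in mdt if r["D%d" % i] == 1 and r["W"] > r["w%d" % i]]
    return lower, upper, boundary

def boundary_blocks(mdt, n_decisions):
    """Group boundary rows by their Boolean decision vector (Steps 1 and 2 above)."""
    blocks = {}
    for r in mdt:
        bits = tuple(r["D%d" % k] for k in range(1, n_decisions + 1))
        if sum(bits) > 1:
            blocks.setdefault(bits, []).append(r)
    return blocks

# e.g. lower, upper, _ = approximations(mdt_rows, 1) reproduces Tables 11 and 12,
# and boundary_blocks(mdt_rows, 3) reproduces the blocks of Tables 14-17.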

From the above example, it is clear that an expert's classification on the decision attribute D can be obtained by grouping equal values over the columns D1, D2, and D3 and by taking the sum over the W column of a multiset decision table. Based on this grouping and summing operation, we can derive a basic probability assignment (bpa) function as required in Dempster-Shafer theory for computing belief functions. This is shown in Table 18.

Table 18. Grouping over D1, D2, D3 and sum over W.

D1 D2 D3   W
1  0  0    3
0  1  0    4
0  0  1    4
0  1  1    4
1  0  1    2
1  1  0    4
1  1  1    7


Let Θ = {1, 2, 3}. Table 19 shows the basic probability assignment function derived from the information multisystem shown in Table 2. The computation is based on the partition of boundary multisets shown in Table 18.

Table 19. The bpa derived from Table 2.

X           m(X)
{1}         3/28
{2}         4/28
{3}         4/28
{1, 2}      4/28
{1, 3}      2/28
{2, 3}      4/28
{1, 2, 3}   7/28
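A minimal Python sketch of this derivation follows; it groups the MDT rows of Table 10 by their Boolean decision vector, sums W within each group, and normalizes by the total number of objects. The row encoding is the same hypothetical dictionary format used in the sketch above, and only the bpa of Table 19 is reproduced, not a full Dempster-Shafer belief function.

from collections import defaultdict
from fractions import Fraction

def bpa_from_mdt(mdt, decisions=(1, 2, 3)):
    """Basic probability assignment m(X) obtained by grouping over (D1, D2, D3) and summing W."""
    total = sum(r["W"] for r in mdt)
    masses = defaultdict(int)
    for r in mdt:
        focal = frozenset(d for d in decisions if r["D%d" % d] == 1)
        masses[focal] += r["W"]
    return {focal: Fraction(w, total) for focal, w in masses.items()}

# For the MDT of Table 10 this yields m({1}) = 3/28, m({1, 2}) = 4/28, m({1, 2, 3}) = 7/28, etc.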

5 Learning Rules From MDT

5.1 LERS-M (Learning Rules from Examples Using Rough MultiSets)

In this section, we present an algorithm, LERS-M, for learning production rules from a database table based on a multiset decision table. A multiset decision table can be computed directly from a database table using typical SQL commands once the condition and decision attributes are specified. For efficiency reasons, we associate the entries of an MDT with a sequence of integer numbers. This can be accomplished by using extensions to relational database management systems such as the UDFs (User Defined Functions) and UDTs (User Defined Data Types) available on IBM's DB2 [22]. The emphasis of this paper is on algorithms; implementation details will be covered elsewhere. The basic idea of LERS-M is to generate a multiset decision table with a sequence of integer numbers. Then, for each value di of the decision attribute D, the upper approximation of di, UPPER(di), is computed, and a set of rules is generated for each UPPER(di). The algorithm LERS-M is given in the following. The details of rule generation are presented in Section 5.2.

procedure LERS-M
Inputs: a table S with condition attributes C1, C2, …, Cn and decision attribute D.
Outputs: a set of production rules represented as a multiset data table.
begin
  Create a Multiset Decision Table (MDT) from S with sequence numbers;
  for each decision value di of D do
  begin
    find the upper approximation UPPER(di) of di;
    Generate rules for UPPER(di);
  end;
end;

5.2 Rule Generation Strategy

The basic idea of rule generation is to create an AVT (Attribute-Value pairs Table) containing all a-v pairs that appear in the set UPPER(di). Then, we partition the a-v pairs into different groups based on a grouping criterion such as degree of relevancy, which is also used to rank the groups. The left hand sides of rules are identified


by taking conjunctions of a-v pairs within the same group (intra-group conjuncts) and by taking a natural join over different groups (inter-group conjuncts). Strategies for generating and validating candidate conjuncts are encapsulated in a module called GenerateAndTestConjuncts. Once a set of valid conjuncts is identified, minimal conjuncts can be generated using the method of dropping conditions. The process of rule generation is an iterative one. It starts with the set UPPER(di) as the initial TargetSet. In each iteration, a set of rules is generated, and the instances covered by the rule set are removed from the TargetSet. It stops when all instances in UPPER(di) are covered by the generated rules. In LERS-M, the stopping condition is guaranteed by the fact that upper approximations are always definable, based on the theory of rough sets. The above strategy is presented in the following procedures RULE_GEN, GroupAVT, and GenerateAndTestConjuncts. A working example will be given in the next section. Specifically, we have adopted the following notions. The extension of an a-v pair (a, v), denoted by [(a, v)], i.e., the set of instances covered by the a-v pair, is a subset of the sequence numbers in the original MDT. The extension of an a-v pair is encoded by a Boolean bit-vector. A conjunct is a nonempty finite set of a-v pairs. The extension of a conjunct is the intersection of the extensions of all the a-v pairs in the conjunct. Note that the extension of a group of conjuncts is the union of the extensions of all the conjuncts in the group, and the extension of an empty group of conjuncts is the empty set.

procedure RULE_GEN
Inputs: an upper approximation UPPER(di) of a decision value di, and an MDT.
Outputs: a set of rules for UPPER(di) represented as a multiset decision table.
begin
  TargetSet := UPPER(di);
  RuleSet := empty set;
  Select a grouping criterion G := degree of relevance;
  Create an a-v pair table AVT containing all a-v pairs appearing in UPPER(di);
  while TargetSet is not empty do
  begin
    AVT := GroupAVT(G, TargetSet);
    NewRules := GenerateAndTestConjuncts(AVT, UPPER(di));
    RuleSet := RuleSet + NewRules;
    TargetSet := TargetSet – [NewRules];
  end;
  minimalCover(RuleSet);
  /* apply the dropping-condition technique to remove redundant rules from RuleSet,
     proceeding linearly from the first rule to the last rule in the set */
end; // RULE_GEN

procedure GroupAVT
Inputs: a grouping criterion such as degree of relevance, and a subset of the upper approximation of a decision value di.
Outputs: a list of groups of equivalent a-v pairs relevant to the target set.
begin
  Initialize the AVT table to be empty;


  Select a subtable T from the target set where decision value = di;
  Create a query to get a vector of condition attributes from the subtable T;
  for each condition attribute do  /* generate distinct values for each condition attribute */
  begin
    Create a query string to select distinct values;
    for each distinct value do
    begin
      Create a query string to select the count of occurrences;
      relevance := count of occurrences;
      if (relevance > 0) then Add the condition-value pair to the AVT table;
    end; // for each distinct value
  end; // for each condition attribute
  Select the list of distinct values of the relevance column;
  Sort the list of distinct values in descending order;
  Use the list of distinct values to generate a list of groups of a-v pairs;
end; // GroupAVT

procedure GenerateAndTestConjuncts
Inputs: a list AVT of groups of equivalent a-v pairs and the upper approximation of decision value di.
Outputs: a set of rules.
begin
  RuleList := ∅;
  CarryOverList := ∅;   // a list of groups of a-v pairs
  CandidateList := ∅;   // a list of candidate conjuncts
  TargetSet := UPPER(di);
  // Generate candidate list
  repeat
    L := getNext(AVT);  // L is a list of equivalent a-v pairs
    if (L is empty) then break;
    if ([conjunct(L)] ⊆ TargetSet) then Add conjunct(L) to CandidateList;
      /* conjunct(L) returns a conjunction of all a-v pairs in L */
    if (CarryOverList is empty) then
      Add all a-v pairs in L to CarryOverList
    else
    begin
      FilterList := ∅;
      Add join(CarryOverList, L) to FilterList;
      /* join creates new lists of a-v pairs by taking and joining one element each
         from CarryOverList and L */
      CarryOverList := ∅;
      for each list in FilterList do
        if ([list] ⊆ TargetSet) then
          Add list to CandidateList


        else
          Add list to CarryOverList;
    end;
  until (CandidateList is not empty);
  // Test the candidate list
  for each list in CandidateList do
  begin
    list := minimalConjunct(list);
    /* apply dropping conditions to get a minimal list of a-v pairs */
    Add list to RuleList;
  end;
  return RuleList;
end; // GenerateAndTestConjuncts

Example. Consider the information multisystem in Table 2 as input to LERS-M. The result of generating an MDT with sequence numbers is shown in Table 20.

Table 20. MDT with sequence numbers.

Seq   A B C E F   W   D1 D2 D3   w1 w2 w3
1     0 0 0 0 0   2   0  1  0    0  2  0
2     0 0 0 1 1   3   0  0  1    0  0  3
3     0 0 1 0 0   2   1  0  0    2  0  0
4     0 0 1 0 1   1   1  0  0    1  0  0
5     0 0 1 1 1   2   1  1  0    1  1  0
6     0 1 0 0 0   2   0  1  0    0  2  0
7     1 0 0 0 1   2   1  0  1    1  0  1
8     1 0 1 0 1   3   1  1  1    1  1  1
9     1 1 0 0 1   2   0  1  1    0  1  1
10    1 1 0 1 1   2   0  1  1    0  1  1
11    1 1 1 0 0   4   1  1  1    1  2  1
12    1 1 1 1 0   1   0  0  1    0  0  1
13    1 1 1 1 1   2   1  1  0    1  1  0

The C-upper approximation of the class D = 1 is the sub-MDT shown in Table 21.

Table 21. Table of UPPER(D1).

Seq   A B C E F   W   D1 D2 D3   w1 w2 w3
3     0 0 1 0 0   2   1  0  0    2  0  0
4     0 0 1 0 1   1   1  0  0    1  0  0
5     0 0 1 1 1   2   1  1  0    1  1  0
7     1 0 0 0 1   2   1  0  1    1  0  1
8     1 0 1 0 1   3   1  1  1    1  1  1
11    1 1 1 0 0   4   1  1  1    1  2  1
13    1 1 1 1 1   2   1  1  0    1  1  0

The following is how RULE_GEN will generate rules for UPPER(D1). Table 22 shows the AVT table created by procedure GroupAVT before sorting is applied to the


table to generate the final list of groups of equivalent a-v pairs. The grouping criterion used is based on the size of the intersection between the extension of an a-v pair and the set UPPER(D1). Each entry in the Relevance column denotes the number of rows in the UPPER(D1) table matched by the a-v pair. For example, a relevance of 3 for (A, 0) means that there are three rows in UPPER(D1) that satisfy A = 0. The ranking of a-v pairs is based on maximum degree of relevance, i.e., a larger relevance number has higher priority. The ranks are ordered ascendingly, i.e., a smaller rank number has higher priority. The encoding of the extensions of a-v pairs in the AVT is shown in Table 23, and the target set UPPER(D1) = {3, 4, 5, 7, 8, 11, 13} is considered with the encoding (0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1).

Table 22. AVT table created from UPPER(D1).

Name   Value   Relevance   Rank
A      0       3           4
A      1       4           3
B      0       5           2
B      1       2           5
C      0       1           6
C      1       6           1
E      0       5           2
E      1       2           5
F      0       2           5
F      1       5           2

Table 23. Extensions of a-v pairs encoded as Boolean bit-vectors.

N  V   1 2 3 4 5 6 7 8 9 10 11 12 13
A  0   1 1 1 1 1 1 0 0 0 0  0  0  0
A  1   0 0 0 0 0 0 1 1 1 1  1  1  1
B  0   1 1 1 1 1 0 1 1 0 0  0  0  0
B  1   0 0 0 0 0 1 0 0 1 1  1  1  1
C  0   1 1 0 0 0 1 1 0 1 1  0  0  0
C  1   0 0 1 1 1 0 0 1 0 0  1  1  1
E  0   1 0 1 1 0 1 1 1 1 0  1  0  0
E  1   0 1 0 0 1 0 0 0 0 1  0  1  1
F  0   1 0 1 0 0 1 0 0 0 0  1  1  0
F  1   0 1 0 1 1 0 1 1 1 1  0  0  1

Based on the Rank column of the AVT table shown in Table 22, the a-v pairs are grouped into the following six groups, listed from higher to lower rank:

{(C, 1)}
{(B, 0), (E, 0), (F, 1)}
{(A, 1)}
{(A, 0)}
{(B, 1), (E, 1), (F, 0)}
{(C, 0)}

Candidate conjuncts are generated and tested by the GenerateAndTestConjuncts procedure based on the above list. The basic strategy used here is to generate the intra-group conjuncts first, followed by the inter-group conjuncts. The procedure proceeds sequentially from the highest ranked group downward. It stops when at least one rule is found. The heuristic employed here is to find rules with maximum coverage of instances in UPPER(di).
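The subset tests used in the walkthrough below can be stated very compactly once extensions are available as sets of sequence numbers. The following Python sketch derives the extensions and relevance values of Tables 22 and 23 from the rows of Table 20 and checks candidate conjuncts against the target set; it is a simplified set-based illustration, not the bit-vector, SQL-coupled implementation envisaged for LERS-M.

# Rows of Table 20: sequence number -> condition values for (A, B, C, E, F).
rows = {1: (0,0,0,0,0), 2: (0,0,0,1,1), 3: (0,0,1,0,0), 4: (0,0,1,0,1),
        5: (0,0,1,1,1), 6: (0,1,0,0,0), 7: (1,0,0,0,1), 8: (1,0,1,0,1),
        9: (1,1,0,0,1), 10: (1,1,0,1,1), 11: (1,1,1,0,0), 12: (1,1,1,1,0),
        13: (1,1,1,1,1)}
attrs = ("A", "B", "C", "E", "F")
target = {3, 4, 5, 7, 8, 11, 13}                      # UPPER(D1)

def extension(av_pairs):
    """Sequence numbers covered by a conjunct, i.e., the intersection of a-v pair extensions."""
    return {seq for seq, vals in rows.items()
            if all(vals[attrs.index(a)] == v for a, v in av_pairs)}

# Relevance of each a-v pair (size of the intersection with the target set), as in Table 22.
relevance = {(a, v): len(extension([(a, v)]) & target) for a in attrs for v in (0, 1)}

# Candidate conjuncts from the two highest ranked groups and their validity tests.
for conjunct in ([("C",1),("B",0)], [("C",1),("E",0)], [("C",1),("F",1)], [("B",0),("E",0),("F",1)]):
    ext = extension(conjunct)
    print(conjunct, sorted(ext), ext <= target)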


In our example, the first group contains only one a-v pair, (C, 1); therefore, there is no need to generate intra-group conjuncts. From Table 21, we can see that [{(C, 1)}] is not a subset of UPPER(D1). Thus, an inter-group join is needed. In addition, the second group {(B, 0), (E, 0), (F, 1)} is also included in the candidate list. This results in the following list of candidate conjuncts, which are listed with their corresponding extensions:

[{(C, 1), (B, 0)}] = {3, 4, 5, 8}
[{(C, 1), (E, 0)}] = {3, 4, 8, 11}
[{(C, 1), (F, 1)}] = {4, 5, 8, 13}
[{(B, 0), (E, 0), (F, 1)}] = {4, 7, 8}

Following the generating stage, a testing stage is performed to identify valid conjuncts. Because all the conjuncts are valid, i.e., their extensions are subsets of UPPER(di), four new rules are found in this iteration. The next step is to find minimal conjuncts by using the dropping-condition method. Consider the conjunction {(B, 0), (E, 0), (F, 1)}. Dropping the a-v pair (B, 0) from the group, we have [{(E, 0), (F, 1)}] = {4, 7, 8, 9}, which is not a subset of the TargetSet {3, 4, 5, 7, 8, 11, 13}. Next, we try to drop the a-v pair (E, 0) from the group; we have [{(B, 0), (F, 1)}] = {2, 4, 5, 7, 8}, which is not a subset of the TargetSet. Finally, we try to drop the a-v pair (F, 1) from the group; we have [{(B, 0), (E, 0)}] = {1, 3, 4, 7, 8}, which is not a subset of the TargetSet. We can conclude that the conjunction {(B, 0), (E, 0), (F, 1)} contains no redundant a-v pairs, and it is a minimal conjunct. Similarly, it can be verified that the conjuncts {(C, 1), (B, 0)}, {(C, 1), (E, 0)}, and {(C, 1), (F, 1)} are minimal. All minimal conjuncts found are added to the new rule set R. Thus, we have the extension [R] of the new rules as [R] = [{(C, 1), (B, 0)}] + [{(C, 1), (E, 0)}]

+ [{(C, 1), (F, 1)}] + [{(B, 0), (E, 0), (F, 1)}] = {3, 4, 5, 7, 8, 11, 13}. The target set is updated as follows: TargetSet = {3, 4, 5, 7, 8, 11, 13} – [R] = empty set. Therefore, we have found the rule set. The last step in procedure RULE_GEN is to remove redundant rules from the rule set. The basic idea is similar to finding minimal conjuncts. Here, we try to remove one rule at a time and test whether the remaining rules cover all examples of the target set. More specifically, we first try to remove the conjunct {(C, 1), (B, 0)} from the collection. Then, we have [R] = [{(C, 1), (E, 0)}] + [{(C, 1), (F, 1)}] + [{(B, 0), (E, 0), (F, 1)}]

= {3, 4, 5, 7, 8, 11, 13} = TargetSet. Therefore, the conjunct {(C, 1), (B, 0)} is redundant and is removed from the rule set. Next, we try to remove the conjunct {(C, 1), (E, 0)} from the rule set; we have


[R] = [{(C, 1), (F, 1)}] + [{(B, 0), (E, 0), (F, 1)}]

= {4, 5, 7, 8, 13} ≠ TargetSet. Therefore, the conjunct {(C, 1), (E, 0)} is not redundant, and it is kept in the rule set. Similarly, it can be verified that both conjuncts {(C, 1), (F, 1)} and {(B, 0), (E, 0), (F, 1)} are not redundant. The resulting rule set is shown in Table 24, where w1, w2, and w3 are the column sums extracted from the table UPPER(D1) of Table 21.

Table 24. Rules generated for UPPER(D1).

A     B     C     E     F     D   w1 w2 w3
null  null  1     0     null  1   5  3  2
null  null  1     null  1     1   4  3  1
null  0     null  0     1     1   3  1  2

The LERS-M algorithm tries to find only one minimal set of rules; it does not try to find all minimal sets of rules.

5.3 Discussion

There are several advantages to developing LERS-M using relational database technology. Relational database systems are highly optimized and scalable in dealing with large amounts of data. They are very portable, and they provide smooth integration with OLAP and data warehousing systems. However, one typical disadvantage of an SQL implementation is extra computational overhead. Experiments are needed to identify the impact of this overhead on the performance of LERS-M. When a database is very large, we can divide it into n smaller databases and run LERS-M on each of them. Similarly, this scheme can be applied to homogeneous distributed databases. To integrate the distributed answers provided by multiple LERS-M programs, we can take the sum over the numbers of occurrences (i.e., w1, w2, and w3 in the previous example) provided by the local LERS-M programs. When a single answer is desirable, the decision value Di with the maximum sum of wi can be returned, or the entire vector of numbers of occurrences can be returned as the answer. It is possible to develop other inference mechanisms that make use of the numbers of occurrences when performing the task of classification. Based on our discussion, there are two major parameters of LERS-M, namely, the grouping criterion and the generation of conjuncts. New criteria and heuristics based on numerical measures such as the Gini index and the entropy function may be used. In this paper, we have used the minimal length criterion for the generation of candidate conjuncts. The search strategy is not exhaustive, and it stops when at least one candidate conjunct is identified. There is room for developing more extensive and efficient strategies for generating candidate conjuncts. The proposed algorithm is under implementation on IBM's DB2 database system running on Redhat Linux, with a web-based interface implemented using Java servlets and JSP. Performance evaluation and comparison to systems based on classical rough set methods will need further work.
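As a small illustration of the integration scheme just described, the sketch below sums the occurrence vectors returned by several hypothetical local LERS-M runs for the rules that match a new instance and returns the decision value with the maximum total; the answer vectors used are placeholders, not results from an actual distributed run.

def merge_answers(occurrence_vectors, decisions=(1, 2, 3)):
    """Sum the [w1, w2, ...] vectors from local LERS-M answers and pick the best decision."""
    totals = [sum(vec[k] for vec in occurrence_vectors) for k in range(len(decisions))]
    best = max(range(len(decisions)), key=lambda k: totals[k])
    return decisions[best], totals

# Hypothetical occurrence vectors reported by three local sites for the matching rules.
local_answers = [[5, 3, 2], [4, 3, 1], [0, 2, 7]]
print(merge_answers(local_answers))   # -> (3, [9, 8, 10])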


6 Conclusions

In this paper we have formulated the concept of multiset decision tables based on the concept of information multisystems. The concept is then used to develop an algorithm, LERS-M, for learning rules from databases. Based on the concept of the partition of boundary sets, we have shown that it is straightforward to compute basic probability assignment functions of the Dempster-Shafer theory from multiset decision tables. A nice feature of multiset decision tables is that the sum over the numbers of occurrences of decision values can be used as a simple mechanism to integrate distributed answers. Developing LERS-M on top of relational database technology will make the system scalable and portable. Our next step is to evaluate the time and space complexities of LERS-M over very large data sets. It would be interesting to compare the SQL-based implementation to classical rough set methods for learning rules from very large data sets. In addition, we have considered only homogeneous data tables, which may be very large or distributed. Generalization to multiple heterogeneous tables needs further work.


Data with Missing Attribute Values: Generalization of Indiscernibility Relation and Rule Induction

Jerzy W. Grzymala-Busse

Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, KS 66045, USA
Institute of Computer Science, Polish Academy of Sciences, 01-237 Warsaw, Poland
[email protected]
http://lightning.eecs.ku.edu/index.html

Abstract. Data sets, described by decision tables, are incomplete when for some cases (examples, objects) the corresponding attribute values are missing, e.g., are lost or represent "do not care" conditions. This paper shows an extremely useful technique for working with incomplete decision tables, based on blocks of attribute-value pairs. Incomplete decision tables are described by characteristic relations in the same way that complete decision tables are described by indiscernibility relations. These characteristic relations are conveniently determined by blocks of attribute-value pairs. Three different kinds of lower and upper approximations for incomplete decision tables may be easily computed from characteristic relations. All three definitions reduce to the definition based on the indiscernibility relation when the decision table is complete. This paper shows how to induce certain and possible rules for incomplete decision tables using MLEM2, an outgrowth of the rule induction algorithm LEM2, again using blocks of attribute-value pairs. Additionally, MLEM2 may induce rules from incomplete decision tables with numerical attributes.

1 Introduction

We will assume that data sets are presented as decision tables. In such a table, columns are labeled by variables and rows by case names. In the simplest situation such case names, also called cases, are numbers. Variables are categorized as either independent, also called attributes, or dependent, called decisions. Usually only one decision is given in a decision table. The set of all cases that correspond to the same decision value is called a concept (or a class). In most articles on rough set theory it is assumed that for all variables and all cases the corresponding values are specified. For such tables the indiscernibility relation, one of the most fundamental ideas of rough set theory, describes cases that cannot be distinguished from one another. However, in many real-life applications, data sets have missing attribute values, or, in other words, the corresponding decision tables are incompletely specified.


For simplicity, incompletely specified decision tables will be called incomplete decision tables. In this paper we will assume that there are two reasons for decision tables to be incomplete. The first reason is that an attribute value, for a specific case, is lost. For example, originally the attribute value was known; however, due to a variety of reasons, currently the value is not recorded. Perhaps it was recorded but later erased. The second possibility is that an attribute value was not relevant – the case was decided to be a member of some concept, i.e., was classified, or diagnosed, in spite of the fact that some attribute values were not known. For example, it was feasible to diagnose a patient regardless of the fact that some test results were not taken (here attributes correspond to tests, so attribute values are test results). Since such missing attribute values do not matter for the final outcome, we will call them "do not care" conditions. The main objective of this paper is to study incomplete decision tables, i.e., incomplete data sets, or, in yet different words, data sets with missing attribute values. We will assume that in the same decision table some attribute values may be lost and some may be "do not care" conditions. The first paper dealing with such decision tables was [6]. For such incomplete decision tables there are two special cases: in the first case, all missing attribute values are lost; in the second case, all missing attribute values are "do not care" conditions. Incomplete decision tables in which all missing attribute values are lost were, from the viewpoint of rough set theory, studied for the first time in [8], where two algorithms for rule induction, modified to handle lost attribute values, were presented. This approach was studied later in [13–15], where the indiscernibility relation was generalized to describe such incomplete decision tables. On the other hand, incomplete decision tables in which all missing attribute values are "do not care" conditions were, from the viewpoint of rough set theory, studied for the first time in [3], where a method for rule induction was introduced in which each missing attribute value was replaced by all values from the domain of the attribute. Originally such values were replaced by all values from the entire domain of the attribute, and later by attribute values restricted to the same concept to which a case with a missing attribute value belongs. Such incomplete decision tables, with all missing attribute values being "do not care" conditions, were extensively studied in [9], [10], including extending the idea of the indiscernibility relation to describe such incomplete decision tables. In general, incomplete decision tables are described by characteristic relations, in a similar way as complete decision tables are described by indiscernibility relations [6]. In rough set theory, one of the basic notions is the idea of lower and upper approximations. For complete decision tables, once the indiscernibility relation is fixed and the concept (a set of cases) is given, the lower and upper approximations are unique. For incomplete decision tables, for a given characteristic relation and concept, there are three different possibilities to define lower and upper approximations,


called singleton, subset, and concept approximations [6]. Singleton lower and upper approximations were studied in [9], [10], [13–15]. Note that three similar definitions of lower and upper approximations, though not for incomplete decision tables, were studied in [16–18]. In this paper we further discuss applications of all three kinds of approximations to data mining: singleton, subset and concept. As was observed in [6], singleton lower and upper approximations are not applicable in data mining. The next topic of this paper is demonstrating how certain and possible rules may be computed from incomplete decision tables. An extension of the well-known LEM2 algorithm [1], [4], MLEM2, was introduced in [5]. Originally, MLEM2 induced certain rules from incomplete decision tables with missing attribute values interpreted as lost and with numerical attributes. Using the idea of lower and upper approximations for incomplete decision tables, MLEM2 was further extended to induce both certain and possible rules from a decision table with some missing attribute values being lost and some missing attribute values being "do not care" conditions, while some attributes may be numerical.

2 Blocks of Attribute-Value Pairs and Characteristic Relations

Let us reiterate that our basic assumption is that the input data sets are presented in the form of a decision table. An example of a decision table is shown in Table 1.

Table 1. A complete decision table

       Attributes                          Decision
Case   Temperature   Headache   Nausea    Flu
1      high          yes        no        yes
2      very high     yes        yes       yes
3      high          no         no        no
4      high          yes        yes       yes
5      high          yes        yes       no
6      normal        yes        no        no
7      normal        no         yes       no
8      normal        yes        no        yes

Rows of the decision table represent cases, while columns are labeled by variables. The set of all cases will be denoted by U. In Table 1, U = {1, 2, ..., 8}. Independent variables are called attributes and a dependent variable is called a decision and is denoted by d. The set of all attributes will be denoted by A. In Table 1, A = {Temperature, Headache, Nausea}. Any decision table defines a function ρ that maps the direct product of U and A into the set of all values. For example, in Table 1, ρ(1, Temperature) = high. The function ρ describing Table 1 is completely specified (total). A decision table with a completely specified function ρ will be called completely specified, or, for the sake of simplicity, complete.


Rough set theory [11], [12] is based on the idea of an indiscernibility relation, defined for complete decision tables. Let B be a nonempty subset of the set A of all attributes. The indiscernibility relation IND(B) is a relation on U defined for x, y ∈ U as follows:

(x, y) ∈ IND(B) if and only if ρ(x, a) = ρ(y, a) for all a ∈ B.

The indiscernibility relation IND(B) is an equivalence relation. Equivalence classes of IND(B) are called elementary sets of B and are denoted by [x]B. For example, for Table 1, the elementary sets of IND(A) are {1}, {2}, {3}, {4, 5}, {6, 8}, {7}. The indiscernibility relation IND(B) may be computed using the idea of blocks of attribute-value pairs. Let a be an attribute, i.e., a ∈ A, and let v be a value of a for some case. For complete decision tables, if t = (a, v) is an attribute-value pair then the block of t, denoted [t], is the set of all cases from U that have value v for attribute a. For Table 1,

[(Temperature, high)] = {1, 3, 4, 5},
[(Temperature, very high)] = {2},
[(Temperature, normal)] = {6, 7, 8},
[(Headache, yes)] = {1, 2, 4, 5, 6, 8},
[(Headache, no)] = {3, 7},
[(Nausea, no)] = {1, 3, 6},
[(Nausea, yes)] = {2, 4, 5, 7}.

The indiscernibility relation IND(B) is known when all elementary sets of IND(B) are known. Such elementary sets of B are intersections of the corresponding attribute-value pair blocks, i.e., for any case x ∈ U,

[x]B = ∩{[(a, v)] | a ∈ B, ρ(x, a) = v}.

We will illustrate how to compute the elementary sets of B for Table 1 and B = A:

[1]A = [(Temperature, high)] ∩ [(Headache, yes)] ∩ [(Nausea, no)] = {1},
[2]A = [(Temperature, very high)] ∩ [(Headache, yes)] ∩ [(Nausea, yes)] = {2},
[3]A = [(Temperature, high)] ∩ [(Headache, no)] ∩ [(Nausea, no)] = {3},
[4]A = [5]A = [(Temperature, high)] ∩ [(Headache, yes)] ∩ [(Nausea, yes)] = {4, 5},
[6]A = [8]A = [(Temperature, normal)] ∩ [(Headache, yes)] ∩ [(Nausea, no)] = {6, 8},
[7]A = [(Temperature, normal)] ∩ [(Headache, no)] ∩ [(Nausea, yes)] = {7}.

In practice, input data for data mining are frequently affected by missing attribute values. In other words, the corresponding function ρ is incompletely specified (partial). A decision table with an incompletely specified function ρ will be called incompletely specified, or incomplete. For the rest of the paper we will assume that all decision values are specified, i.e., they are not missing. Also, we will assume that all missing attribute values are denoted either by "?" or by "*": lost values will be denoted by "?", and "do not care" conditions will be denoted by "*".
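As a side illustration of the block and elementary-set computations just shown for the complete table (Table 1), the following small sketch reproduces them in Python; the data layout is a simplification introduced here, not part of the paper.

TABLE1 = {  # case -> (Temperature, Headache, Nausea)
    1: ("high", "yes", "no"),     2: ("very high", "yes", "yes"),
    3: ("high", "no", "no"),      4: ("high", "yes", "yes"),
    5: ("high", "yes", "yes"),    6: ("normal", "yes", "no"),
    7: ("normal", "no", "yes"),   8: ("normal", "yes", "no"),
}
ATTRS = ("Temperature", "Headache", "Nausea")

def blocks(table):
    # [(a, v)] = set of all cases that have value v for attribute a
    b = {}
    for case, row in table.items():
        for a, v in zip(ATTRS, row):
            b.setdefault((a, v), set()).add(case)
    return b

def elementary_set(case, table, b):
    # [x]_A = intersection of the blocks [(a, rho(x, a))] over all attributes a
    result = set(table)
    for a, v in zip(ATTRS, table[case]):
        result &= b[(a, v)]
    return result

b = blocks(TABLE1)
print(sorted(b[("Temperature", "high")]))      # [1, 3, 4, 5]
print(sorted(elementary_set(4, TABLE1, b)))    # [4, 5]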


Table 2. An incomplete decision table

       Attributes                          Decision
Case   Temperature   Headache   Nausea    Flu
1      high          ?          no        yes
2      very high     yes        yes       yes
3      ?             no         no        no
4      high          yes        yes       yes
5      high          ?          yes       no
6      normal        yes        no        no
7      normal        no         yes       no
8      *             yes        *         yes

Additionally, we will assume that for each case at least one attribute value is specified. Incomplete decision tables are described by characteristic relations instead of indiscernibility relations. Also, elementary sets are replaced by characteristic sets. An example of an incomplete table is presented in Table 2. For incomplete decision tables the definition of a block of an attribute-value pair must be modified. If for an attribute a there exists a case x such that ρ(x, a) = ?, i.e., the corresponding value is lost, then the case x is not included in the block [(a, v)] for any value v of attribute a. If for an attribute a there exists a case x such that the corresponding value is a "do not care" condition, i.e., ρ(x, a) = ∗, then the corresponding case x should be included in the blocks [(a, v)] for all values v of attribute a. This modification of the definition of a block of an attribute-value pair is consistent with the interpretation of missing attribute values, lost and "do not care". Thus, for Table 2,

[(Temperature, high)] = {1, 4, 5, 8},
[(Temperature, very high)] = {2, 8},
[(Temperature, normal)] = {6, 7, 8},
[(Headache, yes)] = {2, 4, 6, 8},
[(Headache, no)] = {3, 7},
[(Nausea, no)] = {1, 3, 6, 8},
[(Nausea, yes)] = {2, 4, 5, 7, 8}.

The characteristic set KB(x) is the intersection of the blocks of attribute-value pairs (a, v) for all attributes a from B for which ρ(x, a) is specified and ρ(x, a) = v. For Table 2 and B = A,

KA(1) = {1, 4, 5, 8} ∩ {1, 3, 6, 8} = {1, 8},
KA(2) = {2, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {2, 8},
KA(3) = {3, 7} ∩ {1, 3, 6, 8} = {3},
KA(4) = {1, 4, 5, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {4, 8},
KA(5) = {1, 4, 5, 8} ∩ {2, 4, 5, 7, 8} = {4, 5, 8},
KA(6) = {6, 7, 8} ∩ {2, 4, 6, 8} ∩ {1, 3, 6, 8} = {6, 8},
KA(7) = {6, 7, 8} ∩ {3, 7} ∩ {2, 4, 5, 7, 8} = {7}, and
KA(8) = {2, 4, 6, 8}.
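A sketch of the modified block and characteristic-set computation for the incomplete table (Table 2) is given below; "?" marks a lost value and "*" a "do not care" condition, and the data layout is again an illustrative simplification rather than the author's code.

TABLE2 = {  # case -> (Temperature, Headache, Nausea)
    1: ("high", "?", "no"),      2: ("very high", "yes", "yes"),
    3: ("?", "no", "no"),        4: ("high", "yes", "yes"),
    5: ("high", "?", "yes"),     6: ("normal", "yes", "no"),
    7: ("normal", "no", "yes"),  8: ("*", "yes", "*"),
}
ATTRS = ("Temperature", "Headache", "Nausea")
DOMAINS = {a: {row[i] for row in TABLE2.values() if row[i] not in ("?", "*")}
           for i, a in enumerate(ATTRS)}

def incomplete_blocks(table):
    b = {(a, v): set() for a in ATTRS for v in DOMAINS[a]}
    for case, row in table.items():
        for a, v in zip(ATTRS, row):
            if v == "?":
                continue                      # lost value: the case enters no block of a
            elif v == "*":
                for w in DOMAINS[a]:          # "do not care": the case enters every block of a
                    b[(a, w)].add(case)
            else:
                b[(a, v)].add(case)
    return b

def characteristic_set(case, table, b):
    # K_A(x): intersect [(a, v)] only over attributes a with a specified value v
    k = set(table)
    for a, v in zip(ATTRS, table[case]):
        if v not in ("?", "*"):
            k &= b[(a, v)]
    return k

b = incomplete_blocks(TABLE2)
print(sorted(characteristic_set(1, TABLE2, b)))   # [1, 8]
print(sorted(characteristic_set(8, TABLE2, b)))   # [2, 4, 6, 8]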


The characteristic set KB(x) may be interpreted as the smallest set of cases that are indistinguishable from x using all attributes from B, under a given interpretation of missing attribute values. Thus, KA(x) is the set of all cases that cannot be distinguished from x using all attributes. The characteristic relation R(B) is a relation on U defined for x, y ∈ U as follows:

(x, y) ∈ R(B) if and only if y ∈ KB(x).

The characteristic relation R(B) is reflexive but – in general – does not need to be symmetric or transitive. Also, the characteristic relation R(B) is known if we know the characteristic sets KB(x) for all x ∈ U. In our example, R(A) = {(1, 1), (1, 8), (2, 2), (2, 8), (3, 3), (4, 4), (4, 8), (5, 4), (5, 5), (5, 8), (6, 6), (6, 8), (7, 7), (8, 2), (8, 4), (8, 6), (8, 8)}. The most convenient way is to define the characteristic relation through the characteristic sets. Nevertheless, the characteristic relation R(B) may be defined independently of characteristic sets in the following way:

(x, y) ∈ R(B) if and only if ρ(x, a) = ρ(y, a) or ρ(x, a) = ∗ or ρ(y, a) = ∗, for all a ∈ B such that ρ(x, a) ≠ ?.

For decision tables in which all missing attribute values are lost, a special characteristic relation was defined by J. Stefanowski and A. Tsoukias in [14], see also, e.g., [13], [15]. In this paper that characteristic relation will be denoted by LV(B), where B is a nonempty subset of the set A of all attributes. For x, y ∈ U the characteristic relation LV(B) is defined as follows:

(x, y) ∈ LV(B) if and only if ρ(x, a) = ρ(y, a) for all a ∈ B such that ρ(x, a) ≠ ?.

For any decision table in which all missing attribute values are lost, the characteristic relation LV(B) is reflexive, but – in general – does not need to be symmetric or transitive. For decision tables where all missing attribute values are "do not care" conditions, a special characteristic relation, in this paper denoted by DCC(B), was defined by M. Kryszkiewicz in [9], see also, e.g., [10]. For x, y ∈ U, the characteristic relation DCC(B) is defined as follows:

(x, y) ∈ DCC(B) if and only if ρ(x, a) = ρ(y, a) or ρ(x, a) = ∗ or ρ(y, a) = ∗, for all a ∈ B.

Relation DCC(B) is reflexive and symmetric but – in general – not transitive. Obviously, the characteristic relations LV(B) and DCC(B) are special cases of the characteristic relation R(B). For a completely specified decision table, the characteristic relation R(B) is reduced to IND(B).
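The two-stage construction (characteristic sets first, then the relation) can be checked with a few lines of Python; the characteristic sets below are the KA(x) values computed above, and the snippet is only an illustration.

K_A = {1: {1, 8}, 2: {2, 8}, 3: {3}, 4: {4, 8}, 5: {4, 5, 8},
       6: {6, 8}, 7: {7}, 8: {2, 4, 6, 8}}

# R(A) = {(x, y) | y in K_A(x)}
R = {(x, y) for x, K in K_A.items() for y in K}

print(all((x, x) in R for x in K_A))    # True: R(A) is reflexive
print((1, 8) in R, (8, 1) in R)         # True False: R(A) need not be symmetric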

3 Lower and Upper Approximations

For completely specified decision tables, lower and upper approximations are defined on the basis of the indiscernibility relation. Any finite union of elementary sets associated with B will be called a B-definable set. Let X be any subset of the set U of all cases. The set X is called a concept and is usually defined as the set of all cases defined by a specific value of the decision. In general, X is not a B-definable set. However, the set X may be approximated by two B-definable sets; the first one is called a B-lower approximation of X, denoted by BX and defined as follows:

{x ∈ U | [x]B ⊆ X}.

The second set is called a B-upper approximation of X, denoted by BX and defined as follows:

{x ∈ U | [x]B ∩ X ≠ ∅}.

The above way of computing lower and upper approximations, by constructing these approximations from singletons x, will be called the first method. The B-lower approximation of X is the greatest B-definable set contained in X. The B-upper approximation of X is the smallest B-definable set containing X. As was observed in [12], for complete decision tables we may use a second method to define the B-lower approximation of X, by the following formula:

∪{[x]B | x ∈ U, [x]B ⊆ X},

and the B-upper approximation of X may be defined, using the second method, by

∪{[x]B | x ∈ U, [x]B ∩ X ≠ ∅}.

For incompletely specified decision tables, lower and upper approximations may be defined in a few different ways. First, the definition of definability should be modified. Any finite union of characteristic sets of B is called a B-definable set. In this paper we suggest three different definitions of lower and upper approximations. Again, let X be a concept, let B be a subset of the set A of all attributes, and let R(B) be the characteristic relation of the incomplete decision table with characteristic sets KB(x), where x ∈ U. Our first definition uses a similar idea as in the previous articles on incompletely specified decision tables [9], [10], [13], [14], [15], i.e., lower and upper approximations are sets of singletons from the universe U satisfying some properties. Thus, lower and upper approximations are defined by analogy with the first method above, by constructing both sets from singletons. We will call these approximations singleton. A singleton B-lower approximation of X is defined as follows:

BX = {x ∈ U | KB(x) ⊆ X}.

A singleton B-upper approximation of X is

BX = {x ∈ U | KB(x) ∩ X ≠ ∅}.


In our example of the decision table presented in Table 2, let us say that B = A. Then the singleton A-lower and A-upper approximations of the two concepts {1, 2, 4, 8} and {3, 5, 6, 7} are:

A{1, 2, 4, 8} = {1, 2, 4},
A{3, 5, 6, 7} = {3, 7},
A{1, 2, 4, 8} = {1, 2, 4, 5, 6, 8},
A{3, 5, 6, 7} = {3, 5, 6, 7, 8}.

The second method of defining lower and upper approximations for complete decision tables uses another idea: lower and upper approximations are unions of elementary sets, subsets of U. Therefore, we may define lower and upper approximations for incomplete decision tables by analogy with the second method, using characteristic sets instead of elementary sets. There are two ways to do this. Using the first way, a subset B-lower approximation of X is defined as follows:

BX = ∪{KB(x) | x ∈ U, KB(x) ⊆ X}.

A subset B-upper approximation of X is

BX = ∪{KB(x) | x ∈ U, KB(x) ∩ X ≠ ∅}.

Since any characteristic relation R(B) is reflexive, for any concept X the singleton B-lower and B-upper approximations of X are subsets of the subset B-lower and B-upper approximations of X, respectively. For the same decision table, presented in Table 2, the subset A-lower and A-upper approximations are

A{1, 2, 4, 8} = {1, 2, 4, 8},
A{3, 5, 6, 7} = {3, 7},
A{1, 2, 4, 8} = {1, 2, 4, 5, 6, 8},
A{3, 5, 6, 7} = {2, 3, 4, 5, 6, 7, 8}.

The second possibility is to modify the subset definition of lower and upper approximations by replacing the universe U from the subset definition by the concept X. A concept B-lower approximation of the concept X is defined as follows:

BX = ∪{KB(x) | x ∈ X, KB(x) ⊆ X}.

Obviously, the subset B-lower approximation of X is the same set as the concept B-lower approximation of X. A concept B-upper approximation of the concept X is defined as follows:

BX = ∪{KB(x) | x ∈ X, KB(x) ∩ X ≠ ∅} = ∪{KB(x) | x ∈ X}.

The concept B-upper approximation of X is a subset of the subset B-upper approximation of X. Besides, the concept B-upper approximations are truly the


smallest B-deﬁnable sets containing X. For the decision table presented in Table 2, the concept A-lower and A-upper approximations are A{1, 2, 4, 8} = {1, 2, 4, 8}, A{3, 5, 6, 7} = {3, 7}, A{1, 2, 4, 8} = {1, 2, 4, 6, 8}, A{3, 5, 6, 7} = {3, 4, 5, 6, 7, 8}. Note that for complete decision tables, all three deﬁnitions of lower approximations, singleton, subset and concept, coalesce to the same deﬁnition. Also, for complete decision tables, all three deﬁnitions of upper approximations coalesce to the same deﬁnition. This is not true for incomplete decision tables, as our example shows.
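The three definitions can be compared directly on the example of Table 2; the sketch below is only an illustration (not the author's code) and recomputes the singleton, subset and concept approximations from the characteristic sets KA(x).

U = set(range(1, 9))
K_A = {1: {1, 8}, 2: {2, 8}, 3: {3}, 4: {4, 8}, 5: {4, 5, 8},
       6: {6, 8}, 7: {7}, 8: {2, 4, 6, 8}}

def singleton(X):
    return ({x for x in U if K_A[x] <= X},
            {x for x in U if K_A[x] & X})

def subset(X):
    return (set().union(set(), *(K_A[x] for x in U if K_A[x] <= X)),
            set().union(set(), *(K_A[x] for x in U if K_A[x] & X)))

def concept(X):
    return (set().union(set(), *(K_A[x] for x in X if K_A[x] <= X)),
            set().union(set(), *(K_A[x] for x in X)))

X = {1, 2, 4, 8}                      # the concept Flu = yes
print(singleton(X))                   # ({1, 2, 4}, {1, 2, 4, 5, 6, 8})
print(subset(X))                      # ({1, 2, 4, 8}, {1, 2, 4, 5, 6, 8})
print(concept(X))                     # ({1, 2, 4, 8}, {1, 2, 4, 6, 8})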

4 Rule Induction

In the first step of processing the input data file, the data mining system LERS (Learning from Examples based on Rough Sets) checks whether the input data file is consistent (i.e., whether the file does not contain conflicting examples). Table 1 is inconsistent because the fourth and the fifth examples are conflicting. For these examples, the values of all three attributes are the same (high, yes, yes), but the decision values are different: yes for the fourth example and no for the fifth example. If the input data file is inconsistent, LERS computes lower and upper approximations of all concepts. Rules induced from the lower approximation of the concept certainly describe the concept, so they are called certain. On the other hand, rules induced from the upper approximation of the concept describe the concept only possibly (or plausibly), so they are called possible [2]. The same idea of blocks of attribute-value pairs is used in the rule induction algorithm LEM2 (Learning from Examples Module, version 2), a component of LERS. LEM2 learns a discriminant description, i.e., the smallest set of minimal rules describing the concept. The LEM2 option of LERS is most frequently used since – in most cases – it gives the best results. LEM2 explores the search space of attribute-value pairs. Its input data file is a lower or upper approximation of a concept, so its input data file is always consistent. In general, LEM2 computes a local covering and then converts it into a rule set. We will quote a few definitions to describe the LEM2 algorithm. Let B be a nonempty lower or upper approximation of a concept represented by a decision-value pair (d, w). Set B depends on a set T of attribute-value pairs t = (a, v) if and only if

∅ ≠ [T] = ∩_{t∈T} [t] ⊆ B.

Set T is a minimal complex of B if and only if B depends on T and no proper subset T′ of T exists such that B depends on T′. Let 𝒯 be a nonempty collection


of nonempty sets of attribute-value pairs. Then 𝒯 is a local covering of B if and only if the following conditions are satisfied:
(1) each member T of 𝒯 is a minimal complex of B,
(2) ∪_{T∈𝒯} [T] = B, and
(3) 𝒯 is minimal, i.e., 𝒯 has the smallest possible number of members.

The procedure LEM2 is presented below.

Procedure LEM2
(input: a set B, output: a single local covering 𝒯 of set B);
begin
  G := B;
  𝒯 := ∅;
  while G ≠ ∅
    begin
      T := ∅;
      T(G) := {t | [t] ∩ G ≠ ∅};
      while T = ∅ or [T] ⊄ B
        begin
          select a pair t ∈ T(G) such that |[t] ∩ G| is maximum;
          if a tie occurs, select a pair t ∈ T(G) with the smallest cardinality of [t];
          if another tie occurs, select the first pair;
          T := T ∪ {t};
          G := [t] ∩ G;
          T(G) := {t | [t] ∩ G ≠ ∅};
          T(G) := T(G) − T;
        end {while}
      for each t ∈ T do
        if [T − {t}] ⊆ B then T := T − {t};
      𝒯 := 𝒯 ∪ {T};
      G := B − ∪_{T∈𝒯} [T];
    end {while};
  for each T ∈ 𝒯 do
    if ∪_{S∈𝒯−{T}} [S] = B then 𝒯 := 𝒯 − {T};
end {procedure}.

MLEM2 is a modified version of the algorithm LEM2. The original algorithm LEM2 needs discretization, a preprocessing step, to deal with numerical attributes. The MLEM2 algorithm can induce rules from incomplete decision tables with numerical attributes. Its previous version induced certain rules from incomplete decision tables with missing attribute values interpreted as lost and with numerical attributes. Recently, MLEM2 was further extended to induce both certain and possible rules from a decision table with some missing attribute values being lost and some missing attribute values being "do not care" conditions, while


some attributes may be numerical. Rule induction from decision tables with numerical attributes will be described in the next section. In this section we will describe a new way in which MLEM2 handles incomplete decision tables. Since all characteristic sets KB(x), where x ∈ U, are intersections of attribute-value pair blocks for attributes from B, and the subset and concept B-lower and B-upper approximations are unions of sets of the type KB(x), it is most natural to use an algorithm based on blocks of attribute-value pairs, such as LEM2 [1], [4], for rule induction. First of all, let us examine the usefulness for rule induction of the three different definitions of lower and upper approximations: singleton, subset and concept. The first observation is that singleton lower and upper approximations should not be used for rule induction. Let us explain that on the basis of our example of the decision table from Table 2. The singleton A-lower approximation of the concept {1, 2, 4, 8} is the set {1, 2, 4}. Our expectation is that we should be able to describe the set {1, 2, 4} using the given interpretation of missing attribute values, while in the rules we are allowed to use conditions that are attribute-value pairs. However, this is impossible because, as follows from the list of all sets KA(x), there is no way to describe case 1 without describing at the same time case 8, but {1, 8} ⊄ {1, 2, 4}. Similarly, there is no way to describe the singleton A-upper approximation of the concept {3, 5, 6, 7}, i.e., the set {3, 5, 6, 7, 8}, since there is no way to describe case 5 without describing, at the same time, cases 4 and 8; however, {4, 5, 8} ⊄ {3, 5, 6, 7, 8}. On the other hand, both subset and concept A-lower and A-upper approximations are unions of characteristic sets of the type KA(x); therefore, it is always possible to induce certain rules from subset and concept A-lower approximations and possible rules from concept and subset A-upper approximations. Subset A-lower approximations are identical with concept A-lower approximations, so it does not matter which approximations we use. Since concept A-upper approximations are subsets of the corresponding subset A-upper approximations, it is more feasible to use concept A-upper approximations, since they are closer to the concept X, and rules will more precisely describe the concept X. Moreover, it better fits the idea that the upper approximation should be the smallest set containing the concept. Therefore, we will use only concept lower and upper approximations for rule induction. In order to induce certain rules for our example of the decision table presented in Table 2, we have to compute concept A-lower approximations for both concepts, {1, 2, 4, 8} and {3, 5, 6, 7}. The concept lower approximation of {1, 2, 4, 8} is the same set {1, 2, 4, 8}, so this is the set we are going to pass to the procedure LEM2 as the set B. Initially G = B. The set T(G) is the following set: {(Temperature, high), (Temperature, very high), (Temperature, normal), (Headache, yes), (Nausea, no), (Nausea, yes)}. For three attribute-value pairs from T(G), namely (Temperature, high), (Headache, yes) and (Nausea, yes), the cardinality |[(attribute, value)] ∩ G|


is maximum. The second criterion, the smallest cardinality of [(attribute, value)], indicates (Temperature, high) and (Headache, yes) (in both cases that cardinality is equal to four). The last criterion, "first pair", selects (Temperature, high). Thus T = {(Temperature, high)}, G = {1, 4, 8}, and the new T(G) is equal to {(Temperature, very high), (Temperature, normal), (Headache, yes), (Nausea, no), (Nausea, yes)}. Since [(Temperature, high)] ⊄ B, we have to perform the next iteration of the inner WHILE loop. This time (Headache, yes) will be selected, the new T = {(Temperature, high), (Headache, yes)}, and the new G is equal to {4, 8}. Since [T] = [(Temperature, high)] ∩ [(Headache, yes)] = {4, 8} ⊆ B, the first minimal complex is computed. It is not difficult to see that we cannot drop either of these two attribute-value pairs, so 𝒯 = {T}, and the new G is equal to B − {4, 8} = {1, 2}. During the second iteration of the outer WHILE loop, the next minimal complex T is identified as {(Temperature, very high)}, so 𝒯 = {{(Temperature, high), (Headache, yes)}, {(Temperature, very high)}} and G = {1}. We need one additional iteration of the outer WHILE loop; the next minimal complex T is computed as {(Temperature, high), (Nausea, no)}, and 𝒯 = {{(Temperature, high), (Headache, yes)}, {(Temperature, very high)}, {(Temperature, high), (Nausea, no)}} becomes the first local covering, since we cannot drop any of the minimal complexes from 𝒯. The set of certain rules, corresponding to 𝒯 and describing the concept {1, 2, 4, 8}, is

(Temperature, high) & (Headache, yes) -> (Flu, yes),
(Temperature, very high) -> (Flu, yes),
(Temperature, high) & (Nausea, no) -> (Flu, yes).

The remaining rule sets, the certain rules for the second concept {3, 5, 6, 7} and both sets of possible rules, are computed in a similar manner. Eventually, the rules in the LERS format (every rule is equipped with three numbers: the total number of attribute-value pairs on the left-hand side of the rule, the total number of examples correctly classified by the rule during training, and the total number of training cases matching the left-hand side of the rule) are:

certain rule set:
2, 2, 2
(Temperature, high) & (Headache, yes) -> (Flu, yes)
1, 2, 2
(Temperature, very high) -> (Flu, yes)
2, 2, 2
(Temperature, high) & (Nausea, no) -> (Flu, yes)
1, 2, 2
(Headache, no) -> (Flu, no)

and possible rule set:


1, 3, 4
(Headache, yes) -> (Flu, yes)
2, 2, 2
(Temperature, high) & (Nausea, no) -> (Flu, yes)
2, 1, 3
(Nausea, yes) & (Temperature, high) -> (Flu, no)
1, 2, 2
(Headache, no) -> (Flu, no)
1, 2, 3
(Temperature, normal) -> (Flu, no)
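A compact Python transcription of the LEM2 procedure given earlier is sketched below; it is an illustrative reimplementation (not the LERS code), with blocks passed as a dictionary from attribute-value pairs to sets of cases. Applied to the concept A-lower approximation {1, 2, 4, 8} of Table 2 it reproduces the local covering derived above.

def lem2(B, blocks):
    # B: a lower or upper approximation of a concept (a set of cases)
    # blocks: dict (attribute, value) -> set of cases
    covering = []
    G = set(B)
    while G:
        T = []
        covered = set()
        while not T or not covered <= B:
            # |[t] ∩ G| maximal; ties broken by the smallest |[t]|, then by the first pair
            TG = [t for t in blocks if blocks[t] & G and t not in T]
            t = max(TG, key=lambda s: (len(blocks[s] & G), -len(blocks[s])))
            T.append(t)
            covered = set.intersection(*(blocks[s] for s in T))
            G = blocks[t] & G
        for t in list(T):                         # drop redundant conditions
            rest = [s for s in T if s != t]
            if rest and set.intersection(*(blocks[s] for s in rest)) <= B:
                T = rest
        covering.append(T)
        G = B - set().union(*(set.intersection(*(blocks[s] for s in C)) for C in covering))
    for C in list(covering):                      # drop redundant minimal complexes
        rest = [D for D in covering if D is not C]
        if rest and set().union(*(set.intersection(*(blocks[s] for s in D)) for D in rest)) == B:
            covering.remove(C)
    return covering

BLOCKS_T2 = {
    ("Temperature", "high"): {1, 4, 5, 8}, ("Temperature", "very high"): {2, 8},
    ("Temperature", "normal"): {6, 7, 8},  ("Headache", "yes"): {2, 4, 6, 8},
    ("Headache", "no"): {3, 7},            ("Nausea", "no"): {1, 3, 6, 8},
    ("Nausea", "yes"): {2, 4, 5, 7, 8},
}
print(lem2({1, 2, 4, 8}, BLOCKS_T2))
# [[('Temperature', 'high'), ('Headache', 'yes')],
#  [('Temperature', 'very high')],
#  [('Temperature', 'high'), ('Nausea', 'no')]]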

5 Other Approaches to Missing Attribute Values

So far we have used two approaches to missing attribute values: in the first one a missing attribute value was interpreted as lost, in the second as a "do not care" condition. There are many other possible approaches to missing attribute values; for some discussion on this topic see [7]. Our belief is that for any possible interpretation of a missing attribute value, blocks of attribute-value pairs may be re-defined, a new characteristic relation may be computed, the corresponding lower and upper approximations computed as well, and eventually the corresponding certain and possible rules induced. As an example we may consider another interpretation of "do not care" conditions. So far, in computing the block for an attribute-value pair (a, v) we added all cases with value "*" to such a block [(a, v)]. Following [7], we may consider another interpretation of "do not care" conditions: if for an attribute a there exists a case x such that the corresponding value is a "do not care" condition, i.e., ρ(x, a) = ∗, then the corresponding case x should be included in the blocks [(a, v)] for all values v of attribute a with the same decision value as for x (i.e., we will add x only to members of the same concept to which x belongs). With this new interpretation of "*"s, the blocks of attribute-value pairs for Table 2 are: [(Temperature, high)] = {1, 4, 5, 8}, [(Temperature, very high)] = {2, 8}, [(Temperature, normal)] = {6, 7}, [(Headache, yes)] = {2, 4, 6, 8}, [(Headache, no)] = {3, 7}, [(Nausea, no)] = {1, 3, 6}, [(Nausea, yes)] = {2, 4, 5, 7, 8}. The characteristic sets KB(x) for Table 2, the new interpretation of "*"s, and B = A are: KA(1) = {1, 4, 5, 8} ∩ {1, 3, 6} = {1, 8}, KA(2) = {2, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {2, 8}, KA(3) = {3, 7} ∩ {1, 3, 6} = {3}, KA(4) = {1, 4, 5, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {4, 8},


KA (5) = {1, 4, 5, 8} ∩ {2, 4, 5, 7, 8} = {4, 5, 8}, KA (6) = {6, 7} ∩ {2, 4, 6, 8} ∩ {1, 3, 6} = {6}, KA (7) = {6, 7} ∩ {3, 7} ∩ {2, 4, 5, 7, 8} = {7}, and KA (8) = {2, 4, 6, 8}. The characteristic relation R(B) is {(1, 1), (1, 8), (2, 2), (2, 8), (3, 3), (4, 4), (4, 8), (5, 4), (5, 5), (5, 8), (6, 6), (7, 7), (8, 2), (8, 4), (8, 6), (8, 8)}. Then we may deﬁne lower and upper approximations and induce rules using a similar technique as in the previous section.

6 Incomplete Decision Tables with Numerical Attributes

An example of an incomplete decision table with a numerical attribute is presented in Table 3.

Table 3. An incomplete decision table with a numerical attribute

       Attributes                          Decision
Case   Temperature   Headache   Nausea    Flu
1      98            ?          no        yes
2      101           yes        yes       yes
3      ?             no         no        no
4      99            yes        yes       yes
5      99            ?          yes       no
6      96            yes        no        no
7      96            no         yes       no
8      *             yes        *         yes

Numerical attributes should be treated in a slightly different way than symbolic attributes. First, for computing characteristic sets, numerical attributes should be considered as symbolic. For example, for Table 3 the blocks of the numerical attribute Temperature are:

[(Temperature, 96)] = {6, 7, 8},
[(Temperature, 98)] = {1, 8},
[(Temperature, 99)] = {4, 5, 8},
[(Temperature, 101)] = {2, 8}.

Remaining blocks of attribute-value pairs, for attributes Headache and Nausea, are the same as for Table 2. The characteristic sets KB (x) for Table 3 and B = A are: KA (1) = {1, 8} ∩ {1, 3, 6, 8} = {1, 8}, KA (2) = {2, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {2, 8}, KA (3) = {3, 7} ∩ {1, 3, 6, 8} = {3}, KA (4) = {4, 5, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {4, 8}, KA (5) = {4, 5, 8} ∩ {2, 4, 5, 7, 8} = {4, 5, 8},


KA(6) = {6, 7, 8} ∩ {2, 4, 6, 8} ∩ {1, 3, 6, 8} = {6, 8}, KA(7) = {6, 7, 8} ∩ {3, 7} ∩ {2, 4, 5, 7, 8} = {7}, and KA(8) = {2, 4, 6, 8}. The characteristic relation R(B) is {(1, 1), (1, 8), (2, 2), (2, 8), (3, 3), (4, 4), (4, 8), (5, 4), (5, 5), (5, 8), (6, 6), (6, 8), (7, 7), (8, 2), (8, 4), (8, 6), (8, 8)}. For the decision table presented in Table 3, the concept A-lower and A-upper approximations are A{1, 2, 4, 8} = {1, 2, 4, 8}, A{3, 5, 6, 7} = {3, 7}, A{1, 2, 4, 8} = {1, 2, 4, 6, 8}, A{3, 5, 6, 7} = {3, 4, 5, 6, 7, 8}. For inducing rules, blocks of attribute-value pairs are defined differently than in computing characteristic sets. MLEM2 has the ability to recognize integer and real numbers as values of attributes, and it labels such attributes as numerical. For numerical attributes MLEM2 computes blocks in a different way than for symbolic attributes. First, it sorts all values of a numerical attribute, ignoring missing attribute values. Then it computes cutpoints as averages of any two consecutive values of the sorted list. For each cutpoint c, MLEM2 creates two blocks: the first block contains all cases for which the values of the numerical attribute are smaller than c, and the second block contains the remaining cases, i.e., all cases for which the values of the numerical attribute are larger than c. The search space of MLEM2 is the set of all blocks computed this way, together with the blocks defined by symbolic attributes. Starting from that point, rule induction in MLEM2 is conducted in the same way as in LEM2. Note that if in a rule there are two attribute-value pairs with overlapping intervals, a new condition is computed as the intersection of both intervals. Thus, the corresponding blocks for Temperature are:

[(Temperature, 96..97)] = {6, 7, 8},
[(Temperature, 97..101)] = {1, 2, 4, 5, 8},
[(Temperature, 96..98.5)] = {1, 6, 7, 8},
[(Temperature, 98.5..101)] = {2, 4, 5, 8},
[(Temperature, 96..100)] = {1, 4, 5, 6, 7, 8},
[(Temperature, 100..101)] = {2, 8}.
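The cutpoint construction just described can be sketched in a few lines; the handling of "?" and "*" mirrors the symbolic rule (lost values enter no block, "do not care" values enter every block), and the function name and data layout are illustrative assumptions only.

def numeric_blocks(attr, values):
    # values: dict case -> number, or "?" (lost) / "*" (do not care)
    specified = {c: v for c, v in values.items() if v not in ("?", "*")}
    dont_care = {c for c, v in values.items() if v == "*"}
    distinct = sorted(set(specified.values()))
    lo, hi = distinct[0], distinct[-1]
    cutpoints = [(a + b) / 2 for a, b in zip(distinct, distinct[1:])]
    blocks = {}
    for c in cutpoints:
        blocks[(attr, f"{lo:g}..{c:g}")] = {x for x, v in specified.items() if v <= c} | dont_care
        blocks[(attr, f"{c:g}..{hi:g}")] = {x for x, v in specified.items() if v > c} | dont_care
    return blocks

temperature = {1: 98, 2: 101, 3: "?", 4: 99, 5: 99, 6: 96, 7: 96, 8: "*"}
for key, blk in numeric_blocks("Temperature", temperature).items():
    print(key, sorted(blk))
# ('Temperature', '96..97') [6, 7, 8]
# ('Temperature', '97..101') [1, 2, 4, 5, 8]
# ('Temperature', '96..98.5') [1, 6, 7, 8]  ... and so on, matching the blocks above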

Remaining blocks of attribute-value pairs, for attributes Headache and Nausea, are the same as for Table 2. Using the MLEM2 algorithm, the following rules are induced from the concept approximations:

certain rule set:
2, 3, 3
(Temperature, 98.5..101) & (Headache, yes) -> (Flu, yes)
1, 2, 2
(Temperature, 97..98.5) -> (Flu, yes)
1, 2, 2
(Headache, no) -> (Flu, no)


possible rule set:
1, 3, 4
(Headache, yes) -> (Flu, yes)
2, 2, 3
(Temperature, 96..98.5) & (Nausea, no) -> (Flu, yes)
2, 2, 4
(Temperature, 96..100) & (Nausea, yes) -> (Flu, no)
1, 2, 3
(Temperature, 96..97) -> (Flu, no)
1, 2, 2
(Headache, no) -> (Flu, no)
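The three numbers attached to each rule in the LERS format can be recomputed from the blocks and the concept; the following few lines are an illustrative sketch, not the LERS implementation.

def lers_numbers(conditions, blocks, concept):
    # (number of conditions, cases correctly classified, cases matching the left-hand side)
    matched = set.intersection(*(blocks[t] for t in conditions))
    return len(conditions), len(matched & concept), len(matched)

blocks_t3 = {("Headache", "no"): {3, 7}, ("Temperature", "96..97"): {6, 7, 8}}
flu_no = {3, 5, 6, 7}
print(lers_numbers([("Headache", "no")], blocks_t3, flu_no))          # (1, 2, 2)
print(lers_numbers([("Temperature", "96..97")], blocks_t3, flu_no))   # (1, 2, 3)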

7 Conclusions

It was shown in the paper that the idea of attribute-value pair blocks is an extremely useful tool. That idea may be used for computing characteristic relations for incomplete decision tables; in turn, characteristic sets are used for determining lower and upper approximations. Furthermore, the same idea of

Fig. 1. Using attribute-value pair blocks for rule induction from incomplete decision tables (flowchart: computing attribute-value pair blocks → computing characteristic sets → computing characteristic relations → computing lower and upper approximations → computing additional blocks of attribute-value pairs for numerical attributes → inducing certain and possible rules)


attribute-value pair blocks may be used for rule induction, for example, using the MLEM2 algorithm. The process is depicted in Figure 1. Note that it is much more convenient to deﬁne the characteristic relations through the two-stage process of determining blocks of attribute-value pairs and then computing characteristic sets than to deﬁne characteristic relations, for every interpretation of missing attribute values, separately. For completely speciﬁed decision tables any characteristic relation is reduced to an indiscernibility relation. Also, it is shown that the most useful way of deﬁning lower and upper approximations for incomplete decision tables is a new idea of concept lower and upper approximations. Two new ways to deﬁne lower and upper approximations for incomplete decision tables, called subset and concept, and the third way, deﬁned previously in a number of papers [9], [10], [13], [14], [15] and called here singleton lower and upper approximations, are all reduced to respective well-known deﬁnitions of lower and upper approximations for complete decision tables.

References

1. Chan, C.C. and Grzymala-Busse, J.W.: On the attribute redundancy and the learning programs ID3, PRISM, and LEM2. Department of Computer Science, University of Kansas, TR-91-14, December 1991, 20 pp.
2. Grzymala-Busse, J.W.: Knowledge acquisition under uncertainty – A rough set approach. Journal of Intelligent & Robotic Systems 1 (1988), 3–16.
3. Grzymala-Busse, J.W.: On the unknown attribute values in learning from examples. Proc. of ISMIS-91, 6th International Symposium on Methodologies for Intelligent Systems, Charlotte, North Carolina, October 16–19, 1991. Lecture Notes in Artificial Intelligence, vol. 542, Springer-Verlag, Berlin, Heidelberg, New York (1991) 368–377.
4. Grzymala-Busse, J.W.: LERS – A system for learning from examples based on rough sets. In Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets Theory, ed. by R. Slowinski, Kluwer Academic Publishers, Dordrecht, Boston, London (1992) 3–18.
5. Grzymala-Busse, J.W.: MLEM2: A new algorithm for rule induction from imperfect data. Proceedings of the 9th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2002, July 1–5, Annecy, France, 243–250.
6. Grzymala-Busse, J.W.: Rough set strategies to data with missing attribute values. Workshop Notes, Foundations and New Directions of Data Mining, the 3rd International Conference on Data Mining, Melbourne, FL, USA, November 19–22, 2003, 56–63.
7. Grzymala-Busse, J.W. and Hu, M.: A comparison of several approaches to missing attribute values in data mining. Proceedings of the Second International Conference on Rough Sets and Current Trends in Computing RSCTC'2000, Banff, Canada, October 16–19, 2000, 340–347.
8. Grzymala-Busse, J.W. and Wang, A.Y.: Modified algorithms LEM1 and LEM2 for rule induction from data with missing attribute values. Proc. of the Fifth International Workshop on Rough Sets and Soft Computing (RSSC'97) at the Third Joint Conference on Information Sciences (JCIS'97), Research Triangle Park, NC, March 2–5, 1997, 69–72.


9. Kryszkiewicz, M.: Rough set approach to incomplete information systems. Proceedings of the Second Annual Joint Conference on Information Sciences, Wrightsville Beach, NC, September 28–October 1, 1995, 194–197.
10. Kryszkiewicz, M.: Rules in incomplete information systems. Information Sciences 113 (1999) 271–292.
11. Pawlak, Z.: Rough Sets. International Journal of Computer and Information Sciences 11 (1982) 341–356.
12. Pawlak, Z.: Rough Sets. Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht, Boston, London (1991).
13. Stefanowski, J.: Algorithms of Decision Rule Induction in Data Mining. Poznan University of Technology Press, Poznan, Poland (2001).
14. Stefanowski, J. and Tsoukias, A.: On the extension of rough sets under incomplete information. Proceedings of the 7th International Workshop on New Directions in Rough Sets, Data Mining, and Granular-Soft Computing, RSFDGrC'1999, Ube, Yamaguchi, Japan, November 8–10, 1999, 73–81.
15. Stefanowski, J. and Tsoukias, A.: Incomplete information tables and rough classification. Computational Intelligence 17 (2001) 545–566.
16. Yao, Y.Y.: Two views of the theory of rough sets in finite universes. International J. of Approximate Reasoning 15 (1996) 291–317.
17. Yao, Y.Y.: Relational interpretations of neighborhood operators and rough set approximation operators. Information Sciences 111 (1998) 239–259.
18. Yao, Y.Y.: On generalizing rough set theory. Proc. of the 9th Int. Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing (RSFDGrC'2003), Chongqing, China, October 19–22, 2003, 44–51.

Generalizations of Rough Sets and Rule Extraction

Masahiro Inuiguchi

Division of Mathematical Science for Social Systems, Department of Systems Innovation, Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan
[email protected]
http://www-inulab.sys.es.osaka-u.ac.jp/~inuiguti/

Abstract. In this paper, two kinds of generalizations of rough sets are proposed based on two diﬀerent interpretations of rough sets: one is an interpretation of rough sets as approximation of a set by means of elementary sets and the other is an interpretation of rough sets as classiﬁcation of objects into three diﬀerent classes, i.e., positive objects, negative objects and boundary objects. Under each interpretation, two diﬀerent definitions of rough sets are given depending on the problem setting. The fundamental properties are shown. The relations between generalized rough sets are given. Moreover, rule extraction underlying each rough set is discussed. It is shown that rules are extracted based on modiﬁed decision matrices. A simple example is given to show the diﬀerences in the extracted rules by underlying rough sets.

1 Introduction

Rough sets [7] are useful in applications to data mining, knowledge discovery, decision making, conflict analysis, and so on. Rough set approaches [7] have been developed under equivalence relations. The equivalence relation implies that attributes are all nominal. Because of this weak assumption, results that are unreasonable for human intuition have been exemplified when some attributes are ordinal [3]. To overcome such unreasonableness, the dominance-based rough set approach has been proposed by Greco et al. [3]. On the other hand, the generalization of rough sets is an interesting topic not only from the mathematical point of view but also from the practical point of view. Along this direction, rough sets have been generalized under similarity relations [5, 10], covers [1, 5] and general relations [6, 11–13]. Those results demonstrate a diversity of generalizations. Moreover, recently, the introduction of fuzziness into rough set approaches has attracted researchers in order to obtain more realistic and useful tools (see, for example, [2]). Considering applications of rough sets in the generalized setting, the interpretation of rough sets plays an important role. This is because no mathematical model can be properly applied without its interpretation. In other words, the interpretation should be proper for the aim of the application. The importance of the


interpretation increases as the problem setting becomes more general, such as a fuzzy setting. This is because the diversity of definitions and treatments which are the same in the original setting is increased by the generalization. Two major interpretations have traditionally been given to rough sets. One is an interpretation of rough sets as approximation of a set by means of elementary sets. The other is an interpretation of rough sets as classification of objects into three different classes, i.e., positive objects, negative objects and boundary objects. Those interpretations can be found in the terminologies 'lower approximation' (resp. 'upper approximation') and 'positive region' (resp. 'possible region') in classical rough sets. The lower approximation of a set equals the positive region of the set in the classical rough sets, i.e., rough sets under equivalence relations. However, they can be different in a general setting. For example, Inuiguchi and Tanino [5] showed the difference under a similarity relation. They described the difference under a more generalized setting (see [6]). However, fundamental properties have not been considerably investigated yet. Moreover, from the definitions of rough sets in the previous papers, we may have some other definitions of rough sets under generalized settings. When generalized rough sets are given, we may ask how we can extract decision rules based on them. The type of extracted decision rules would be different depending on the underlying generalized rough set. To this question, Inuiguchi and Tanino [6] demonstrated the difference in rule extraction based on generalized rough sets. In this paper, we discuss the generalized rough sets in the two different interpretations, restricting ourselves to the crisp setting, as extensions of a previous paper [6]. Such investigations are necessary and important also for proper definitions and applications of fuzzy rough sets. We introduce some new definitions of generalized rough sets. The fundamental properties of those generalized rough sets are newly given. The relations between rough sets under the two different interpretations are discussed. In order to see the differences of those generalized rough sets in applications, we discuss rule extraction based on the generalized rough sets. We demonstrate the difference in the types of decision rules depending on the underlying generalized rough sets. Moreover, we show that decision rules with minimal conditions can be extracted by modifying the decision matrix. This paper is organized as follows. The classical rough sets are briefly reviewed in the next section. In Section 3, interpreting rough sets as classification of objects, we define rough sets under general relations. The fundamental properties of the generalized rough sets are investigated. In Section 4, using the interpretation of rough sets as approximation by means of elementary sets, we define rough sets under a family of sets. The fundamental properties of these generalized rough sets are also investigated. Section 5 is devoted to relations between those two kinds of rough sets. In Section 6, we discuss decision rule extraction based on generalized rough sets. Extraction methods using modified decision matrices are proposed. In Section 7, a few numerical examples are given to demonstrate the differences among the extracted decision rules based on different generalized rough sets. Some concluding remarks are given in Section 8.

2 Classical Rough Sets

2.1 Definitions, Interpretations and Fundamental Properties

Let R be an equivalence relation in the finite universe U, i.e., R ⊆ U × U. In rough set literature, R is referred to as an indiscernibility relation and a pair (U, R) is called an approximation space. By the equivalence relation R, U can be partitioned into a collection of equivalence classes or elementary sets, U|R = {E1, E2, . . . , Ep}. Define R(x) = {y ∈ U | (y, x) ∈ R}. Then we have x ∈ Ei if and only if Ei = R(x). Note that U|R = {R(x) | x ∈ U}. Let X be a subset of U. Using R(x), a rough set of X is defined by a pair of the following lower and upper approximations:

R_*(X) = {x ∈ X | R(x) ⊆ X}
       = U − ∪{R(y) | y ∈ U − X}
       = ∪{Ei | Ei ⊆ X, i = 1, 2, . . . , p}
       = ∪{ ∩_{i∈I} (U − Ei) | ∩_{i∈I} (U − Ei) ⊆ X, I ⊆ {1, 2, . . . , p} },          (1)

R^*(X) = ∪{R(x) | x ∈ X}
       = U − {y ∈ U − X | R(y) ⊆ U − X}
       = ∩{ ∪_{i∈I} Ei | ∪_{i∈I} Ei ⊇ X, I ⊆ {1, 2, . . . , p} }
       = ∩{U − Ei | U − Ei ⊇ X}.          (2)
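For a quick check of the equivalence of these expressions, a small Python sketch follows; the universe, the elementary sets and the set X are made up solely for illustration.

U = set(range(1, 9))
classes = [{1}, {2}, {3}, {4, 5}, {6, 8}, {7}]            # the elementary sets U|R
R = {x: next(E for E in classes if x in E) for x in U}     # R(x) = the class containing x
X = {1, 2, 4, 8}

lower = {x for x in X if R[x] <= X}                                        # first expression in (1)
lower_by_classes = set().union(set(), *(E for E in classes if E <= X))     # third expression in (1)
upper = set().union(set(), *(R[x] for x in X))                             # first expression in (2)
upper_by_complement = U - {y for y in U - X if R[y] <= U - X}              # second expression in (2)

print(sorted(lower), lower == lower_by_classes)              # [1, 2] True
print(sorted(upper), upper == upper_by_complement)           # [1, 2, 4, 5, 6, 8] True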

Let us interpret R(x) as the set of objects that we intuitively identify as members of X from the fact x ∈ X. Then, from the first expression of R_*(X) in (1), R_*(X) is interpreted as the set of objects which are consistent with the intuition that R(x) ⊆ X if x ∈ X. Under the same interpretation of R(x), R^*(X) is interpreted as the set of objects which can be intuitively inferred to be members of X, from the first expression of R^*(X) in (2). In other words, R_*(X) and R^*(X) show positive (consistent) and possible members of X. Moreover, R^*(X) − R_*(X) and U − R^*(X) show ambiguous (boundary) and negative members of X. In this way, a rough set classifies the objects of U into three classes, i.e., positive, negative and boundary regions. On the contrary, let us interpret R(x) as the set of objects that we intuitively identify as members of U − X from the fact x ∈ U − X. In the same way as in the previous discussion, ∪{R(y) | y ∈ U − X} and {y ∈ U − X | R(y) ⊆ U − X} show possible and positive members of U − X, respectively. From the second expression of R_*(X) in (1), R_*(X) can be regarded as the set of impossible members of U − X. In other words, R_*(X) shows certain members of X. Similarly, from the second expression of R^*(X) in (2), R^*(X) can be regarded as the set of non-positive members of U − X. Namely, R^*(X) shows conceivable members of X. R^*(X) − R_*(X) and U − R^*(X) show border and inconceivable members of X. In this case, a rough set again classifies the objects of U into three classes, i.e., certain, inconceivable and border regions.


Table 1. Fundamental properties of rough sets

(i) R_*(X) ⊆ X ⊆ R^*(X).
(ii) R_*(∅) = R^*(∅) = ∅, R_*(U) = R^*(U) = U.
(iii) R_*(X ∩ Y) = R_*(X) ∩ R_*(Y), R^*(X ∪ Y) = R^*(X) ∪ R^*(Y).
(iv) X ⊆ Y implies R_*(X) ⊆ R_*(Y), X ⊆ Y implies R^*(X) ⊆ R^*(Y).
(v) R_*(X ∪ Y) ⊇ R_*(X) ∪ R_*(Y), R^*(X ∩ Y) ⊆ R^*(X) ∩ R^*(Y).
(vi) R_*(U − X) = U − R^*(X), R^*(U − X) = U − R_*(X).
(vii) R_*(R_*(X)) = R^*(R_*(X)) = R_*(X), R^*(R^*(X)) = R_*(R^*(X)) = R^*(X).

From the third expression of R_*(X) in (1), R_*(X) is the best approximation of X by means of the union of elementary sets Ei such that Ei ⊆ X. On the other hand, from the third expression of R^*(X) in (2), R^*(X) is the minimal superset of X by means of the union of elementary sets Ei. Finally, from the fourth expression of R_*(X) in (1), R_*(X) is the maximal subset of X by means of the intersection of complements of elementary sets U − Ei. From the fourth expression of R^*(X) in (2), R^*(X) is the best approximation of X by means of the intersection of complements of elementary sets U − Ei such that U − Ei ⊇ X. We have introduced only four kinds of expressions of lower and upper approximations, but there are many other expressions [5, 10–13]. The interpretation of rough sets depends on the expression of the lower and upper approximations. Thus we may obtain more interpretations by adopting the other expressions. However, the interpretations described above seem appropriate for applications of rough sets. Those interpretations can be divided into two categories: interpretation of rough sets as classification of objects and interpretation of rough sets as approximation of a set. The fundamental properties listed in Table 1 are satisfied by the lower and upper approximations of classical rough sets.

3 Classification-Oriented Generalization

3.1 Proposed Definitions

We generalize classical rough sets under interpretation of rough sets as classiﬁcation of objects. As described in the previous section, there are two expressions in this interpretation, i.e., the ﬁrst and second expressions of (1) and (2). First we describe the generalization based on the second expressions of (1) and (2). In this case, we assume that there exists a relation P ⊆ U × U such that P (x) = {y ∈ U | (y, x) ∈ P } means a set of objects we intuitively identify as members of X from the fact x ∈ X. Then if P (x) ⊆ X for an object x ∈ X then there is no objection against x ∈ X. In this case, x ∈ X is consistent with the intuitive knowledge based on the relation P . Such an object x ∈ X can be considered as a positive member of X. Hence the positive region of X can be deﬁned as


P_∗(X) = {x ∈ X | P(x) ⊆ X}.  (3)

On the other hand, by the intuition from the relation P, an object y ∈ P(x) for x ∈ X can be a member of X. Such an object y ∈ U is a possible member of X. Moreover, every object x ∈ X is evidently a possible member of X. Hence the possible region of X can be defined as

P^∗(X) = X ∪ ⋃{P(x) | x ∈ X}.  (4)

Using the positive region P_∗(X) and the possible region P^∗(X), we can define a rough set of X as a pair (P_∗(X), P^∗(X)). We call such rough sets classification-oriented rough sets under a positively extensive relation P of X (CP-rough sets, for short). The relation P depends on the meaning of X whose positive and possible regions we are interested in. Thus, we cannot always define the CP-rough set of U − X by using the same relation P. To define a CP-rough set of U − X, we should introduce another relation Q ⊆ U × U such that Q(x) = {y ∈ U | (y, x) ∈ Q} means a set of objects we intuitively identify as members of U − X from the fact x ∈ U − X. Using Q we have positive and possible regions of U − X given by

Q_∗(U − X) = {x ∈ U − X | Q(x) ⊆ U − X},  (5)
Q^∗(U − X) = (U − X) ∪ ⋃{Q(x) | x ∈ U − X}.  (6)

Using those, we can define certain and conceivable regions of X by

Q̄_∗(X) = U − Q^∗(U − X) = X ∩ (U − ⋃{Q(x) | x ∈ U − X}),  (7)
Q̄^∗(X) = U − Q_∗(U − X) = U − {x ∈ U − X | Q(x) ⊆ U − X}.  (8)

Those definitions correspond to the second expressions of (1) and (2). We can define another rough set of X as a pair (Q̄_∗(X), Q̄^∗(X)) with the certain region Q̄_∗(X) and the conceivable region Q̄^∗(X). We call this type of rough sets classification-oriented rough sets under a negatively extensive relation Q of X (CN-rough sets, for short). Let Q^{-1}(x) = {y ∈ U | (x, y) ∈ Q}. As shown in [10], we have

⋃{Q(x) | x ∈ U − X} = {x ∈ U | Q^{-1}(x) ∩ (U − X) ≠ ∅}.  (9)

Therefore, we have

Q̄_∗(X) = X ∩ (U − {x ∈ U | Q^{-1}(x) ∩ (U − X) ≠ ∅}) = {x ∈ X | Q^{-1}(x) ⊆ X} = Q^T_∗(X),  (10)
Q̄^∗(X) = U − {x ∈ U − X | Q(x) ∩ X = ∅} = X ∪ {x ∈ U | Q(x) ∩ X ≠ ∅} = X ∪ ⋃{Q^{-1}(x) | x ∈ X} = Q^{T∗}(X),  (11)

where Q^T is the converse relation of Q, i.e., Q^T = {(x, y) | (y, x) ∈ Q}. Note that we have Q^T(x) = {y ∈ U | (y, x) ∈ Q^T} = {y ∈ U | (x, y) ∈ Q} = Q^{-1}(x).
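The following is a minimal sketch, not from the paper, of the CP- and CN-regions (3)-(4) and (7)-(8) for relations given as sets of ordered pairs; the universe, concept X and relation P used in the usage line are hypothetical.

```python
# Illustrative sketch (not from the paper): CP- and CN-rough set regions for
# a finite universe U, a concept X and binary relations P, Q given as sets of
# ordered pairs, with P(x) = {y | (y, x) in P} as in the text.

def image(R, x, U):
    """R(x) = {y in U | (y, x) in R}."""
    return {y for y in U if (y, x) in R}

def union_of_images(R, S, U):
    """Union of R(x) over x in S."""
    out = set()
    for x in S:
        out |= image(R, x, U)
    return out

def cp_regions(U, X, P):
    positive = {x for x in X if image(P, x, U) <= X}          # P_*(X), eq. (3)
    possible = X | union_of_images(P, X, U)                   # P^*(X), eq. (4)
    return positive, possible

def cn_regions(U, X, Q):
    comp = U - X
    certain = X & (U - union_of_images(Q, comp, U))                 # Q-bar_*(X), eq. (7)
    conceivable = U - {x for x in comp if image(Q, x, U) <= comp}   # Q-bar^*(X), eq. (8)
    return certain, conceivable

U = {1, 2, 3, 4}
X = {1, 2}
P = {(1, 1), (2, 1), (2, 2), (3, 4)}     # a hypothetical relation
print(cp_regions(U, X, P))               # ({1, 2}, {1, 2})
```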


From (10) and (11), the classification-oriented rough sets under a negatively extensive relation Q can be seen as the classification-oriented rough sets under a positively extensive relation Q^T. By the same discussion, the classification-oriented rough sets under a positively extensive relation P can also be seen as the classification-oriented rough sets under a negatively extensive relation P^T. Moreover, when P = Q^T, the classification-oriented rough sets under a positively extensive relation P coincide with the classification-oriented rough sets under a negatively extensive relation Q.

3.2 Relationships to Previous Definitions

Rough sets were previously defined under a general relation. We discuss the relationships of the proposed generalized rough sets with the previous ones. First of all, let us review the previous generalized rough sets briefly. In analogy to the Kripke model in modal logic, Yao and Lin [13] and Yao [11, 12] proposed a generalized rough set with the following lower and upper approximations:

T_∗(X) = {x ∈ U | T^{-1}(x) ⊆ X},  (12)
T^∗(X) = {x ∈ U | T^{-1}(x) ∩ X ≠ ∅},  (13)

where T is a general binary relation and T^{-1}(x) = {y ∈ U | (x, y) ∈ T}. In Yao [12], T^{-1}(x) is replaced with a neighborhood n(x) of x ∈ U. Słowiński and Vanderpooten [10] proposed rough sets under a similarity relation S. They assume the reflexivity of S ((x, x) ∈ S for each x ∈ U). They classify all objects in U into the following four categories under the intuition that every y ∈ U to which x ∈ U is similar must be in the same set containing x: (i) positive objects, i.e., objects x ∈ U such that x ∈ X and S^{-1}(x) ⊆ X, (ii) ambiguous objects of type 1, i.e., objects x ∈ U such that x ∈ X but S^{-1}(x) ∩ (U − X) ≠ ∅, (iii) ambiguous objects of type 2, i.e., objects x ∈ U such that x ∈ U − X but S^{-1}(x) ∩ X ≠ ∅, and (iv) negative objects, i.e., x ∈ U − X and S^{-1}(x) ⊆ U − X. Based on this classification, lower and upper approximations are defined by (12) and (13) with the substitution of S for T. Namely, the lower approximation is the collection of positive objects and the upper approximation is the collection of positive and ambiguous objects. Note that they expressed the upper approximation as S^∗(X) = ⋃{S(x) | x ∈ X}, which is equivalent to (13) with the substitution of S for T (see Słowiński and Vanderpooten [10]), where S(x) = {y ∈ U | (y, x) ∈ S}. Greco, Matarazzo and Słowiński [3] proposed rough sets under a dominance relation D. They assume the reflexivity of D ((x, x) ∈ D for each x ∈ U). Let X be a set of objects better than x. Under the intuition that every y ∈ U by which x ∈ U is dominated must be better than x, i.e., y must be at least in X, they defined lower and upper approximations as

D_∗(X) = {x ∈ U | D(x) ⊆ X},  (14)
D^∗(X) = ⋃{D(x) | x ∈ X},  (15)

where D(x) = {y ∈ U | (y, x) ∈ D}. It can be shown that D^∗(X) = {x | D^{-1}(x) ∩ X ≠ ∅}, where D^{-1}(x) = {y ∈ U | (x, y) ∈ D}.


Finally, Inuiguchi and Tanino [5] assume that a set X corresponds to an ambiguous concept, so that we may have a set X̲ composed of objects whose membership everyone agrees on and a set X̄ composed of objects whose membership only some people agree on. A given X can be considered a set of objects whose memberships are evaluated by a certain person. Thus, we assume that X̲ ⊆ X ⊆ X̄. Let S be a reflexive similarity relation. Assume that only objects which are similar to a member of X are possible candidates for members of X̄, for any set X. Then we have

X̄ ⊆ ⋃{S(x) | x ∈ X} = {x ∈ U | S^{-1}(x) ∩ X ≠ ∅}.  (16)

From the definitions of X̲ and X̄, the complement U − X̲ is the set X̄ constructed for U − X, and U − X̄ is the set X̲ constructed for U − X. Hence we also have

X̲ ⊇ {x ∈ U | S^{-1}(x) ⊆ X}.  (17)

We do not know X̲ and X̄ but only X. With the substitution of S for T, we obtain the lower approximation of X by (12) and the upper approximation of X by (13). Now, let us discuss relationships between the previous definitions and the proposed definitions. The previous definitions formally agree in the definition of the upper approximation by (13), with the substitution of a certain relation for T. The proposed definition (4) is similar but different since it takes a union with X. By this union, X ⊆ P^∗(X) is guaranteed. In order to have this property of the upper approximation, Słowiński and Vanderpooten [10], Greco, Matarazzo and Słowiński [3] and Inuiguchi and Tanino [5] assumed the reflexivity of the binary relations S and D. The idea of the proposed CP-rough set follows that of rough sets under a dominance relation proposed by Greco, Matarazzo and Słowiński [3]. On the other hand, the idea of the proposed CN-rough set is similar to those of Słowiński and Vanderpooten [10] and Inuiguchi and Tanino [5], since we may regard S as a negatively extensive relation, i.e., S(x) means a set of objects we intuitively identify as members of U − X from the fact x ∈ U − X. However, differences are found in the restrictions, i.e., x ∈ U in (14) versus x ∈ X in (3). In other words, we take an intersection with X, i.e., P_∗(X) = X ∩ {x ∈ U | P(x) ⊆ X} and Q̄_∗(X) = X ∩ {x ∈ U | Q^{-1}(x) ⊆ X}. This intersection guarantees P_∗(X) ⊆ X and Q̄_∗(X) ⊆ X. In order to guarantee those relations, the reflexivity of the relation is assumed in Słowiński and Vanderpooten [10], Greco, Matarazzo and Słowiński [3] and Inuiguchi and Tanino [5]. Finally, we remark that, in the definitions by Słowiński and Vanderpooten [10] and Inuiguchi and Tanino [5], S acts as a positively extensive relation P and a negatively extensive relation Q at the same time.

3.3 Fundamental Properties

The fundamental properties of the CP- and CN-rough sets can be obtained as in Table 2. In property (vii), we assume that P can be regarded as positively


Table 2. Fundamental properties of CP- and CN-rough sets
(i) P_∗(X) ⊆ X ⊆ P^∗(X), Q̄_∗(X) ⊆ X ⊆ Q̄^∗(X).
(ii) P_∗(∅) = P^∗(∅) = ∅, P_∗(U) = P^∗(U) = U, Q̄_∗(∅) = Q̄^∗(∅) = ∅, Q̄_∗(U) = Q̄^∗(U) = U.
(iii) P_∗(X ∩ Y) = P_∗(X) ∩ P_∗(Y), P^∗(X ∪ Y) = P^∗(X) ∪ P^∗(Y), Q̄_∗(X ∩ Y) = Q̄_∗(X) ∩ Q̄_∗(Y), Q̄^∗(X ∪ Y) = Q̄^∗(X) ∪ Q̄^∗(Y).
(iv) X ⊆ Y implies P_∗(X) ⊆ P_∗(Y) and P^∗(X) ⊆ P^∗(Y); X ⊆ Y implies Q̄_∗(X) ⊆ Q̄_∗(Y) and Q̄^∗(X) ⊆ Q̄^∗(Y).
(v) P_∗(X ∪ Y) ⊇ P_∗(X) ∪ P_∗(Y), P^∗(X ∩ Y) ⊆ P^∗(X) ∩ P^∗(Y), Q̄_∗(X ∪ Y) ⊇ Q̄_∗(X) ∪ Q̄_∗(Y), Q̄^∗(X ∩ Y) ⊆ Q̄^∗(X) ∩ Q̄^∗(Y).
(vi) When Q is the converse of P, i.e., (x, y) ∈ P if and only if (y, x) ∈ Q, P_∗(X) = U − Q^∗(U − X) = Q̄_∗(X) and P^∗(X) = U − Q_∗(U − X) = Q̄^∗(X).
(vii) X ⊇ P^∗(P_∗(X)) ⊇ P_∗(X) ⊇ P_∗(P_∗(X)), X ⊆ P_∗(P^∗(X)) ⊆ P^∗(X) ⊆ P^∗(P^∗(X)), X ⊇ Q̄^∗(Q̄_∗(X)) ⊇ Q̄_∗(X) ⊇ Q̄_∗(Q̄_∗(X)), X ⊆ Q̄_∗(Q̄^∗(X)) ⊆ Q̄^∗(X) ⊆ Q̄^∗(Q̄^∗(X)).
When P is transitive, P_∗(P_∗(X)) = P_∗(X) and P^∗(P^∗(X)) = P^∗(X). When Q is transitive, Q̄_∗(Q̄_∗(X)) = Q̄_∗(X) and Q̄^∗(Q̄^∗(X)) = Q̄^∗(X).
When P is reflexive and transitive, P^∗(P_∗(X)) = P_∗(X) = P_∗(P_∗(X)) and P_∗(P^∗(X)) = P^∗(X) = P^∗(P^∗(X)). When Q is reflexive and transitive, Q̄^∗(Q̄_∗(X)) = Q̄_∗(X) = Q̄_∗(Q̄_∗(X)) and Q̄_∗(Q̄^∗(X)) = Q̄^∗(X) = Q̄^∗(Q̄^∗(X)).

extensive relations of P_∗(X), P^∗(X), P_∗(P_∗(X)), P^∗(P_∗(X)), P_∗(P^∗(X)) and P^∗(P^∗(X)). Similarly, we assume that Q can be regarded as a negatively extensive relation of Q̄_∗(X), Q̄^∗(X), Q̄_∗(Q̄_∗(X)), Q̄^∗(Q̄_∗(X)), Q̄_∗(Q̄^∗(X)) and Q̄^∗(Q̄^∗(X)). Properties (i)–(v) are obvious. The proofs of (vi) and (vii) are given in the Appendix. As shown in Table 2, (i)–(v) in Table 1 are preserved by the classification-oriented generalization, while (vi) and (vii) in Table 1 are only conditionally preserved. A part of (vii) in Table 1 is unconditionally preserved; the other part is fully satisfied only when P is reflexive and transitive. When P is transitive, we have P^∗(· · · (P^∗(P_∗(X))) · · ·) = P^∗(P_∗(X)) ⊆ X and P_∗(· · · (P_∗(P^∗(X))) · · ·) = P_∗(P^∗(X)) ⊇ X. Similarly, when Q is transitive, we have Q̄^∗(· · · (Q̄^∗(Q̄_∗(X))) · · ·) = Q̄^∗(Q̄_∗(X)) ⊆ X and Q̄_∗(· · · (Q̄_∗(Q̄^∗(X))) · · ·) = Q̄_∗(Q̄^∗(X)) ⊇ X. Those facts mean that, when the relation is transitive, the innermost (first applied) operation governs the relation with the original set. When relations P and Q represent the similarity between objects, P and Q can be equal to each other. In such a case, the condition for (vi) implies that P, or equivalently Q, is symmetric.

4 Approximation-Oriented Generalization

4.1 Proposed Definitions

In order to generalize classical rough sets under the interpretation of rough sets as approximation of a set by means of elementary sets, we introduce a family with a


finite number of elementary sets on U, F = {F1, F2, . . . , Fp}, as a generalization of a partition U|R = {E1, E2, . . . , Ep}. Each Fi is a group of objects collected according to some specific meaning. There are two ways to define lower and upper approximations of a set X under a family F: one is approximation by means of the union of elementary sets Fi and the other is approximation by means of the intersection of complements of elementary sets U − Fi. Namely, from the third and fourth expressions of the lower and upper approximations in (1) and (2), lower and upper approximations of a set X under F are defined straightforwardly in the following two ways:

F_∗^∪(X) = ⋃{Fi | Fi ⊆ X, i = 0, 1, . . . , p},  (18)
F_∗^∩(X) = ⋃{ ⋂_{i∈I}(U − Fi) | ⋂_{i∈I}(U − Fi) ⊆ X, I ⊆ {1, 2, . . . , p + 1} },  (19)
F_∪^∗(X) = ⋂{ ⋃_{i∈I} Fi | ⋃_{i∈I} Fi ⊇ X, I ⊆ {1, 2, . . . , p + 1} },  (20)
F_∩^∗(X) = ⋂{U − Fi | U − Fi ⊇ X, i = 0, 1, . . . , p},  (21)

where, for convenience, we define F_0 = ∅ and F_{p+1} = U. Because Fi ∩ Fj = ∅ for i ≠ j does not always hold, F_∗^∩(X) (resp. F_∪^∗(X)) is not always a single intersection of complements of elementary sets U − Fi (resp. a single union of elementary sets Fi) but a union of several maximal intersections F_j^∩(X), j = 1, 2, . . . , t1, of complements of elementary sets U − Fi (resp. an intersection of several minimal unions F_j^∪(X), j = 1, 2, . . . , t2, of elementary sets Fi), if ⋂_{i=1,2,...,p}(U − Fi) ⊆ X (resp. ⋃_{i=1,2,...,p} Fi ⊇ X) is satisfied. Namely, we have F_∗^∩(X) = ⋃_{j=1,2,...,t1} F_j^∩(X) and F_∪^∗(X) = ⋂_{j=1,2,...,t2} F_j^∪(X). We call a pair (F_∗^∪(X), F_∪^∗(X)) an approximation-oriented rough set by means of the union of elementary sets Fi under a family F (for short, an AU-rough set) and a pair (F_∗^∩(X), F_∩^∗(X)) an approximation-oriented rough set by means of the intersection of complements of elementary sets U − Fi under a family F (for short, an AI-rough set).
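The following is a minimal sketch, not from the paper, of the four approximations (18)-(21) for a small family F; the power-set enumeration used for (19) and (20) is exponential in the size of F and is intended only for tiny, hypothetical examples.

```python
from itertools import chain, combinations

# Illustrative sketch (not from the paper): the AU/AI approximations
# (18)-(21) for a finite universe U, a family F of elementary sets and X ⊆ U.

def subsets(idx):
    return chain.from_iterable(combinations(idx, r) for r in range(len(idx) + 1))

def au_ai_approximations(U, F, X):
    Fx = list(F) + [set(), set(U)]                    # add F_0 = ∅ and F_{p+1} = U
    lower_union = set().union(*(Fi for Fi in Fx if Fi <= X))            # (18)
    upper_inter = set(U)
    for Fi in Fx:                                                        # (21)
        comp = U - Fi
        if comp >= X:
            upper_inter &= comp
    lower_inter, upper_union = set(), set(U)
    for I in subsets(range(len(Fx))):                                    # (19), (20)
        inter, union = set(U), set()
        for i in I:
            inter &= (U - Fx[i])
            union |= Fx[i]
        if inter <= X:
            lower_inter |= inter
        if union >= X:
            upper_union &= union
    return lower_union, upper_union, lower_inter, upper_inter
```

Adding ∅ and U to the family does not change any of the four results, so the extended index set is used uniformly here for brevity.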

4.2 Relationships to Previous Definitions

So far, rough sets have been generalized also under a finite cover and under neighborhoods. In this subsection, we discuss relationships of AU- and AI-rough sets with those previous rough sets. First, we describe the previous definitions. Bonikowski, Bryniarski and Wybraniec-Skardowska [1] proposed rough sets under a finite cover C = {C1, C2, . . . , Cp} such that ⋃_{i=1,2,...,p} Ci = U. They defined the lower approximation of X ⊆ U by

C_∗(X) = ⋃{Ci ∈ C | Ci ⊆ X}.  (22)

In order to define the upper approximation, we should define the minimal description of an object x ∈ U and the boundary of X. The minimal description of an object x ∈ U is a family defined by


Md(x) = {Ci ∈ C | x ∈ Ci, ∀Cj ∈ C (x ∈ Cj ∧ Cj ⊆ Ci → Ci = Cj)}.  (23)

Then the boundary of X is a family defined by Bn(X) = ⋃{Md(x) | x ∈ X, x ∉ C_∗(X)}. The upper approximation of X is defined by

C^∗(X) = ⋃Bn(X) ∪ C_∗(X).  (24)

Owing to Md(x), we have

C^∗(X) ⊆ C_∗(X) ∪ ⋃{Ci | Ci ∩ (X − C_∗(X)) ≠ ∅} ⊆ ⋃{Ci | Ci ∩ X ≠ ∅}.  (25)

Yao [12] proposed rough sets under neighborhoods {n(x) | x ∈ U}, where n: U → 2^U and n(x) is interpreted as the neighborhood of x ∈ U. Three kinds of rough sets were proposed. One of them has been described in Subsection 3.2. The lower and upper approximations in the second kind of rough sets are

ν_∗(X) = ⋃{n(x) | x ∈ U, n(x) ⊆ X} = {x ∈ U | ∃y (x ∈ n(y) ∧ n(y) ⊆ X)},  (26)
ν^∗(X) = U − ν_∗(U − X) = {x ∈ U | ∀y (x ∈ n(y) → n(y) ∩ X ≠ ∅)}.  (27)

As shown above, those lower and upper approximations are closely related to the interior and closure operations in topology. The upper and lower approximations in the third kind of rough sets, written ν′^∗ and ν′_∗ here to distinguish them from the second kind, are defined as follows:

ν′^∗(X) = ⋃{n(x) | x ∈ U, n(x) ∩ X ≠ ∅} = {x ∈ U | ∃y (x ∈ n(y) ∧ n(y) ∩ X ≠ ∅)},  (28)
ν′_∗(X) = U − ν′^∗(U − X) = {x ∈ U | ∀y (x ∈ n(y) → n(y) ⊆ X)}.  (29)

Inuiguchi and Tanino [5] also proposed rough sets under a cover C. They defined lower and upper approximations as

C_∗(X) = ⋃{Ci | Ci ⊆ X, i = 1, 2, . . . , p},  (30)
C^∗(X) = U − C_∗(U − X) = ⋂{U − Ci | U − Ci ⊇ X, i = 1, 2, . . . , p}.  (31)

Now let us discuss the relationships with AU- and AI-rough sets. When F = C, we have F_∗^∪(X) = C_∗(X) (both (22) and (30)), F_∪^∗(X) ⊆ C^∗(X) of (24) and F_∩^∗(X) = C^∗(X) of (31). We also have C^∗(X) ⊇ ⋂_{j=1,2,...,t2} F_j^∪(X) = F_∪^∗(X). The equality does not always hold because we may have F_j^∪(X) ⊂ C_∗(X) ∪ ⋃_{x∈X−C_∗(X)} C(x), where C(x) is an arbitrary Ci ∈ Md(x). When F = {n(x) | x ∈ U}, we have F_∗^∪(X) = ν_∗(X) ⊇ ν′_∗(X) and F_∩^∗(X) = ν^∗(X) ⊆ ν′^∗(X). Generally, F_∗^∩(X) (resp. F_∪^∗(X)) has no relation with ν_∗(X) and ν′_∗(X) (resp. ν^∗(X) and ν′^∗(X)). From those relations, we know that F_∗^∪(X) and F_∗^∩(X) are maximal approximations among the six lower approximations, while F_∪^∗(X) and F_∩^∗(X) are minimal approximations among the six upper approximations. This implies that the proposed lower and upper

Table 3. Fundamental properties of AU- and AI-rough sets
(i) F_∗^∪(X) ⊆ X ⊆ F_∪^∗(X), F_∗^∩(X) ⊆ X ⊆ F_∩^∗(X).
(ii) F_∗^∪(∅) = F_∗^∩(∅) = ∅, F_∪^∗(U) = F_∩^∗(U) = U. When ⋃_{i=1,...,p} Fi = U, F_∩^∗(∅) = ∅ and F_∗^∪(U) = U. When ⋂_{i=1,...,p} Fi = ∅, F_∪^∗(∅) = ∅ and F_∗^∩(U) = U.
(iii) F_∗^∪(X ∩ Y) ⊆ F_∗^∪(X) ∩ F_∗^∪(Y), F_∗^∩(X ∩ Y) = F_∗^∩(X) ∩ F_∗^∩(Y), F_∪^∗(X ∪ Y) = F_∪^∗(X) ∪ F_∪^∗(Y), F_∩^∗(X ∪ Y) ⊇ F_∩^∗(X) ∪ F_∩^∗(Y). When Fi ∩ Fj = ∅ for any i ≠ j, F_∗^∪(X ∩ Y) = F_∗^∪(X) ∩ F_∗^∪(Y) and F_∩^∗(X ∪ Y) = F_∩^∗(X) ∪ F_∩^∗(Y).
(iv) X ⊆ Y implies F_∗^∪(X) ⊆ F_∗^∪(Y) and F_∗^∩(X) ⊆ F_∗^∩(Y); X ⊆ Y implies F_∪^∗(X) ⊆ F_∪^∗(Y) and F_∩^∗(X) ⊆ F_∩^∗(Y).
(v) F_∗^∪(X ∪ Y) ⊇ F_∗^∪(X) ∪ F_∗^∪(Y), F_∗^∩(X ∪ Y) ⊇ F_∗^∩(X) ∪ F_∗^∩(Y), F_∪^∗(X ∩ Y) ⊆ F_∪^∗(X) ∩ F_∪^∗(Y), F_∩^∗(X ∩ Y) ⊆ F_∩^∗(X) ∩ F_∩^∗(Y).
(vi) F_∗^∪(U − X) = U − F_∩^∗(X), F_∗^∩(U − X) = U − F_∪^∗(X), F_∪^∗(U − X) = U − F_∗^∩(X), F_∩^∗(U − X) = U − F_∗^∪(X).
(vii) F_∗^∪(F_∗^∪(X)) = F_∗^∪(X), F_∗^∩(F_∗^∩(X)) = F_∗^∩(X), F_∪^∗(F_∪^∗(X)) = F_∪^∗(X), F_∩^∗(F_∩^∗(X)) = F_∩^∗(X), F_∪^∗(F_∗^∪(X)) = F_∗^∪(X), F_∗^∩(F_∩^∗(X)) = F_∩^∗(X), F_∩^∗(F_∗^∩(X)) ⊇ F_∗^∩(X) with F_∩^∗(F_j^∩(X)) = F_j^∩(X), j = 1, 2, . . . , t1, F_∗^∪(F_∪^∗(X)) ⊆ F_∪^∗(X) with F_∗^∪(F_j^∪(X)) = F_j^∪(X), j = 1, 2, . . . , t2. When Fi ∩ Fj = ∅ for any i ≠ j, F_∩^∗(F_∗^∩(X)) = F_∗^∩(X) and F_∗^∪(F_∪^∗(X)) = F_∪^∗(X).

approximations are better approximations of X, so that they are suitable for our interpretation of rough sets. Moreover, the proposed definitions are applicable in a more general setting since we assume neither that F is a cover nor that p = Card(F) ≤ n = Card(U), i.e., the number of elementary sets Fi may even exceed the number of objects.

4.3 Fundamental Properties

The fundamental properties of AU- and AI-rough sets are shown in Table 3. Properties (i), (iv) and (v) in Table 1 are preserved for both AU- and AI-rough sets. Parts of (ii) and (iii) in Table 1 are preserved; however, some conditions are necessary for full preservation. The duality, i.e., property (vi) in Table 1, is preserved between upper (resp. lower) approximations of AU-rough sets (resp. AI-rough sets) and lower (resp. upper) approximations of AI-rough sets (resp. AU-rough sets). Property (vii) in Table 1 is almost preserved. F_∩^∗(F_∗^∩(X)) = F_∗^∩(X) (resp. F_∗^∪(F_∪^∗(X)) = F_∪^∗(X)) is not always preserved, because F_∗^∩(X) (resp. F_∪^∗(X)) is not always a single intersection of complements of elementary sets U − Fi (resp. a single union of elementary sets Fi). However, for the minimal unions F_j^∪ (resp. the maximal intersections F_j^∩), the property corresponding to (vii) holds. The proof of property (iii) is given in the Appendix. The other properties can be proved easily.


Table 4. Relationships between two kinds of rough sets
(a) When P is reflexive, P_∗(X) ⊆ P_∗^∪(X) = P^∗(P_∗(X)) ⊆ X ⊆ P_∪^∗(X) ⊆ P^∗(X). When Q is reflexive, Q̄^∗(X) ⊇ Q_∩^∗(X) = Q̄_∗(Q̄^∗(X)) ⊇ X ⊇ Q_∗^∩(X) ⊇ Q̄_∗(X).
(b) When P is transitive, P_∪^∗(X) ⊇ P^∗(X) ⊇ X ⊇ P_∗(X) ⊇ P_∗^∪(X). When Q is transitive, Q_∗^∩(X) ⊆ Q̄_∗(X) ⊆ X ⊆ Q̄^∗(X) ⊆ Q_∩^∗(X).
(c) When P is reflexive and transitive, P_∗(X) = P_∗^∪(X) = P^∗(P_∗(X)) ⊆ X ⊆ P_∪^∗(X) = P^∗(X). When Q is reflexive and transitive, Q̄^∗(X) = Q_∩^∗(X) = Q̄_∗(Q̄^∗(X)) ⊇ X ⊇ Q_∗^∩(X) = Q̄_∗(X).

5 Relationships between Two Kinds of Rough Sets

Given a relation P, we may define a family by

P = {P(x) | x ∈ U}.  (32)

Therefore, when a positively extensive relation P is given, we obtain not only CP-rough sets but also AU- and AI-rough sets. The same holds for a negatively extensive relation Q. Namely, by a family Q = {Q(x) | x ∈ U}, we obtain AU- and AI-rough sets. The relationships between CP-/CN-rough sets and AU-/AI-rough sets are listed in Table 4. In Table 4 we recognize a strong relation between CP- and AU-rough sets, as well as a strong relation between CN- and AI-rough sets. The proofs of (a) and (b) in Table 4 are given in the Appendix.

6 Rule Extraction

6.1 Decision Table and Problem Setting

In this section, we discuss rule extraction from decision tables based on the generalized rough sets. Consider a decision table I = ⟨U, C ∪ {d}, V, ρ⟩, where U = {x1, x2, . . . , xn} is a universe of objects, C is the set of all condition attributes, d is a unique decision attribute, V = ⋃_{a∈C∪{d}} Va, Va is a finite set of attribute values of attribute a, and ρ: U × (C ∪ {d}) → V is the information function such that ρ(x, a) ∈ Va for all a ∈ C ∪ {d}. By the decision attribute value ρ(x, d), we assume that we can group objects into several classes Dk, k = 1, 2, . . . , m. The classes Dk, k = 1, 2, . . . , m, do not necessarily form a partition but a cover. Namely, Dk ∩ Dj = ∅ does not always hold, but ⋃_{k=1,2,...,m} Dk = U. Corresponding to Dk, k = 1, 2, . . . , m, we assume that a relation Pa ⊆ Va × Va is given for each condition attribute a ∈ C so that if x ∈ Dk and (y, x) ∈ Pa then we intuitively conclude y ∈ Dk from the viewpoint of attribute a. For each A ⊆ C, we define a positively extensive relation by

PA = {(x, y) | (ρ(x, a), ρ(y, a)) ∈ Pa, ∀a ∈ A}.  (33)

Moreover, we also assume that a relation Qa ⊆ Va × Va is given for each condition attribute a ∈ C so that if x ∈ U − Dk and (y, x) ∈ Qa then


we intuitively conclude y ∈ U − Dk from the viewpoint of attribute a. For each A ⊆ C, we define a negatively extensive relation by

QA = {(x, y) | (ρ(x, a), ρ(y, a)) ∈ Qa, ∀a ∈ A}.  (34)

For the purpose of comparison, we may build finite families based on the relations Pa and Qa as described below. We can build families using PA and QA as

P = {PA(x) | x ∈ U, A ⊆ C},  (35)
Q = {QA(x) | x ∈ U, A ⊆ C},  (36)

where PA(x) = {y ∈ U | (y, x) ∈ PA} and QA(x) = {y ∈ U | (y, x) ∈ QA}. For A = {a1, a2, . . . , as} and v = (v1, v2, . . . , vs) ∈ Va1 × Va2 × · · · × Vas, let us define

ZA(v) = {x ∈ U | (ρ(x, ai), vi) ∈ Pai, i = 1, 2, . . . , s},  (37)
WA(v) = {x ∈ U | (ρ(x, ai), vi) ∈ Qai, i = 1, 2, . . . , s}.  (38)

Using those sets, we may build the following families:

Z = {ZA(v) | v ∈ Va1 × Va2 × · · · × Vas, A = {a1, a2, . . . , as} ⊆ C},  (39)
W = {WA(v) | v ∈ Va1 × Va2 × · · · × Vas, A = {a1, a2, . . . , as} ⊆ C}.  (40)
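The following is a minimal sketch, not from the paper, of how the relation PA of (33) and the elementary sets PA(x) of (35) and ZA(v) of (37) can be built from a decision table; the representation of ρ as a nested dictionary and of each Pa as a Python predicate is an assumption of this sketch, and the families for QA and WA are analogous.

```python
from itertools import product

# Illustrative sketch (not from the paper).
# rho: dict {object: {attribute: value}};  P_a[a](u, w) decides (u, w) ∈ P_a.

def P_A(rho, P_a, A):
    """P_A = {(x, y) | (rho(x, a), rho(y, a)) ∈ P_a for all a ∈ A}, eq. (33)."""
    U = list(rho)
    return {(x, y) for x in U for y in U
            if all(P_a[a](rho[x][a], rho[y][a]) for a in A)}

def P_A_of(rho, P_a, A, x):
    """P_A(x) = {y | (y, x) ∈ P_A}, the elementary sets of (35)."""
    return {y for y in rho if all(P_a[a](rho[y][a], rho[x][a]) for a in A)}

def Z_A(rho, P_a, A, v):
    """Z_A(v) = {x | (rho(x, a_i), v_i) ∈ P_{a_i} for all i}, eq. (37)."""
    return {x for x in rho if all(P_a[a](rho[x][a], vi) for a, vi in zip(A, v))}

def family_Z(rho, P_a, A, domains):
    """A slice of the family Z of (39), restricted to one attribute set A."""
    return [Z_A(rho, P_a, A, v) for v in product(*(domains[a] for a in A))]
```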

6.2 Rule Extraction Based on Positive and Certain Regions

As shown in (10), a positive region P_∗(X) and a certain region Q̄_∗(X) have the same representation; the difference is the adopted relation, i.e., P versus Q^T. Therefore the rule extraction method is the same. In this subsection, we describe the rule extraction method based on a positive region P_∗(X). The rule extraction method based on a certain region Q̄_∗(X) is obtained by replacing the relation P with the relation Q^T. We discuss the extraction of decision rules from the decision table I = ⟨U, C ∪ {d}, V, ρ⟩ described in the previous subsection. First, let us discuss the type of decision rule corresponding to the positive region (3). Any object y ∈ U satisfying the condition of such a decision rule should satisfy y ∈ Dk and PC(y) ⊆ Dk. Dk should not appear in the condition part since we would like to infer the members of Dk. Considering those requirements, we should explore suitable conditions of the decision rules. When we confirm y = x for an object x ∈ PC∗(Dk), we may obviously conclude y ∈ PC∗(Dk). Since each object is characterized by condition attributes a ∈ C, y = x can be conjectured from ρ(y, a) = ρ(x, a), ∀a ∈ C. However, it is possible that there exists z ∈ U such that ρ(z, a) = ρ(x, a), ∀a ∈ C, but z ∉ Dk. When PC is reflexive, we always have x ∉ PC∗(Dk) if such an object z ∈ U exists. Since we do not assume the reflexivity, x ∈ PC∗(Dk) is possible even in the case such an object z ∈ U exists. From these observations, we obtain the following type of decision rule based on x ∈ PC∗(Dk), only when there is no object z ∈ U such that ρ(z, a) = ρ(x, a), ∀a ∈ C, but z ∉ Dk:

if ρ(y, a1) = v1 and · · · and ρ(y, al) = vl then y ∈ Dk,


where vi = ρ(x, ai), i = 1, 2, . . . , l, and we assume C = {a1, a2, . . . , al}. Let us call this type of decision rule an identity if-then rule (for short, id-rule). When PC is transitive, we may conclude y ∈ PC∗(Dk) from the facts that (y, x) ∈ PC and x ∈ PC∗(Dk). This is because we have PC(y) ⊆ PC(x) ⊆ Dk and y ∈ PC(x) ⊆ Dk from the transitivity and the fact x ∈ PC∗(Dk). In this case, we may have the following type of decision rule:

if (ρ(y, a1), v1) ∈ Pa1 and · · · and (ρ(y, al), vl) ∈ Pal then y ∈ Dk.

This type of if-then rule is called a relational if-then rule (for short, R-rule). When the relation PC is reflexive and transitive, an R-rule includes the corresponding id-rule. As discussed above, based on an object x ∈ PC∗(Dk), we can extract id-rules, and R-rules if PC is transitive. We prefer to obtain decision rules with conditions of minimal length. To this end, we should calculate the minimal condition attribute sets A ⊆ C such that x ∈ PA∗(Dk). Let A = {a1, a2, . . . , aq} be such a minimal condition attribute set. Then we obtain the following id-rule when there is no object z ∈ U − Dk such that ρ(z, ai) = vi, i = 1, 2, . . . , q:

if ρ(y, a1) = v1 and · · · and ρ(y, aq) = vq then y ∈ Dk,

where vi = ρ(x, ai), i = 1, 2, . . . , q. When PC is transitive, we obtain an R-rule:

if (ρ(y, a1), v1) ∈ Pa1 and · · · and (ρ(y, aq), vq) ∈ Paq then y ∈ Dk.

Note that PC is transitive if and only if PA is transitive for each A ⊆ C. Moreover, the minimal condition attribute set is not always unique, and for each minimal condition attribute set we obtain id- and R-rules. Through the above procedure, we will obtain many decision rules. The decision rules are not always independent. Namely, we may have two decision rules 'if Cond1 then Dec' and 'if Cond2 then Dec' such that Cond1 implies Cond2. In that case, the decision rule 'if Cond1 then Dec' is superfluous and is omitted. This differs from rule extraction based on the classical rough set. For extracting all decision rules with minimal-length conditions, we can utilize a decision matrix [8] with modifications. Consider the extraction of decision rules concluding y ∈ Dk. We begin with the calculation of PC∗(Dk). Based on the obtained PC∗(Dk), we define two disjoint index sets K+ = {i | xi ∈ PC∗(Dk)} and K− = {i | xi ∉ Dk}. The decision matrix M^id(I) = (M^id_ij) is defined by

M^id_ij = {(a, ṽi) | ṽi = ρ(xi, a), (ρ(xj, a), ρ(xi, a)) ∉ Pa, ρ(xj, a) ≠ ρ(xi, a), a ∈ C}, i ∈ K+, j ∈ K−.  (41)

Note that the size of the decision matrix M^id(I) is Card(K+) × Card(K−). An element (a, ṽi) of M^id_ij corresponds to a condition 'ρ(y, a) = ṽi' which is not satisfied with y = xj but is satisfied with y = xi. Moreover, M^id_ij can be empty, and in this case we cannot obtain any id-rule from xi ∈ PC∗(Dk). When PC is transitive, we should consider another decision matrix for R-rules. The decision matrix M^R(I) = (M^R_ij) is defined by

M^R_ij = {(a, ṽi) | ṽi = ρ(xi, a), (ρ(xj, a), ρ(xi, a)) ∉ Pa, a ∈ C}, i ∈ K+, j ∈ K−.  (42)


Note that the size of the decision matrix M^R(I) is Card(K+) × Card(K−). An element (a, ṽi) of M^R_ij shows a condition '(ρ(y, a), ṽi) ∈ Pa' which is not satisfied with y = xj but is satisfied with y = xi. Let Id((a, v)) be a statement 'ρ(x, a) = v' and P̃((a, v)) a statement '(ρ(x, a), v) ∈ Pa'. Then all minimal conditions in all possible decision rules with respect to Dk are obtained as conjunctive terms in the disjunctive normal form of the following logical function:

Bk = ⋁_{i∈K+} ⋀_{j∈K−} Id(M^id_ij), if PC is not transitive,
Bk = ⋁_{i∈K+} ⋀_{j∈K−} Id(M^id_ij) ∨ ⋁_{i∈K+} ⋀_{j∈K−} P̃(M^R_ij), if PC is transitive.  (43)

By the construction, it is obvious that z ∉ Dk does not satisfy the conditions of the decision rules and that x ∈ PC∗(Dk) satisfies them. Moreover, we can prove that z ∈ Dk − PC∗(Dk) does not satisfy the conditions. The proof is as follows. Let z ∈ Dk − PC∗(Dk) and let y ∉ Dk be such that (ρ(y, a), ρ(z, a)) ∈ Pa for all a ∈ C; the existence of y is guaranteed by the definition of z. First consider the condition of an arbitrary id-rule, 'ρ(w, a1) = v1, ρ(w, a2) = v2 and · · · and ρ(w, aq) = vq', where A = {a1, a2, . . . , aq} ⊆ C. Suppose z satisfies this condition, i.e., 'ρ(z, a1) = v1, ρ(z, a2) = v2 and · · · and ρ(z, aq) = vq'. Since vi = ρ(x, ai), i = 1, 2, . . . , q, this implies that (ρ(y, a), ρ(x, a)) ∈ Pa for all a ∈ A, i.e., (y, x) ∈ PA (y ∈ PA(x)). From y ∉ Dk, we have PA(x) ⊄ Dk. On the other hand, by the construction of M^id_ij, for each y ∉ Dk there exists a ∈ A such that (ρ(y, a), ρ(x, a)) ∉ Pa. This implies PA(x) ⊆ Dk. A contradiction. Thus, for each id-rule, there is no z ∈ Dk − PC∗(Dk) satisfying the condition. Next, assuming that PC is transitive, we consider the condition of an arbitrary R-rule, '(ρ(w, a1), v1) ∈ Pa1 and · · · and (ρ(w, aq), vq) ∈ Paq', where {a1, a2, . . . , aq} ⊆ C. Suppose z satisfies this condition, i.e., '(ρ(z, a1), v1) ∈ Pa1 and · · · and (ρ(z, aq), vq) ∈ Paq'. From the transitivity and the fact that (ρ(y, a), ρ(z, a)) ∈ Pa for all a ∈ C, we have '(ρ(y, a1), v1) ∈ Pa1 and · · · and (ρ(y, aq), vq) ∈ Paq'. This contradicts the construction of the condition of the R-rule. Therefore, for each R-rule, there is no z ∈ Dk − PC∗(Dk) satisfying the condition. The rule extraction method based on the certain region is obtained by replacing PC, PA and Pa in the above discussion with Q^T_C, Q^T_A and Q^T_a, respectively.
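The following is a minimal sketch, not from the paper, of how the id-rule cells of (41) and the minimal conditions of (43) can be computed; the representation of the table as a dictionary and of each Pa as a predicate is an assumption of this sketch, and the conversion of the conjunction of per-column disjunctions into its minimal terms is done by incremental expansion with absorption.

```python
# Illustrative sketch (not from the paper): decision-matrix-based extraction
# of minimal id-rule conditions for one decision class.
# rho: dict {object: {attribute: value}};  in_Pa[a](u, w) decides (u, w) ∈ P_a.

def id_cell(rho, in_Pa, attrs, xi, xj):
    """M^id_ij: descriptors (a, rho(xi, a)) rejecting xj, in the spirit of (41)."""
    return {(a, rho[xi][a]) for a in attrs
            if not in_Pa[a](rho[xj][a], rho[xi][a]) and rho[xj][a] != rho[xi][a]}

def minimal_conditions(cells):
    """Minimal terms of AND-over-cells of (OR over each cell's descriptors)."""
    dnf = [frozenset()]                       # start with the empty conjunction
    for cell in cells:
        if not cell:                          # an empty cell: no id-rule exists
            return []
        expanded = {term | {lit} for term in dnf for lit in cell}
        # absorption: drop any term that strictly contains another term
        dnf = [t for t in expanded if not any(o < t for o in expanded)]
    return dnf

def id_rules_for(rho, in_Pa, attrs, K_plus, K_minus):
    """Minimal id-rule conditions obtained from each object of K_plus."""
    rules = {}
    for xi in K_plus:
        cells = [id_cell(rho, in_Pa, attrs, xi, xj) for xj in K_minus]
        rules[xi] = minimal_conditions(cells)
    return rules
```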

6.3 Rule Extraction Based on Lower Approximations of AU-Rough Sets

As in the previous subsection, we discuss the extraction of decision rules from the decision table I = ⟨U, C ∪ {d}, V, ρ⟩. First, let us discuss the type of decision rule corresponding to the lower approximation of the AU-rough set (18). Any object y ∈ U satisfying the condition of such a decision rule should satisfy y ∈ Fi and


Fi ⊆ Dk. When we confirm y ∈ Fi for an elementary set Fi ∈ F such that Fi ⊆ Dk, we may obviously conclude y ∈ Dk. From this fact, when Fi ⊆ Dk we have the following type of decision rule: if y ∈ Fi then y ∈ Dk. For the decision table I = ⟨U, C ∪ {d}, V, ρ⟩, we consider two cases: (a) the case F = P and (b) the case F = Z. In those cases, the corresponding decision rules obtained from the facts PA(x) ⊆ X and ZA(v) ⊆ X become

Case (a): if (ρ(y, a1), v̄1) ∈ Pa1 and · · · and (ρ(y, as), v̄s) ∈ Pas then y ∈ Dk,
Case (b): if (ρ(y, a1), v1) ∈ Pa1 and · · · and (ρ(y, as), vs) ∈ Pas then y ∈ Dk,

where A = {a1, a2, . . . , as}, v̄i = ρ(x, ai), i = 1, 2, . . . , s, and v = (v1, v2, . . . , vs). By the construction of P and Z, we have PA′(x) ⊇ PA(x) and ZA′(v′) ⊇ ZA(v) for A′ ⊆ A, where A′ = {ak1, ak2, . . . , akt} ⊆ A and v′ = (vk1, vk2, . . . , vkt) is a sub-vector of v. Therefore, the decision rules with respect to minimal attribute sets A′ are sufficient, since they cover all decision rules with larger attribute sets A ⊇ A′. By this observation, we enumerate all decision rules with respect to minimal attribute sets. The enumeration can be done by a modification of the decision matrix method [8]. We describe the method in Case (a). Consider the enumeration of all decision rules with respect to a decision class Dk. To apply the decision matrix method, we first obtain a CP-rough set P_∗(Dk) = {x ∈ U | PC(x) ⊆ Dk}. Using K+ = {i | xi ∈ P_∗(Dk)} and K− = {i | xi ∉ Dk}, we define the decision matrix M̃(I) = (M̃_ij) by

M̃_ij = {(a, v) | v = ρ(xi, a), (ρ(xj, a), ρ(xi, a)) ∉ Pa, a ∈ C}, i ∈ K+, j ∈ K−.  (44)

Then all minimal conditions in all possible decision rules with respect to Dk are obtained as conjunctive terms in the disjunctive normal form of the following logical function:

B̃k = ⋁_{i∈K+} ⋀_{j∈K−} P̃(M̃_ij).  (45)

In Case (b), we calculate K(Dk) = {v ∈ V1 × V2 × · · · × Vl | Z(v) ⊆ Dk} instead of P_∗(Dk). Number the elements of K(Dk) such that K(Dk) = {v^1, v^2, . . . , v^r}, where r = Card(K(Dk)). Then we define the decision matrix M̃′(I) = (M̃′_ij) by

M̃′_ij = {(a_d, v^i_d) | (ρ(xj, a_d), v^i_d) ∉ P_{a_d}, a_d ∈ C}, i ∈ {1, . . . , r}, j ∈ K−,  (46)

where v^i_d is the d-th component of v^i, i.e., v^i = (v^i_1, v^i_2, . . . , v^i_l). All minimal conditions in all possible decision rules with respect to Dk are obtained as conjunctive terms in the disjunctive normal form of the following logical function:

B̃′k = ⋁_{i∈{1,2,...,r}} ⋀_{j∈K−} P̃(M̃′_ij).  (47)


6.4 Rule Extraction Based on Lower Approximations of AI-Rough Sets

Let us discuss the type of decision rule corresponding to the lower approximation of the AI-rough set (19). Any object y ∈ U satisfying the condition of such a decision rule belongs to ⋃_{j=1,2,...,t1} F_j^∩. Therefore, for each F_j^∩, we have the following type of decision rule: if y ∈ F_j^∩ then y ∈ Dk. By definition, F_j^∩ is represented by an intersection of a number of complements of elementary sets, i.e., ⋂_{i∈I}(U − Fi) for a certain I ⊆ {1, 2, . . . , p}. Therefore the condition part of the decision rule can be represented by 'y ∉ F_{i1}, y ∉ F_{i2}, ..., and y ∉ F_{i_Card(I)}', where I = {i1, i2, . . . , i_Card(I)}. Since each F_{iz} is defined by a conjunction of sentences (ρ(y, ac), vc) ∈ Q_{ac}, ac ∈ C, in our problem setting, at first glance the condition of the above decision rule seems to be very long. Note that we should use the relations Qa, a ∈ C; otherwise, it is not suitable for the meaning of the relation Pa, because we approximate Dk by monotone set operations of U − Pa(va), a ∈ C, va ∈ Va. Accordingly, we consider two cases: (c) F = Q and (d) F = W. By the construction of Q and W, the condition part of the decision rule becomes simpler. This relies on the following fact. Suppose F_i^∩ = (U − (Q_{a1}(x) ∩ Q_{a2}(x))) ∩ (U − (Q_{a3}(y) ∩ Q_{a4}(y))), where x, y ∈ U, {a1, a2}, {a3, a4} ⊆ C, and it is possible that x = y and {a1, a2} ∩ {a3, a4} ≠ ∅. Then we have F_i^∩ = ((U − Q_{a1}(x)) ∩ (U − Q_{a3}(y))) ∪ ((U − Q_{a1}(x)) ∩ (U − Q_{a4}(y))) ∪ ((U − Q_{a2}(x)) ∩ (U − Q_{a3}(y))) ∪ ((U − Q_{a2}(x)) ∩ (U − Q_{a4}(y))). Let F_{i1}^sub = (U − Q_{a1}(x)) ∩ (U − Q_{a3}(y)), F_{i2}^sub = (U − Q_{a1}(x)) ∩ (U − Q_{a4}(y)), F_{i3}^sub = (U − Q_{a2}(x)) ∩ (U − Q_{a3}(y)) and F_{i4}^sub = (U − Q_{a2}(x)) ∩ (U − Q_{a4}(y)). We have F_i^∩ = F_{i1}^sub ∪ F_{i2}^sub ∪ F_{i3}^sub ∪ F_{i4}^sub. This implies that the decision rule 'if y ∈ F_i^∩ then y ∈ Dk' can be decomposed into 'if y ∈ F_{ij}^sub then y ∈ Dk', j = 1, 2, 3, 4. From this observation, for F_j^∩, j = 1, 2, . . . , t1, we have the following body of if-then rules:

if y ∈ F_{ji}^sub then y ∈ Dk, i = 1, 2, . . . , i(j), j = 1, 2, . . . , t1,

where F_j^∩ = ⋃_{i=1,2,...,i(j)} F_{ji}^sub. It can be seen that the sets F_{ji}^sub, i = 1, 2, . . . , i(j), j = 1, 2, . . . , t1, include all maximal sets of the form ⋂_{a∈A⊆C, x∈I⊆U}(U − Qa(x)) such that ⋂_{a∈A⊆C, x∈I⊆U}(U − Qa(x)) ⊆ Dk. This can be proved as follows. Suppose that G^sub is one of those maximal sets and that it is not included in any F_{ji}^sub, i = 1, 2, . . . , i(j), j = 1, 2, . . . , t1. By the construction of Q, each Qa(x) is a member of Q, so there is a set ⋂_{a∈A⊆C, x∈I⊆U}(U − Qa(x)) such that G^sub ⊆ ⋂_{a∈A⊆C, x∈I⊆U}(U − Qa(x)) ⊆ Dk. This contradicts the fact that F_j^∩, j = 1, 2, . . . , t1, are maximal. Hence, F_{ji}^sub, i = 1, 2, . . . , i(j), j = 1, 2, . . . , t1, include all maximal sets of the form ⋂_{a∈A⊆C, x∈I⊆U}(U − Qa(x)) contained in Dk. The same discussion is valid in Case (d), i.e., F = W. Therefore we consider the following type of decision rule:

Case (c): if (ρ(y, a1), v̄1) ∉ Qa1 and · · · and (ρ(y, as), v̄s) ∉ Qas then y ∈ Dk,
Case (d): if (ρ(y, a1), v1) ∉ Qa1 and · · · and (ρ(y, as), vs) ∉ Qas then y ∈ Dk,


where v̄i = ρ(xi, ai), xi ∈ U, ai ∈ C, i = 1, 2, . . . , s, and vi ∈ V_{ai}, i = 1, 2, . . . , s. We should enumerate all minimal conditions of the decision rules above. This can also be done by a decision matrix method with the modifications described below. In Case (c), let K+ = {i | xi ∈ Q_∗^∩(Dk)} and K− = {i | xi ∉ Dk}. We define a decision matrix M^Q = (M^Q_ij) by

M^Q_ij = {(a, v) | (ρ(xj, a), v) ∈ Qa, (ρ(xi, a), v) ∉ Qa, v = ρ(x, a), x ∈ U, a ∈ C}, i ∈ K+, j ∈ K−.  (48)

Let ¬Q̃((a, v)) be a statement '(ρ(y, a), v) ∉ Qa'. Then all minimal conditions are obtained as conjunctive terms in the disjunctive normal form of the following logical function:

B^Q_k = ⋁_{i∈K+} ⋀_{j∈K−} ¬Q̃(M^Q_ij).

In Case (d), let K+ = {i | xi ∈ W_∗^∩(Dk)} and K− = {i | xi ∉ Dk}. We define a decision matrix M^W = (M^W_ij) by

M^W_ij = {(a, v) | (ρ(xj, a), v) ∈ Qa, (ρ(xi, a), v) ∉ Qa, v ∈ Va, a ∈ C}, i ∈ K+, j ∈ K−.  (49)

All minimal conditions in all possible decision rules with respect to Dk are obtained as conjunctive terms in the disjunctive normal form of the following logical function:

B^W_k = ⋁_{i∈K+} ⋀_{j∈K−} ¬Q̃(M^W_ij).  (50)

6.5 Comparison and Correspondence between Definitions and Rules

As shown in the previous sections, the extracted decision rules differ according to the underlying generalized rough sets. The correspondences between the underlying generalized rough sets and the types of decision rules are arranged in Table 5.

Table 5. Correspondence between generalized rough sets and types of decision rules

CP-rough set P_∗(X) = {x ∈ X | P(x) ⊆ X}:
  if ρ(x, a1) = v1 and · · · and ρ(x, ap) = vp then x ∈ Dk;
  if (ρ(x, a1), v1) ∈ Pa1 and · · · and (ρ(x, ap), vp) ∈ Pap then x ∈ Dk (when Pa is transitive).

CN-rough set Q̄_∗(X) = X ∩ (U − ⋃{Q(x) | x ∈ U − X}):
  if ρ(x, a1) = v1 and · · · and ρ(x, ap) = vp then x ∈ Dk;
  if (v1, ρ(x, a1)) ∈ Qa1 and · · · and (vp, ρ(x, ap)) ∈ Qap then x ∈ Dk (when Qa is transitive).

AU-rough set F_∗^∪(X) = ⋃{Fi ∈ F | Fi ⊆ X}:
  if (ρ(x, a1), v1) ∈ Pa1 and · · · and (ρ(x, ap), vp) ∈ Pap then x ∈ Dk (when F = P of (35) or F = Z of (39)).

AI-rough set F_∗^∩(X) = ⋃{⋂_{i∈I}(U − Fi) | ⋂_{i∈I}(U − Fi) ⊆ X, I ⊆ {1, . . . , p + 1}}:
  if (ρ(y, a1), v1) ∉ Qa1 and · · · and (ρ(y, ap), vp) ∉ Qap then y ∈ Dk (when F = Q of (36) or F = W of (40)).


When Pa, a ∈ C, are reflexive and transitive, the types of decision rules are the same for CP- and AU-rough sets. However, the extracted decision rules are not always the same. More specifically, condition parts of extracted decision rules based on CP-rough sets are the same as those based on AU-rough sets when F = P of (35), but usually stronger than those based on AU-rough sets when F = Z of (39). This is because we have M^R_ij = M̃_ij ⊆ M̃′_ij. When Pa, a ∈ C, are only transitive, the extracted R-rules based on CP-rough sets are the same as the extracted decision rules based on AU-rough sets with F = P. In this case, the extracted decision rules also include id-rules; namely, more decision rules are extracted based on CP-rough sets than based on AU-rough sets. While converse relations Q^T_a, a ∈ C, appear in extracted R-rules based on CN-rough sets when Q is transitive, complementary relations (U × U) − Qa, a ∈ C, appear in extracted decision rules based on AI-rough sets.

Table 6. Car evaluation
Car    fuel consumption (Fu)   selling price (Pr)   size (Si)         marketability (Ma)
Car1   medium                  medium               medium            poor
Car2   high                    medium               [medium,large]    poor
Car3   [medium,high]           low                  [medium,large]    poor
Car4   low                     [low,medium]         large             good
Car5   high                    [low,high]           [small,medium]    poor
Car6   [low,medium]            low                  [medium,large]    good

7 Simple Examples

Example 1. Let us consider a decision table with interval attribute values about car evaluation, Table 6. An interval attribute value in this table shows that we do not know the exact value but only the possible range within which the exact value lies. Among the attribute values, we have the orderings low ≤ medium ≤ high and small ≤ medium ≤ large. Let us consider the decision class of good marketability, i.e., D1 = {Car4, Car6}, and extract conditions of good marketability. A car with low fuel consumption, low selling price and large size is preferable. Therefore, we can define P_Fu = ≤st, P_Pr = ≤st and P_Si = ≥st, and Q_Fu = ≥st, Q_Pr = ≥st and Q_Si = ≤st, where for intervals E1 = [ρ1^L, ρ1^R] and E2 = [ρ2^L, ρ2^R] we define E1 ≤st E2 ⇔ ρ1^R ≤ ρ2^L and E1 ≥st E2 ⇔ ρ1^L ≥ ρ2^R. We consider P of (35), Z of (39), Q of (36) and W of (40). P and Q are not reflexive but transitive. We obtain PC∗(D1) = P_∗^∪(D1) = Z_∗^∪(D1) = Q_∗^∩(D1) = W_∗^∩(D1) = {Car4, Car6} and Q̄C∗(D1) = {Car4}, where C = {Fu, Pr, Si}. Applying the proposed methods, we obtain the following decision rules:


PC∗(D1): if Pr = [low,medium] then Ma = good,
  if Fu = [low,medium] then Ma = good,
  if FuR ≤ low then Ma = good,
  if SiL ≥ large then Ma = good,
  if FuR ≤ medium and PrR ≤ low then Ma = good,
Q̄C∗(D1): if Fu = low then Ma = good,
  if Pr = [low,medium] then Ma = good,
  if FuL ≤ low then Ma = good,
P_∗^∪(D1): if FuR ≤ low then Ma = good,
  if SiL ≥ large then Ma = good,
  if FuR ≤ medium and PrR ≤ low then Ma = good,
Q_∗^∩(D1): if FuL < medium then Ma = good,
where we use Fu = [FuL, FuR] = ρ(y, Fu), Pr = [PrL, PrR] = ρ(y, Pr), Si = [SiL, SiR] = ρ(y, Si) and Ma = ρ(y, Ma) for convenience. Extracted decision rules based on Z_∗^∪(D1) and W_∗^∩(D1) are the same as those based on P_∗^∪(D1) and Q_∗^∩(D1), respectively. We can observe the similarity between the rules based on PC∗(D1) and P_∗^∪(D1), and between the rules based on Q̄C∗(D1) and Q_∗^∩(D1), respectively.
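The following is a minimal sketch, not from the paper, that reproduces the positive region PC∗(D1) = {Car4, Car6} for the data of Table 6; the numeric encoding of the ordinal values is an assumption made for illustration.

```python
# Illustrative sketch (not from the paper): PC*(D1) for the cars of Table 6.
# Ordinal values encoded as low=1 < medium=2 < high=3 (small=1 < medium=2 < large=3).

CARS = {  # car: (Fu, Pr, Si, Ma); single values are degenerate intervals
    "Car1": ((2, 2), (2, 2), (2, 2), "poor"),
    "Car2": ((3, 3), (2, 2), (2, 3), "poor"),
    "Car3": ((2, 3), (1, 1), (2, 3), "poor"),
    "Car4": ((1, 1), (1, 2), (3, 3), "good"),
    "Car5": ((3, 3), (1, 3), (1, 2), "poor"),
    "Car6": ((1, 2), (1, 1), (2, 3), "good"),
}

def le_st(e1, e2):   # E1 <=st E2  iff  E1's right end <= E2's left end
    return e1[1] <= e2[0]

def ge_st(e1, e2):   # E1 >=st E2  iff  E1's left end >= E2's right end
    return e1[0] >= e2[1]

P_ATTR = (le_st, le_st, ge_st)   # P_Fu = <=st, P_Pr = <=st, P_Si = >=st

def P_C(x):
    """P_C(x): cars y with (y, x) in P_C, i.e., surely no worse than x attribute-wise."""
    return {y for y in CARS
            if all(rel(CARS[y][k], CARS[x][k]) for k, rel in enumerate(P_ATTR))}

D1 = {c for c, row in CARS.items() if row[3] == "good"}     # {Car4, Car6}
positive = {x for x in D1 if P_C(x) <= D1}                  # PC*(D1)
print(positive)   # {'Car4', 'Car6'}  (here P_C is not reflexive; P_C(x) = ∅)
```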

Table 7. Survivability of alpinists with respect to foods and tools
       foods (Fo)   tools (To)   survivability (Sur)
Alp1   {a}          {A, B}       low
Alp2   {a, b, c}    {A, B}       high
Alp3   {a, b}       {A}          low
Alp4   {b}          {A}          low
Alp5   {a, b}       {A, B}       high

Example 2. Consider an alpinist problem. There are three packages a, b and c of foods and two packages A and B of tools. When an alpinist climbs a mountain, he/she should carry foods and tools in order to come back safely. Assume that the survivability Sur is determined by the foods Fo and tools To packed in his/her knapsack and that a set of data is given as in Table 7. Disregarding the weight, we think that the more foods and tools, the higher the survivability. In this sense, we consider an inclusion relation ⊇ for both attributes Fo and To. Namely, we adopt ⊇ for the positively extensive relation P and ⊆ for the negatively extensive relation Q. Since ⊇ satisfies reflexivity and transitivity and ⊆ is the converse of ⊇, all generalized rough sets described in this paper, i.e., CP-rough sets, CN-rough sets, AU-rough sets and AI-rough sets, coincide with one another. Indeed, for the class D1 of Sur = high, we have PC∗(D1) = Q̄C∗(D1) = P_∗^∪(D1) = Z_∗^∪(D1) = Q_∗^∩(D1) = W_∗^∩(D1) = D1 = {Alp2, Alp5}, where C = {Fo, To} and P, Q, Z and W are defined by (35), (36), (39) and (40). Extracting decision rules based on the rough sets PC∗(D1), Q̄C∗(D1), P_∗^∪(D1), Z_∗^∪(D1), Q_∗^∩(D1) and W_∗^∩(D1), we have the following decision rules:


PC∗(D1): if Fo ⊇ {a, b, c} then Sur = high,
  if Fo ⊇ {a, b} and To ⊇ {A, B} then Sur = high,
Z_∗^∪(D1): if Fo ⊇ {c} then Sur = high,
  if Fo ⊇ {b} and To ⊇ {B} then Sur = high,
Q_∗^∩(D1): if Fo ⊄ {a, b} then Sur = high,
  if Fo ⊄ {a} and To ⊄ {A} then Sur = high,

where we use Fo = ρ(y, Fo), To = ρ(y, To) and Sur = ρ(y, Sur) for convenience. Extracted decision rules based on Q̄C∗(D1), P_∗^∪(D1) and W_∗^∩(D1) are the same as those based on PC∗(D1), PC∗(D1) and Q_∗^∩(D1), respectively. Unlike in the previous example, the extracted decision rules based on Q_∗^∩(D1) are not very similar to those based on Q̄C∗(D1), i.e., those based on PC∗(D1). This is because the inclusion relation ⊆ is a partial order, so that the negation of an inclusion relation is very different from the converse of the inclusion relation. As shown in this example, even if the positive region, the certain region and the lower approximations coincide with one another, the extracted if-then rules differ according to the underlying generalized rough sets.

8 Concluding Remarks

We have proposed four kinds of generalized rough sets based on two different interpretations of rough sets: rough sets as classification of objects into positive, negative and boundary regions, and rough sets as approximation by means of elementary sets in a given family. We have described the relationships of the proposed rough sets to the previous rough sets in general settings. Fundamental properties of the generalized rough sets have been investigated. Moreover, relations among the four generalized rough sets have also been discussed. Rule extraction based on the generalized rough sets has been proposed. We have shown the differences in the types of extracted decision rules depending on the underlying rough sets. Rule extraction methods based on modified decision matrices have been proposed. A few numerical examples have been given to illustrate the differences among the extracted decision rules. One of the examples has demonstrated that the extracted decision rules can differ with the underlying generalized rough sets even when the positive region, the certain region and the lower approximations coincide with one another. For rule extraction, we did not utilize possible regions, conceivable regions or upper approximations; it would be possible to extract decision rules corresponding to those sets. The proposed rule extraction methods are all based on decision matrices and require a lot of computational effort. Other extraction methods like LERS [4] should be investigated. In this case, we should give up extracting all decision rules and extract only useful decision rules, or a minimal body of decision rules which covers all objects. In all methods proposed in this paper, we extracted all minimal conditions. This may increase the risk of giving wrong conclusions when we apply the obtained decision rules to infer conclusions for new objects. Risk and minimality of condition descriptions are in a trade-off relation. We should investigate an extraction method for decision rules with moderate risk and sufficiently weak conditions. Those topics and applications to real-world problems would be our future work.


References

1. Bonikowski, Z., Bryniarski, E., Wybraniec-Skardowska, U.: Extensions and intensions in the rough set theory. Information Sciences 107 (1998) 149–167
2. Dubois, D., Grzymala-Busse, J., Inuiguchi, M., Polkowski, L. (eds.): Fuzzy Rough Sets: Fuzzy and Rough and Fuzzy along Rough. Springer-Verlag, Berlin (to appear)
3. Greco, S., Matarazzo, B., Słowiński, R.: The use of rough sets and fuzzy sets in MCDM. In: Gal, T., Stewart, T.J., Hanne, T. (eds.): Multicriteria Decision Making: Advances in MCDM Models, Algorithms, Theory, and Applications. Kluwer Academic Publishers, Boston, MA (1999) 14-1–14-59
4. Grzymala-Busse, J.W.: LERS: A system for learning from examples based on rough sets. In: Słowiński, R. (ed.): Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers, Dordrecht (1992) 3–18
5. Inuiguchi, M., Tanino, T.: On rough sets under generalized equivalence relations. Bulletin of International Rough Set Society 5(1/2) (2001) 167–171
6. Inuiguchi, M., Tanino, T.: Generalized rough sets and rule extraction. In: Alpigini, J.J., Peters, J.F., Skowron, A., Zhong, N. (eds.): Rough Sets and Current Trends in Computing. Springer-Verlag, Berlin (2002) 105–112
7. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning About Data. Kluwer Academic Publishers, Boston, MA (1991)
8. Shan, N., Ziarko, W.: Data-based acquisition and incremental modification of classification rules. Computational Intelligence 11 (1995) 357–370
9. Skowron, A., Rauszer, C.M.: The discernibility matrix and functions in information systems. In: Słowiński, R. (ed.): Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers, Dordrecht (1992) 331–362
10. Słowiński, R., Vanderpooten, D.: A generalized definition of rough approximations based on similarity. IEEE Transactions on Knowledge and Data Engineering 12(2) (2000) 331–336
11. Yao, Y.Y.: Two views of the theory of rough sets in finite universes. International Journal of Approximate Reasoning 15 (1996) 291–317
12. Yao, Y.Y.: Relational interpretations of neighborhood operators and rough set approximation operators. Information Sciences 111 (1998) 239–259
13. Yao, Y.Y., Lin, T.Y.: Generalization of rough sets using modal logics. Intelligent Automation and Soft Computing 2(2) (1996) 103–120

Appendix: Proofs of Fundamental Properties

(a) The proof of (vi) in Table 2. When Q is the converse of P, we have y ∈ P(x) if and only if x ∈ Q(y). Then we obtain Q^∗(U − X) = (U − X) ∪ ⋃{Q(x) | x ∈ U − X} = (U − X) ∪ {x ∈ U | P(x) ∩ (U − X) ≠ ∅}. Hence we have

Q̄_∗(X) = U − Q^∗(U − X) = X ∩ {x ∈ U | P(x) ∩ (U − X) = ∅} = X ∩ {x ∈ U | P(x) ⊆ X} = {x ∈ X | P(x) ⊆ X} = P_∗(X).

The other equation can be obtained similarly.


(b) The proof of (vii) in Table 2. The relations

P^∗(P_∗(X)) = P_∗(X) ∪ ⋃{P(x) | x ∈ P_∗(X)} = P_∗(X) ∪ ⋃{P(x) | P(x) ⊆ X, x ∈ X} ⊆ X,
P_∗(P^∗(X)) = {x ∈ P^∗(X) | P(x) ⊆ P^∗(X)} = P^∗(X) ∩ {x ∈ U | P(x) ⊆ X ∪ ⋃{P(y) | y ∈ X}} ⊇ X

are valid. Thus we have X ⊇ P^∗(P_∗(X)) and X ⊆ P_∗(P^∗(X)). This also implies X ⊇ Q̄^∗(Q̄_∗(X)) and X ⊆ Q̄_∗(Q̄^∗(X)), because we obtain U − X ⊇ Q^∗(Q_∗(U − X)) and U − X ⊆ Q_∗(Q^∗(U − X)). Hence, the first four relations are obvious. When P is transitive, x ∈ P(y) implies P(x) ⊆ P(y). Let z ∈ P_∗(X), i.e., z ∈ X and P(z) ⊆ X. Suppose z ∉ P_∗(P_∗(X)). Then we obtain P(z) ⊄ P_∗(X). Namely, there exists y ∈ P(z) such that y ∉ P_∗(X). Since P(z) ⊆ X, y ∈ X. Combining this with y ∉ P_∗(X), we have P(y) ⊄ X. From the transitivity of P, however, y ∈ P(z) implies P(y) ⊆ P(z) ⊆ X. A contradiction. Therefore, we have proved P_∗(X) ⊆ P_∗(P_∗(X)). The opposite inclusion is obvious. Hence P_∗(P_∗(X)) = P_∗(X). Now, let us prove P^∗(P^∗(X)) = P^∗(X) when P is transitive. It suffices to prove P^∗(P^∗(X)) ⊆ P^∗(X) since the opposite inclusion is obvious. Let z ∈ P^∗(P^∗(X)), i.e., (i) z ∈ P^∗(X) or (ii) there exists y ∈ P^∗(X) such that z ∈ P(y). We prove z ∈ P^∗(X). In case (i), it is straightforward. Consider case (ii). Since y ∈ P^∗(X), (iia) y ∈ X or (iib) there exists w ∈ X such that y ∈ P(w). In case (iia), we obtain z ∈ P^∗(X) from z ∈ P(y). In case (iib), from the transitivity of P, we have P(y) ⊆ P(w). Combining this fact with z ∈ P(y), we get z ∈ P(w). Since w ∈ X, we obtain z ∈ P^∗(X). Therefore, in any case, we obtain z ∈ P^∗(X). Hence, P^∗(P^∗(X)) = P^∗(X). The same properties with respect to a relation Q can be proved similarly. When P is reflexive and transitive, we can prove {x ∈ U | P(x) ⊆ X} = ⋃{P(x) | P(x) ⊆ X}. This equation can be proved in the following way. Let y ∈ ⋃{P(x) | P(x) ⊆ X}. There exists z ∈ U such that y ∈ P(z) ⊆ X. Because of the transitivity, P(y) ⊆ P(z) ⊆ X. This implies that y ∈ {x ∈ U | P(x) ⊆ X}. Hence, {x ∈ U | P(x) ⊆ X} ⊇ ⋃{P(x) | P(x) ⊆ X}. The opposite inclusion is obvious from the reflexivity. From the reflexivity, we have P_∗(X) = {x ∈ U | P(x) ⊆ X} and P^∗(X) = ⋃{P(x) | x ∈ X}. Using these equations, we obtain

P^∗(P_∗(X)) = ⋃{P(x) | x ∈ P_∗(X)} = ⋃{P(x) | P(x) ⊆ X} = P_∗(X),
P_∗(P^∗(X)) = {x ∈ U | P(x) ⊆ P^∗(X)} = ⋃{P(x) | P(x) ⊆ P^∗(X)} = ⋃{P(x) | x ∈ X} = P^∗(X).

The properties with respect to the relation Q can be proved in the same way.

(c) The proof of (iii) in Table 3. The first and fourth inclusion relations are obvious. We prove F_∪^∗(X ∪ Y) = F_∪^∗(X) ∪ F_∪^∗(Y) only; the second equality can then be proved by the duality (vi).


F_∪^∗(X ∪ Y) ⊇ F_∪^∗(X) ∪ F_∪^∗(Y) is straightforward. We prove the opposite inclusion. Let x ∈ F_∪^∗(X ∪ Y). Suppose x ∉ F_∪^∗(X) and x ∉ F_∪^∗(Y). Then there exist J, K ⊆ {1, 2, . . . , p} such that x ∉ ⋃_{j∈J} Fj ⊇ X and x ∉ ⋃_{j∈K} Fj ⊇ Y. This fact implies that x ∉ ⋃_{j∈J} Fj ∪ ⋃_{j∈K} Fj = ⋃_{j∈J∪K} Fj ⊇ X ∪ Y. This contradicts x ∈ F_∪^∗(X ∪ Y). Hence, we have x ∈ F_∪^∗(X) ∪ F_∪^∗(Y).

(d) The proof of (a) in Table 4. We only prove P_∗(X) ⊆ P_∗^∪(X) = P^∗(P_∗(X)) ⊆ X ⊆ P_∪^∗(X) ⊆ P^∗(X) when P is reflexive. The other assertion can be proved similarly. The first inclusion and the equality are obvious from the reflexivity. The relations with the set X are obtained from (i) in Table 3. From the reflexivity, we have

P^∗(X) = ⋃_{x∈X} P(x) ∈ { ⋃_{x∈Y} P(x) | X ⊆ ⋃_{x∈Y} P(x), Y ⊆ U }.

Then the last inclusion is proved as follows:

P^∗(X) ⊇ ⋂{ ⋃_{x∈Y} P(x) | X ⊆ ⋃_{x∈Y} P(x), Y ⊆ U } = P_∪^∗(X).

(e) The proof of (b) in Table 4. We only prove the first part; the second part can be obtained similarly. Let x ∈ P_∗^∪(X). There exists y such that x ∈ P(y) ⊆ X. Because of the transitivity, P(x) ⊆ P(y). Therefore, P(x) ⊆ X. This fact together with x ∈ X implies x ∈ P_∗(X). Hence, P_∗^∪(X) ⊆ P_∗(X). The relation P_∗(X) ⊆ X ⊆ P^∗(X) has been given as (i) in Table 2. Finally, we prove P^∗(X) ⊆ P_∪^∗(X). Let z ∈ X ⊆ P_∪^∗(X). Then, for every Wi such that X ⊆ ⋃_{w∈Wi} P(w), there exists wi ∈ Wi such that z ∈ P(wi). By transitivity, P(z) ⊆ P(wi). Therefore,

P(z) ⊆ ⋂{ ⋃_{w∈Wi} P(w) | X ⊆ ⋃_{w∈Wi} P(w), Wi ⊆ U }.

Hence, we have

P^∗(X) = X ∪ ⋃_{z∈X} P(z) ⊆ ⋂{ ⋃_{w∈Wi} P(w) | X ⊆ ⋃_{w∈Wi} P(w), Wi ⊆ U } = P_∪^∗(X).

Towards Scalable Algorithms for Discovering Rough Set Reducts

Marzena Kryszkiewicz¹ and Katarzyna Cichoń¹,²

¹ Institute of Computer Science, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland, [email protected]
² Institute of Electrical Apparatus, Technical University of Lodz, Stefanowskiego 18/22, 90-924 Lodz, Poland, [email protected]

Abstract. Rough set theory allows one to find reducts from a decision table, which are minimal sets of attributes preserving the required quality of classification. In this article, we propose a number of algorithms for discovering all generalized reducts (preserving generalized decisions), all possible reducts (preserving upper approximations) and all certain reducts (preserving lower approximations). The new RAD and CoreRAD algorithms we propose discover exact reducts. They require, however, the determination of all maximal attribute sets that are not supersets of reducts. For the case when their determination is infeasible, we propose the GRA and CoreGRA algorithms, which search for approximate reducts. These two algorithms are well suited to the discovery of supersets of reducts from very large decision tables.

1 Introduction

Rough set theory has been conceived as a non-statistical tool for analysis of imperfect data [17]. Rough set methodology allows one to discover interesting data dependencies, decision rules, repetitive data patterns and to analyse conflict situations [24]. The reasoning in the rough set approach is based solely on available information. Objects are perceived as indiscernible if they have the same description in the system. This may be a reason for uncertainty. Two or more objects identically described in the system may belong to different classes (concepts). Such concepts, though vague, can be defined roughly by means of a pair of crisp sets: lower approximation and upper approximation. Lower approximation of a concept is a set of objects that surely belong to that concept, whereas upper approximation is a set of objects that possibly belong to that concept. Rough set theory allows one to find reducts from a decision table, which are minimal sets of attributes preserving the required quality of classification. For example, a reduct may preserve lower approximations of decision classes, or upper approximations of decision classes, or both. A number of methods for discovering reducts have already been proposed in the literature [2-8, 11, 15-17, 20-31]. The most popular


methods are based on discernibility matrices [20]. Other methods are based, e.g., on the theory of cones and fences [7, 19]. Unfortunately, the existing methods are not capable of discovering all reducts from very large decision tables, although research on discovering rough set decision rules in large data sets started a few years ago (see, e.g., [9-10, 14]). One may try to overcome this problem either by applying heuristics or data sampling or both, or by restricting the search to some reducts instead of all of them. Recently, we have proposed the GRA-like (GeneralizedReductsApriori) algorithms for discovering approximate generalized, possible and certain reducts from very large decision tables [13]. This article extends the results obtained in [13]. Here, we propose new algorithms - RAD and CoreRAD - for discovering exact generalized, possible and certain reducts. CoreRAD is a variation of RAD which uses information on the so-called core in order to restrict the number of candidates for reducts and the number of scans of the decision table. The new algorithms require the determination of all maximal sets that are not supersets of reducts (MNSR). The knowledge of MNSR is sufficient to evaluate candidates for reducts correctly. The method of creating and pruning candidates is very similar to the one proposed in GRA [13]. For the case when the calculation of MNSR is infeasible, we advocate searching for approximate reducts. In the article, we first introduce the theory behind approximate reducts and then present the respective algorithms (GRA and CoreGRA) in detail. The layout of the article is as follows: In Section 2, we recall basic rough set notions and prove some of their properties that will be applied in the proposed algorithms. In Section 3, we propose the RAD algorithm for discovering generalized and possible reducts. A number of optimizations of the basic algorithm are discussed as well. The CoreRAD algorithm, which calculates both the core and the reducts, is offered in Section 4. In Section 5, we discuss briefly how to adapt RAD and CoreRAD for the discovery of certain reducts. The notions of approximate reducts are introduced in Section 6. We prove that approximate reducts are supersets of exact reducts. The properties of approximate generalized reducts are used in the construction of the GRA algorithm, which is presented in Section 7. In Section 8, we discuss the CoreGRA algorithm, which calculates both the approximate generalized reducts and the approximate core. In Section 9, we propose simple modifications of GRA and CoreGRA that enable the usage of these algorithms for discovering approximate certain reducts. Section 10 concludes the results, indicating that the proposed solutions can be applied in the case of incomplete decision tables as well.

2 Basic Notions

2.1 Information Systems

An information system (IS) is a pair S = (O, AT), where O is a non-empty finite set of objects and AT is a non-empty finite set of attributes, such that a: O → Va for any a∈AT, where Va is called the domain of the attribute a.


An attribute-value pair (a,v), where a∈AT and v∈Va, is called an atomic descriptor. An atomic descriptor or a conjunction of atomic descriptors is called a descriptor [20]. A conjunction of atomic descriptors for attributes A⊆AT is called an A-descriptor. Let S = (O, AT). Each subset of attributes A⊆AT determines a binary indiscernibility relation IND(A), IND(A) = {(x,y)∈O×O| ∀a∈A, a(x) = a(y)}. The relation IND(A), A⊆AT, is an equivalence relation and induces a partition of O. The set of objects indiscernible from x with regard to their description on the attribute set A will be denoted by IA(x); that is, IA(x) = {y∈O| (x,y)∈IND(A)}.

Property 1 [9]. Let A, B ⊆ AT.
a) If A ⊆ B, then IB(x) ⊆ IA(x).
b) IA∪B(x) = IA(x) ∩ IB(x).
c) IA(x) = ∩a∈A Ia(x).

Let X⊆O and A⊆AT. A̲X is defined as the lower approximation of X iff A̲X = {x∈O| IA(x) ⊆ X} = {x∈X| IA(x) ⊆ X}. ĀX is defined as the upper approximation of X iff ĀX = {x∈O| IA(x) ∩ X ≠ ∅} = ∪{IA(x)| x∈X}. A̲X is the set of objects that belong to X with certainty, while ĀX is the set of objects that possibly belong to X.

2.2 Decision Tables

A decision table is an information system DT = (O, AT∪{d}), where d∉AT is a distinguished attribute called the decision, and the elements of AT are called conditions. The set of all objects whose decision value equals k, k∈Vd, will be denoted by Xk. Let us define the function ∂A: O → P(Vd), A⊆AT, as follows [18]:

∂A(x) = {d(y)| y∈IA(x)}. ∂A will be called the A-generalized decision in DT. For A = AT, an A-generalized decision will also be called briefly a generalized decision.

Table 1. DT = (O, AT∪{f}) extended by the generalized decision ∂AT.

x∈O   a  b  c  d  e  f  ∂AT
1     1  0  0  1  1  1  {1}
2     1  1  1  1  2  1  {1}
3     0  1  1  0  3  1  {1,2}
4     0  1  1  0  3  2  {1,2}
5     0  1  1  2  2  2  {2}
6     1  1  0  2  2  2  {2,3}
7     1  1  0  2  2  3  {2,3}
8     1  1  0  3  2  3  {3}
9     1  0  0  3  2  3  {3}

Table 2. DT’ = (O, AT∪{∂AT}) – sorted and reduced version of DT from Table 1.

x∈O in DT’ (x∈O in DT)   a  b  c  d  e  ∂AT
1 (3,4)                  0  1  1  0  3  {1,2}
2 (5)                    0  1  1  2  2  {2}
3 (1)                    1  0  0  1  1  {1}
4 (9)                    1  0  0  3  2  {3}
5 (6,7)                  1  1  0  2  2  {2,3}
6 (8)                    1  1  0  3  2  {3}
7 (2)                    1  1  1  1  2  {1}

Example 1. Table 1 describes a sample decision table DT. The conditional attributes are as follows: AT = {a, b, c, d, e}. The decision attribute is f. One may note that objects 3 and 4 are indiscernible with respect to the conditional attributes in AT.


Hence, ∂AT for object 3 contains both the decision 1 of object 3 and the decision 2 of object 4. Analogously, ∂AT for object 4 contains both its own decision (2) and the decision of object 3 (1). Please see the last column of Table 1 for the generalized decision ∂AT of all objects in DT. Let X1 be the class of objects determined by decision 1; that is, X1 = {1,2,3}. The lower and upper approximations of X1 are as follows: A̲TX1 = {1,2} and ĀTX1 = {1,2,3,4}.

Property 2 shows that the approximations of decision classes can be expressed by means of an A-generalized decision.

Property 2 [9-11]. Let Xi ⊆ O and A⊆AT.
a) IA(x) ⊆ Xi iff ∂A(x) = {i}.
b) IA(x) ∩ Xi ≠ ∅ iff i ∈ ∂A(x).
c) A̲Xi = {x∈O| ∂A(x) = {i}}.
d) ĀXi = {x∈O| i ∈ ∂A(x)}.
e) ∂A(x) = ∂A(y) for any (x,y)∈IND(A).

By Property 2e, objects having the same A-descriptor also have the same A-generalized decision value; that is, the A-descriptor uniquely determines the A-generalized decision value for all objects satisfying this descriptor. In the sequel, the A-generalized decision value determined by an A-descriptor t, such that t is satisfied by at least one object in the system, will be denoted by ∂t. Table 3 shows the generalized decision values determined by the atomic descriptors that occur in Table 1.

Table 3. Generalized decision values ∂(a,v) determined by atomic descriptors (a,v), where a∈AT, v∈Va, supported by DT from Table 1.

(a,v)    (a,0)  (a,1)    (b,0)  (b,1)    (c,0)    (c,1)  (d,0)  (d,1)  (d,2)  (d,3)  (e,1)  (e,2)    (e,3)
∂(a,v)   {1,2}  {1,2,3}  {1,3}  {1,2,3}  {1,2,3}  {1,2}  {1,2}  {1}    {2,3}  {3}    {1}    {1,2,3}  {1,2}
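
The notions above can be made concrete with a small, illustrative sketch. The following Python fragment is our own minimal sketch, not part of the paper: it encodes DT from Table 1 as a dictionary and computes I_A(x), the A-generalized decision ∂A(x), the lower/upper approximations, and the atomic-descriptor decisions ∂(a,v) of Table 3; all names are hypothetical.

# Decision table DT from Table 1: conditions AT = (a, b, c, d, e), decision f.
DT = {
    1: {'a': 1, 'b': 0, 'c': 0, 'd': 1, 'e': 1, 'f': 1},
    2: {'a': 1, 'b': 1, 'c': 1, 'd': 1, 'e': 2, 'f': 1},
    3: {'a': 0, 'b': 1, 'c': 1, 'd': 0, 'e': 3, 'f': 1},
    4: {'a': 0, 'b': 1, 'c': 1, 'd': 0, 'e': 3, 'f': 2},
    5: {'a': 0, 'b': 1, 'c': 1, 'd': 2, 'e': 2, 'f': 2},
    6: {'a': 1, 'b': 1, 'c': 0, 'd': 2, 'e': 2, 'f': 2},
    7: {'a': 1, 'b': 1, 'c': 0, 'd': 2, 'e': 2, 'f': 3},
    8: {'a': 1, 'b': 1, 'c': 0, 'd': 3, 'e': 2, 'f': 3},
    9: {'a': 1, 'b': 0, 'c': 0, 'd': 3, 'e': 2, 'f': 3},
}
AT = ('a', 'b', 'c', 'd', 'e')

def ind_class(A, x):
    """I_A(x): objects indiscernible from x on the attribute set A."""
    return {y for y in DT if all(DT[y][a] == DT[x][a] for a in A)}

def gen_decision(A, x):
    """The A-generalized decision of x: decisions of all objects in I_A(x)."""
    return frozenset(DT[y]['f'] for y in ind_class(A, x))

def approximations(A, X):
    """Lower and upper approximation of an object set X by IND(A)."""
    lower = {x for x in DT if ind_class(A, x) <= X}
    upper = {x for x in DT if ind_class(A, x) & X}
    return lower, upper

def atomic_gen_decision(a, v):
    """∂(a,v): decisions of the objects supporting the atomic descriptor (a,v)."""
    return frozenset(DT[x]['f'] for x in DT if DT[x][a] == v)

# Example 1 revisited: gen_decision(AT, 3) == {1, 2}, and the approximations of
# X1 = {1, 2, 3} are {1, 2} (lower) and {1, 2, 3, 4} (upper), matching the text.
print(gen_decision(AT, 3), approximations(AT, {1, 2, 3}))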

We note that the A- and B-generalized decision values for an object x provide an upper bound on the A∪B-generalized decision value for x.

Property 3 [13]. Let A,B⊆AT, x∈DT. ∂A∪B(x) ⊆ ∂A(x) ∩ ∂B(x).
Proof: ∂A∪B(x) = {d(y)| y∈IA∪B(x)} = /* by Property 1b */ = {d(y)| y∈(IA(x) ∩ IB(x))} ⊆ {d(y)| y∈IA(x)} ∩ {d(y)| y∈IB(x)} = ∂A(x) ∩ ∂B(x). !

We conclude further that the elementary a-generalized decision values for x, a∈A, can be used for calculating an upper bound on the A-generalized decision value for x.

Corollary 1. Let A⊆AT and x∈DT. ∂A(x) ⊆ ∩a∈A ∂a(x) = ∩a∈A ∂(a, a(x)).

Example 2. The {ce}-generalized decision value calculated from DT in Table 1 for object 5 (∂{ce}(5) = {1,2}) equals its upper bound ∂c(5) ∩ ∂e(5) = ∂(c,1) ∩ ∂(e,2) = {1,2} ∩ {1,2,3} = {1,2}. On the other hand, the {ce}-generalized decision value for object 6 (∂{ce}(6) = {2,3}) is a proper subset of its upper bound ∂c(6) ∩ ∂e(6) = ∂(c,0) ∩ ∂(e,2) = {1,2,3} ∩ {1,2,3} = {1,2,3}. !
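
For illustration only, Example 2 can be reproduced with the helpers sketched after Table 3 (gen_decision and atomic_gen_decision are our own hypothetical names, not part of the paper):

bound5 = atomic_gen_decision('c', 1) & atomic_gen_decision('e', 2)   # {1, 2}
exact5 = gen_decision(('c', 'e'), 5)                                 # {1, 2}  -- the bound is attained
bound6 = atomic_gen_decision('c', 0) & atomic_gen_decision('e', 2)   # {1, 2, 3}
exact6 = gen_decision(('c', 'e'), 6)                                 # {2, 3}  -- a proper subset of the bound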


Corollary 2. Let A⊆B⊆AT, x∈DT. ∂B(x) ⊆ ∂A(x).
Proof: By Property 3, ∂B(x) ⊆ ∂A(x) ∩ ∂B\A(x). Hence, ∂B(x) ⊆ ∂A(x). !

Finally, we observe that the A- and B-generalized decision values for an object x, where A⊆B⊆AT, are identical when their cardinalities are identical.

Proposition 1. Let A⊆B⊆AT and x∈DT. ∂A(x) = ∂B(x) iff |∂A(x)| = |∂B(x)|.
Proof: (⇒) Straightforward. (⇐) Let |∂A(x)| = |∂B(x)| (*). Since A⊆B, by Corollary 2, ∂A(x) ⊇ ∂B(x). Taking into account (*), we conclude ∂A(x) = ∂B(x). !

2.3 Reducts for Decision Tables

Reducts for decision tables are minimal sets of conditional attributes that preserve the required properties of classification. In what follows, we provide definitions of reducts preserving the lower approximations of decision classes, the upper approximations of decision classes, and the objects' generalized decisions, respectively. Let ∅≠A⊆AT.

A is a certain reduct (c-reduct) of DT iff A is a minimal attribute set such that
∀x∈O, x∈A̲TXd(x) ⇒ IA(x) ⊆ Xd(x)                          (c)
A certain reduct is a set of attributes that allows us to distinguish each object x belonging to the lower approximation of its decision class in DT from the objects that do not belong to this approximation.

A is a possible reduct (p-reduct) of DT iff A is a minimal attribute set such that
∀x∈O, IA(x) ⊆ ĀTXd(x)                                    (p)
A possible reduct is a set of attributes that allows us to distinguish each object x in DT from the objects that do not belong to the upper approximation of its decision class.

A is a generalized decision reduct (g-reduct) of DT iff A is a minimal set such that
∀x∈O, ∂A(x) = ∂AT(x)                                      (g)
A generalized decision reduct is a set of attributes that preserves the generalized decision value of each object x in DT.

In the sequel, a superset of a t-reduct, where t ∈ {c, p, g}, will be called a t-super-reduct.

Corollary 3. AT is a superset of all c-reducts, p-reducts, and g-reducts for any DT.

Proposition 2. Let A ⊆ AT.
a) If A satisfies property (c), then all of its supersets satisfy property (c).
b) If A does not satisfy property (c), then none of its subsets satisfies property (c).
c) If A satisfies property (p), then all of its supersets satisfy property (p).
d) If A does not satisfy property (p), then none of its subsets satisfies property (p).
e) If A satisfies property (g), then all of its supersets satisfy property (g).
f) If A does not satisfy property (g), then none of its subsets satisfies property (g).
Proof: Let A⊆B⊆AT and x∈O.


Ad a) Let A satisfy property (c) and x∈A̲TXd(x). We are to prove that IB(x) ⊆ Xd(x). Since A satisfies property (c), IA(x) ⊆ Xd(x) (*). By Property 1a, IB(x) ⊆ IA(x) (**). By (*) and (**), IB(x) ⊆ Xd(x).
Ad c) Analogous to a).
Ad e) Let A satisfy property (g). We are to prove that ∂B(x) = ∂AT(x). Since A satisfies property (g), ∂A(x) = ∂AT(x) (*). By Corollary 2, ∂AT(x) ⊆ ∂B(x) ⊆ ∂A(x) (**). By (*) and (**), ∂B(x) = ∂AT(x).
Ad b, d, f) Follow immediately from Proposition 2a, c, e, respectively, by contraposition. !

Corollary 4.
a) c-super-reducts are all and the only attribute sets that satisfy property (c).
b) p-super-reducts are all and the only attribute sets that satisfy property (p).
c) g-super-reducts are all and the only attribute sets that satisfy property (g).
Proof: By definition of reducts and Proposition 2. !

Interestingly, not only g-reducts, but also p-reducts and c-reducts, can be determined by examining generalized decisions.

Theorem 1 [11]. The set of all generalized decision reducts of DT equals the set of all possible reducts of DT.

Lemma 1 [13]. A⊆AT is a c-reduct of DT iff A is a minimal set such that ∀x∈O, ∂AT(x) = {d(x)} ⇒ ∂A(x) = {d(x)}.
Proof: By Property 2a,c. !

Corollary 5 [13]. A⊆AT is a c-reduct of DT iff A is a minimal set such that ∀x∈O, ∂AT(x) = {d(x)} ⇒ ∂A(x) = ∂AT(x).

2.4 Core

The notion of a core captures the attributes that cannot be removed from AT without losing the required classification property (i.e. without AT\{a} ceasing to be a super-reduct). The generic notion of a t-core, t ∈ {c, p, g}, corresponding to c-reducts, p-reducts and g-reducts, respectively, is defined as follows: t-core = {a∈AT| AT\{a} is not a t-super-reduct}. Clearly, the p-core and the g-core are the same.

Proposition 3. Let R be the set of all reducts of the same type t, where t ∈ {c, p, g}. Then t-core = ∩R.

Proof: Let us consider the case when R is the set of all c-reducts. Let b ∈ c-core. Hence b is an attribute in AT such that AT\{b} is not a superset of any c-reduct. By Corollary 4a and Proposition 2b, no attribute set without b satisfies property (c). Hence, no attribute set without b is a c-reduct. Thus, all c-reducts contain b; that is, ∩R ⊇ {b}. Generalizing this observation, ∩R ⊇ c-core.


Now, we will prove by contradiction that ∩R \ c-core is an empty set. Let d ∈ ∩R and d ∉ c-core. Since d ∉ c-core, then, by the definition of a core, AT\{d} is a superset of some c-reduct, say B. Since B is a subset of AT\{d}, B does not contain d either. This means that among the c-reducts there is an attribute set (B) which does not contain d. Therefore, d ∉ ∩R, which contradicts the assumption. The cases when R is the set of all p-reducts or g-reducts can be proved analogously from Corollary 4b,c and Proposition 2d,f, respectively. !
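
Before turning to scalable algorithms, the defining conditions of Sections 2.3-2.4 can be checked directly by brute force on small tables. The following Python sketch is our own illustration (not an algorithm from the paper) and assumes the DT/AT encoding of Table 1 used in the earlier sketch; it enumerates all g-reducts by testing property (g) and derives the g-core as their intersection (Proposition 3).

from itertools import combinations

# Assumes DT and AT as defined in the sketch after Table 3.
def gd(B, x):
    """The B-generalized decision of object x."""
    return frozenset(DT[y]['f'] for y in DT
                     if all(DT[y][b] == DT[x][b] for b in B))

def satisfies_g(A):
    """Property (g): A preserves the AT-generalized decision of every object."""
    return all(gd(A, x) == gd(AT, x) for x in DT)

def g_reducts():
    """All minimal attribute sets satisfying property (g) (naive enumeration)."""
    reducts = []
    for k in range(1, len(AT) + 1):
        for A in combinations(AT, k):
            if satisfies_g(A) and not any(set(R) <= set(A) for R in reducts):
                reducts.append(A)
    return reducts

# For Table 1 this yields the g-reducts {a,d} and {c,d}, hence the g-core {d},
# which agrees with the illustrations of RAD and CoreRAD below.
reducts = g_reducts()
core = set(AT).intersection(*map(set, reducts))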

3 Discovering Generalized Reducts

3.1 Main Algorithm

Notation for RAD
• Rk – candidate k attribute sets (potential g-reducts);
• Ak – k attribute sets that are not g-super-reducts;
• MNSR – all maximal conditional attribute sets that are not g-super-reducts;
• MNSRk – k attribute sets in MNSR;
• DT’ – reduced DT;
• x.a – the value of an attribute a for object x;
• x.∂AT – the generalized decision value for object x.

Algorithm. RAD;
  DT’ = GenDecRepresentation-of-DT(DT);
  MNSR = MaximalNonSuperReducts(DT’);
  /* search g-reducts – note: g-reducts are all minimal attribute sets that are not subsets of any set in MNSR */
  if |MNSR_{|AT|-1}| = |AT| then return AT;                 // optional optimizing step 1
  R1 = {{a}| a∈AT}; A1 = {};                                // initialize 1 attribute candidates for g-reducts
  forall B ∈ MNSR do move subsets of B from R1 to A1;       // subsets of non-super-reducts are not reducts
  for (k = 1; Ak ≠ {}; k++) do begin
    if |MNSR| = 1 then return ∪k Rk;                        // optional optimizing step 2
    MNSR = MNSR \ MNSRk;                                    // MNSRk is not useful any more – optional optimizing step 3
    /* create k+1 attribute g-reducts Rk+1 and non-g-super-reducts Ak+1 from Ak and MNSR */
    RADGen(Rk+1, Ak+1, Ak, MNSR);
  endfor;
  return ∪k Rk;

The RAD (ReductsAprioriDiscovery) algorithm we propose starts by determining the reduced decision table DT’, which stores only the conditional attributes AT and the AT-generalized decision for each object in DT instead of the original decision (see Section 3.2 for the description of the GenDecRepresentation-of-DT function). Each class of objects indiscernible w.r.t. AT ∪ {∂AT} in DT (see Table 1) is represented by one object in DT’ (see Table 2). Next, DT’ is examined in order to find all maximal attribute sets MNSR that are not g-super-reducts (see Section 3.3 for the description of the MaximalNonSuperReducts function). The information on MNSR is sufficient to derive all g-reducts; namely, the g-reducts are those sets that have no superset in MNSR (i.e., are g-super-reducts), but all of whose proper subsets have supersets in MNSR (i.e., are not g-super-reducts).


Now, RAD creates the initial candidates for g-reducts, which are singleton sets and are stored in R1. The candidates in R1 that are subsets of sets in MNSR are moved to the 1 attribute non-g-super-reducts A1. The main loop starts. In each k-th iteration, k ≥ 1, the k+1 attribute candidates Rk+1 are created from the k attribute sets in Ak, which are not g-super-reducts (see Section 3.4 for the description of the RADGen procedure). The information on the non-g-super-reducts MNSR is used to prune the candidates in Rk+1. Namely, each candidate in Rk+1 that has a superset in MNSR is not a g-super-reduct. Therefore it is moved from Rk+1 to Ak+1. The algorithm stops when Ak = {}. The optional optimizing steps in RAD are discussed in Section 3.5.

3.2 Determining the Generalized Decision Representation of a Decision Table

The GenDecRepresentation-of-DT function starts with sorting the given decision table DT w.r.t. the set of all conditional attributes and (optionally) the decision attribute. The sorting enables fast determination of the generalized decision values for all classes of objects indiscernible w.r.t. AT. Each such class will be represented by one object in the new decision table DT’ = (AT, {∂AT}), where the decision attribute is replaced by the generalized decision.

function GenDecRepresentation-of-DT(decision table DT);
  DT’ = {};
  sort DT with respect to AT and d;     // apply any ordering of attributes in AT, e.g. lexicographical
  x = first object in DT;               // or null if DT is empty
  while x is not null do begin
    forall a∈AT do x’.a = x.a;
    x’.∂AT = {d(y)| y∈IAT(x)};
    add x’ to DT’;
    x = the first object located just after IAT(x) in DT;
  endwhile;
  return DT’;
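
As an aside, the same reduction can be sketched in Python with a hash map instead of explicit sorting; the grouping key is the AT-descriptor of each object, and the value accumulated per group is its generalized decision. The function name and the DT/AT encoding (from the sketch after Table 3) are our own assumptions.

from collections import defaultdict

def gen_dec_representation(DT, AT, d='f'):
    """Build DT': one representative row per class of objects indiscernible on AT,
    with the decision replaced by the AT-generalized decision."""
    groups = defaultdict(set)
    for x, row in DT.items():
        key = tuple(row[a] for a in AT)          # the AT-descriptor of x
        groups[key].add(row[d])                  # collect the decisions of the class
    # one row per class, ordered by the AT-descriptor, as the sorting-based pseudocode would produce
    return [dict(zip(AT, key), gen_dec=frozenset(ds)) for key, ds in sorted(groups.items())]

# For Table 1 this produces the seven rows of Table 2 (up to the ordering of classes).
DT_reduced = gen_dec_representation(DT, AT)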

3.3 Calculating Maximal Non-super-reducts The purpose of the MaximalNonSuperReducts function is to determine all maximal conditional attribute sets that are not g-super-reducts. To this end, each object in the reduced decision table DT’ is compared with all other objects from different generalized decision classes. The result of the comparison of two objects, say x and y, belonging to different classes is the set of all attributes on which x and y are indiscernible. Clearly, such a resulting set is not a g-super-reduct, since it does not discern at least one pair of objects belonging to different generalized decision classes. The comparison results, which are non-g-super-reducts, are stored in the NSR variable. After the comparison of objects is accomplished, NSR contains a superset of all maximal non-g-super-reducts. The function returns MAX(NSR), which can be calculated as the final step or on the fly. For DT’ from Table 2, MaximalNonSuperReducts will find NSR = {abc, b, bc, e, bde, be, bce, ac, ace, ae, abce, abe}, and eventually will return MAX(NSR) = {abce, bde}.


function MaximalNonSuperReducts(reduced decision table DT’);
  NSR = {};
  forall objects x in DT’ do
    forall objects y following x in DT’ do
      if x.∂AT ≠ y.∂AT then
        /* objects x and y should be distinguishable as they belong to different generalized decision classes; */
        /* the set {a∈AT| x.a = y.a} is not a g-super-reduct since it does not distinguish between x and y */
        insert {a∈AT| x.a = y.a}, if non-empty, into NSR;
  return MAX(NSR);                      // note: MAX(NSR) contains all maximal non-g-super-reducts
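
A direct Python rendering of this pairwise comparison is sketched below under our own naming conventions; it expects the list of DT' rows produced by the gen_dec_representation sketch above (each row carrying a 'gen_dec' field) and returns MAX(NSR).

def maximal_non_super_reducts(dt_reduced, AT):
    """Collect, for every pair of DT' rows with different generalized decisions,
    the attributes on which they agree (a non-g-super-reduct); keep the maximal sets."""
    nsr = set()
    for i, x in enumerate(dt_reduced):
        for y in dt_reduced[i + 1:]:
            if x['gen_dec'] != y['gen_dec']:
                agree = frozenset(a for a in AT if x[a] == y[a])
                if agree:
                    nsr.add(agree)
    return {s for s in nsr if not any(s < t for t in nsr)}    # MAX(NSR)

# For the DT' of Table 2 this returns MNSR = {{a,b,c,e}, {b,d,e}}, as stated in the text.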

3.4 Generating Candidates for Reducts

The RADGen procedure has 4 arguments. Two of them, the k attribute non-g-super-reducts Ak and the maximal non-g-super-reducts MNSR, are input arguments. The two remaining ones, Rk+1 and Ak+1, are output arguments. After the completion of the procedure, Rk+1 contains the k+1 attribute g-reducts and Ak+1 contains the k+1 attribute non-g-super-reducts. During the first phase of the procedure, new k+1 attribute candidates are created by merging k attribute non-g-super-reducts in Ak that differ only in their final attributes. The characteristic feature of such a method of creating candidates is that no candidate that may be a solution (here: a g-reduct) is missed and that no candidate is generated twice (please see the detailed description of the Apriori algorithm [1] for justification). In the second phase, it is checked for each newly obtained k+1 attribute candidate whether all its proper k attribute subsets are contained in the non-g-super-reducts Ak. If yes, then the candidate remains in Rk+1; otherwise it is pruned as a proper superset of some g-super-reduct. Finally, all candidates in Rk+1 that are subsets of the maximal non-g-super-reducts MNSR are found to be non-g-super-reducts too, and thus are moved to Ak+1.

procedure RADGen(var Rk+1, var Ak+1, in Ak, in MNSR);
  forall B, C ∈ Ak do                                        /* Merging */
    if B[1] = C[1] ∧ ... ∧ B[k-1] = C[k-1] ∧ B[k] < C[k] then begin
      A = B[1]•B[2]•...•B[k]•C[k];
      add A to Rk+1;
    endif;
  forall A∈Rk+1 do                                           /* Pruning */
    forall k attribute sets B ⊂ A do
      if B ∉ Ak then delete A from Rk+1;                     // A is a proper superset of g-super-reduct B
  forall B∈MNSR do move subsets of B from Rk+1 to Ak+1;      /* Removing subsets of non-g-super-reducts */
  return;
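
The merge-and-prune step can be mirrored in Python as follows. This is our own sketch: attribute sets are kept as sorted tuples, MNSR as a collection of sets, and the function returns the pair (Rk+1, Ak+1).

def rad_gen(Ak, MNSR):
    """Apriori-style generation of (k+1)-attribute candidates from the k-attribute
    non-g-super-reducts Ak (sorted tuples), pruned with the help of MNSR."""
    Ak = sorted(Ak)
    candidates = set()
    for i, B in enumerate(Ak):                              # merging: equal (k-1)-prefix, last attribute differs
        for C in Ak[i + 1:]:
            if B[:-1] == C[:-1] and B[-1] < C[-1]:
                candidates.add(B + (C[-1],))
    Ak_set = set(Ak)                                        # pruning: every k-subset must be a non-g-super-reduct
    candidates = {A for A in candidates
                  if all(A[:j] + A[j + 1:] in Ak_set for j in range(len(A)))}
    Rk1 = {A for A in candidates if not any(set(A) <= M for M in MNSR)}
    Ak1 = candidates - Rk1                                  # subsets of MNSR sets are non-g-super-reducts
    return Rk1, Ak1

# Example (first RAD iteration for Table 1): from A1 = all singletons and
# MNSR = {{a,b,c,e}, {b,d,e}}, rad_gen returns R2 = {(a,d), (c,d)} and A2 = the remaining pairs.
A1 = [(a,) for a in ('a', 'b', 'c', 'd', 'e')]
MNSR = [{'a', 'b', 'c', 'e'}, {'b', 'd', 'e'}]
R2, A2 = rad_gen(A1, MNSR)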

3.5 Optimizing Steps in RAD

In the main algorithm, we offer one optimization that may speed up checking which candidates are not g-reducts (optimizing step 3) and two optimizations for reducing the number of useless iterations (optimizing steps 1 and 2). In step 3, k attribute sets are deleted from MNSR since they are useless for identifying non-g-super-reducts among l attribute candidates, where l > k.


Optimizing step 1 is based on the following observation: the condition |MNSR_{|AT|-1}| = |AT| implies that none of the sets AT\{a}, a∈AT, is a g-super-reduct. Hence, AT is the only g-reduct of DT and the algorithm can be stopped.

Optimizing step 2 can be applied when |MNSR| = 1. This condition implies that all sets in Ak, which are not g-super-reducts, have exactly one and the same superset, say B, in the maximal non-g-super-reducts MNSR. If one continued the creation of k+1 attribute candidates Rk+1 by merging sets in Ak, the new k+1 attribute candidates would still be subsets of B. Hence, they would be moved by the RADGen procedure from Rk+1 to Ak+1. As a result, one would obtain Rk+1 = {} and |MNSR| = 1. Such a scenario would repeat for longer and longer candidates until Al = {B}, l > k. Then RADGen would produce an empty Rl+1 and an empty Al+1; that is, the condition that stops the RAD algorithm would hold. In conclusion, the condition |MNSR| = 1 implies that no more g-reducts will be discovered, so the algorithm can be stopped.

3.6 Illustration of RAD

Let us now illustrate the discovery of the g-reducts of DT from Table 1. We assume that the maximal non-g-super-reducts MNSR are already found and are equal to {{abce}, {bde}}. Table 4 shows how the candidates for g-reducts change in each iteration.

Table 4. Rk and Ak after verification w.r.t. MNSR in subsequent iterations of RAD.

k   Ak (each X in Ak has a superset in MNSR)              Rk (each X in Rk has no superset in MNSR)
1   {a}, {b}, {c}, {d}, {e}                               -
2   {ab}, {ac}, {ae}, {bc}, {bd}, {be}, {ce}, {de}        {ad}, {cd}
3   {abc}, {abe}, {ace}, {bce}, {bde}                     -
4   {abce}                                                -

4 Core-Oriented Discovery of Generalized Reducts

4.1 Main Algorithm

In this section, we offer the CoreRAD algorithm, which finds not only the g-reducts, but also their core. The layout of CoreRAD resembles that of RAD. CoreRAD, however, differs from RAD in that it first checks if the set of all maximal non-g-super-reducts MNSR is empty. If so, then each single conditional attribute is a g-reduct, so CoreRAD returns {{a}| a∈AT} as the set of all g-reducts and ∩a∈AT {a} = ∅ as the g-core (by Proposition 3). Otherwise, CoreRAD determines the g-core, according to its definition, from all sets of cardinality |AT|-1 in MNSR. All sets in MNSR that are not supersets of the g-core are deleted, since the only candidates considered in CoreRAD will be the g-core and its supersets. If the reduced MNSR is an empty set, then the g-core has no superset in MNSR and thus it is the only g-reduct. Otherwise, the g-core is not a g-reduct, and the new candidates R|core|+1 are created by merging the g-core with the remaining attributes in AT. Clearly, the new candidates


which have supersets in the maximal non-g-super-reducts MNSR are not g-reducts either, and hence are moved from R|core|+1 to A|core|+1. From now on, CoreRAD proceeds in the same way as RAD.

Algorithm. CoreRAD;
  DT’ = GenDecRepresentation-of-DT(DT);
  MNSR = MaximalNonSuperReducts(DT’);
  if MNSR = {} then return (∅, {{a}| a∈AT});                 // each conditional attribute is a g-reduct
  core = ∅;
  forall A∈MNSR_{|AT|-1} do begin {a} = AT\A; core = core ∪ {a} endfor;
  if |MNSR_{|AT|-1}| = |AT| then return (AT, AT);            // or: if core = AT then – optional optimizing step 1
  MNSR = {B ∈ MNSR| B ⊇ core};                               // g-reducts are supersets of the g-core
  if MNSR = {} then return (core, {core});                   // the g-core is a g-reduct as there is no superset of it in MNSR
  MNSR = MNSR \ MNSR_{|core|};                               // or equivalently MNSR = MNSR \ {core};
  /* initialize candidates for reducts as the g-core’s supersets */
  startLevel = |core| + 1; RstartLevel = {}; AstartLevel = {};
  forall a∈AT \ core do begin A = core ∪ {a}; RstartLevel = RstartLevel ∪ {A} endfor;
  forall B ∈ MNSR do move subsets of B from RstartLevel to AstartLevel;
  for (k = startLevel; Ak ≠ {}; k++) do begin
    if |MNSR| = 1 then return (core, ∪k Rk);                 // optional optimizing step 2
    MNSR = MNSR \ MNSRk;                                     // MNSRk is not useful any more – optional optimizing step 3
    /* create k+1 attribute g-reducts Rk+1 and non-g-super-reducts Ak+1 from Ak and MNSR */
    RADGen(Rk+1, Ak+1, Ak, MNSR);
  endfor;
  return (core, ∪k Rk);

4.2 Illustration of CoreRAD

We will now illustrate the core-oriented discovery of the g-reducts of DT from Table 1. We assume that MNSR has already been calculated and equals {{abce}, {bde}}. Hence, core = AT \ {abce} = {d}. Now, we leave only the supersets of the core in MNSR; thus MNSR becomes equal to {{bde}}. Table 5 shows how the candidates for g-reducts change in each iteration (here only one iteration was sufficient).

Table 5. Rk and Ak after verification w.r.t. MNSR in subsequent iterations of CoreRAD.

k   Ak (each X in Ak has a superset in MNSR)   Rk (each X in Rk has no superset in MNSR)
2   {bd}, {de}                                 {ad}, {cd}

5 Discovering Certain Reducts

RAD and CoreRAD can easily be adapted for the discovery of certain reducts. It suffices to modify the condition tested in the MaximalNonSuperReducts function as follows:

  if (x.∂AT ≠ y.∂AT) and (|x.∂AT| = 1 or |y.∂AT| = 1) then

This modification guarantees that all objects from the lower approximations of the decision classes, i.e. the objects with singleton generalized decisions, will be compared with all objects not belonging to the lower approximations of their decision classes.


6 Approximate Attribute Reduction

6.1 Approximate Reducts for a Decision Table

The discovery of reducts may be very time consuming. Therefore, one may forgo calculating exact reducts and search more efficiently for approximate reducts, which, however, should be supersets of exact reducts and subsets of AT. In this section, we introduce the notion of such approximate reducts based on the observation that for any object x in O: ∩a∈A ∂a(x) ⊇ ∂A(x) (by Corollary 1). Let ∅≠A⊆AT.

AT is defined to be an approximate generalized decision reduct (ag-reduct) of DT iff ∃x∈O, ∩a∈AT ∂a(x) ⊃ ∂AT(x). Otherwise, A is an approximate generalized decision reduct (ag-reduct) of DT iff A is a minimal set such that
∀x∈O, ∩a∈A ∂a(x) = ∂AT(x)                                   (ag)

Corollary 5 specifies the properties of certain decision reducts in terms of generalized decisions. By analogy to this corollary, we define an approximate certain decision reduct as follows: AT is defined to be an approximate certain decision reduct (ac-reduct) of DT iff ∃x∈O, ∂AT(x) = {d(x)} ⇒ ∩a∈AT ∂a(x) ⊃ ∂AT(x). Otherwise, A is defined to be an approximate certain reduct (ac-reduct) of DT iff A is a minimal attribute set such that
∀x∈O, ∂AT(x) = {d(x)} ⇒ ∩a∈A ∂a(x) = ∂AT(x)                 (ac)

In the sequel, a superset of a t-reduct, t ∈ {ac, ag}, will be called a t-super-reduct.

Corollary 6. AT is a superset of all ac-reducts and ag-reducts for any DT.

Proposition 4. Let x∈O and A ⊆ AT. If ∩a∈A ∂a(x) = ∂AT(x), then:
a) ∩a∈A ∂a(x) = ∂A(x) = ∂AT(x).
b) ∀B ⊆ AT, B⊃A ⇒ ∩a∈B ∂a(x) = ∂B(x) = ∂AT(x).
Proof: Let ∩a∈A ∂a(x) = ∂AT(x) (*).
Ad a) By Corollaries 1-2, ∩a∈A ∂a(x) ⊇ ∂A(x) ⊇ ∂AT(x). Taking into account (*), ∩a∈A ∂a(x) = ∂A(x) = ∂AT(x).
Ad b) Let B ⊆ AT, B⊃A. By Corollary 2, ∂A(x) ⊇ ∂B(x) ⊇ ∂AT(x). Taking into account Proposition 4a, ∩a∈A ∂a(x) = ∂A(x) = ∂B(x) = ∂AT(x) (**). Clearly, ∩a∈A ∂a(x) ⊇ ∩a∈B ∂a(x) ⊇ ∩a∈AT ∂a(x). Taking into account (**), ∂B(x) = ∂AT(x) = ∩a∈A ∂a(x) ⊇ ∩a∈B ∂a(x) ⊇ ∩a∈AT ∂a(x) ⊇ ∂AT(x). Hence, ∩a∈B ∂a(x) = ∂B(x) = ∂AT(x). !

Corollary 7. a) An ag-reduct is a g-super-reduct. b) An ag-reduct is a p-super-reduct. c) An ac-reduct is a c-super-reduct.


Proof: Ad a) Let A be an ag-reduct. If ∃x∈O, ∩a∈AT ∂a(x) ⊃ ∂AT(x), then A = AT, which by Corollary 3 is a g-super-reduct. Otherwise, by the definition of an ag-reduct and Proposition 4a, ∀x∈O, ∩a∈A ∂a(x) = ∂A(x) = ∂AT(x). Thus A satisfies property (g). Hence, by Corollary 4c, A is a g-super-reduct.
Ad b) Follows from Theorem 1 and Corollary 7a.
Ad c) Analogous to the proof of Corollary 7a; follows from the definition of an ac-reduct, Corollary 3, Corollary 5, Corollary 4a and Proposition 4a.

Proposition 5. Let A ⊆ AT.
a) If A satisfies property (ag), then all of its supersets satisfy property (ag).
b) If A does not satisfy property (ag), then none of its subsets satisfies property (ag).
c) If A satisfies property (ac), then all of its supersets satisfy property (ac).
d) If A does not satisfy property (ac), then none of its subsets satisfies property (ac).
Proof: Ad a, c) Follow from Proposition 4. Ad b, d) Follow immediately from Proposition 5a, c, respectively. !

Corollary 8. a) ag-super-reducts are all and the only attribute sets that satisfy property (ag). b) ac-super-reducts are all and the only attribute sets that satisfy property (ac). Proof: By definition of respective approximate reducts and Proposition 5.

!

6.2 Approximate Core

An approximate core is defined in the usual way; that is, t-core = {a∈AT| AT\{a} is not a t-super-reduct}, where t ∈ {ac, ag}.

Proposition 6. Let R be the set of all approximate reducts of the same type t, t ∈ {ac, ag}. Then t-core = ∩R.
Proof: Follows from Corollary 8 and Proposition 5, and is analogous to the proof of Proposition 3. !
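
To make the approximate notion concrete, checking property (ag) for a candidate A only requires the elementary generalized decisions and the AT-generalized decisions, as in the following Python sketch (our own naming; it reuses the DT/AT encoding of the earlier sketches):

def satisfies_ag(A):
    """Property (ag): for every object x, the intersection of the elementary
    a-generalized decisions over a in A equals the AT-generalized decision of x."""
    def gd(B, x):                                           # B-generalized decision of x
        return frozenset(DT[y]['f'] for y in DT
                         if all(DT[y][b] == DT[x][b] for b in B))
    for x in DT:
        bound = frozenset(DT[y]['f'] for y in DT)           # start from the set of all decisions
        for a in A:
            bound &= gd((a,), x)                            # intersect the elementary decisions
        if bound != gd(AT, x):
            return False
    return True

# For Table 1, satisfies_ag(('a', 'd')) and satisfies_ag(('c', 'd')) both hold;
# {a,d} and {c,d} are indeed the ag-reducts found by GRA/CoreGRA below.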

7 Discovering Approximate Generalized Reducts

7.1 Main Algorithm

The GRA (GeneralizedReductsApriori) algorithm, which we recently introduced in [13], finds all ag-reducts of the decision table DT. Unlike RAD, GRA does not need to store all maximal non-g-super-reducts MNSR. On the other hand, GRA requires the candidates for reducts to be evaluated against the decision table. In our algorithm, the validation of a candidate solution against the decision table DT consists in checking if the candidate satisfies property (ag); that is, if the intersection of the elementary generalized decisions of the attributes in the candidate set determines the same generalized decision value as the set of all conditional attributes AT does, for each object in DT. We will use the following properties in the process of searching for reducts in order to prune the search space efficiently:


• Proper supersets of ag-reducts are not ag-reducts, and hence such sets shall not be evaluated against the decision table.
• Subsets of attribute sets that are not ag-super-reducts are not ag-reducts, and thus such sets shall not be evaluated against the decision table.
• An attribute set all of whose proper subsets are not ag-super-reducts may or may not be an ag-reduct, and hence should be evaluated against the decision table.

Since our algorithm is intended to work with very large decision tables, we propose to restrict the number of decision table objects against which a candidate should be evaluated. Our proposal is based on the following observations:
• If an attribute set A satisfies property (ag) for the first n objects in DT (or in the reduced DT’) and does not satisfy it for object n+1, then A is certainly not an ag-reduct and evaluating it against the remaining objects in DT (DT’) is useless.
• If an attribute set A satisfies property (ag) for the first n objects in DT (or DT’), then property (ag) will be satisfied for these objects by all supersets of A. Hence, the evaluation of the first n objects can be skipped for a candidate that is a proper superset of A.

The GRA algorithm starts with building the reduced version DT’ of the decision table DT (see Section 3.2 for the description of the GenDecRepresentation-of-DT function). DT’ stores only the AT-generalized decisions instead of the original decisions. Next, the a-generalized decision value for each atomic descriptor (a,v) occurring in DT (or in DT’) is calculated as the set of the decisions (or the union of the AT-generalized decisions) of the objects supporting (a,v) in DT (or in DT’). Each pair (atomic descriptor, its generalized decision) is stored in Γ. Now GRA creates the initial candidates for ag-reducts. The initial candidates are singleton sets and are stored in R1. The set of 1 attribute non-ag-super-reducts A1, as well as the known maximal non-ag-super-reducts NSR, are initialized to empty sets. The main loop starts. In each k-th iteration, k ≥ 1, the k attribute candidates Rk are evaluated during one pass over DT’ (see Section 7.2 for the description of the EvaluateCandidates procedure). As a side effect of evaluating Rk, all k attribute non-ag-super-reducts Ak are found and the known maximal non-ag-super-reducts NSR are updated. The case NSR_{|AT|} = {AT} indicates that AT does not satisfy property (ag) for some object. Hence, by definition AT is the only ag-reduct and the algorithm stops. Otherwise, k+1 attribute candidates Rk+1 are created from the k attribute sets in Ak, which turned out not to be ag-super-reducts (see Section 7.4 for the description of the GRAGen procedure). The information on the non-ag-super-reducts NSR is used to prune the candidates in Rk+1. Namely, each candidate in Rk+1 that has a superset in NSR is known a priori not to be an ag-reduct. Therefore it is moved from Rk+1 to Ak+1. The algorithm stops when Rk = Ak = {}. Optimizing steps 1-2 in GRA are analogous to steps 1-2 in RAD, which were discussed in Section 3.5.


Modified or additional notation for GRA
• Rk – candidate k attribute sets (potential ag-reducts);
• Ak – k attribute sets that are not ag-super-reducts;
• A.id – the identifier of the object against which attribute set A should be evaluated;
• NSR – quasi-maximal attribute sets found not to be ag-super-reducts;
• NSRk – k attribute sets in NSR;
• x.identifier – the identifier of object x;
• Γ – the set containing the generalized decision values determined by the atomic descriptors supported by objects in DT (DT’); that is: Γ = ∪a∈AT, v∈Va {((a,v), ∂(a,v))}.

Algorithm. GRA;
  DT’ = GenDecRepresentation-of-DT(DT);
  /* calculate the a-generalized decision value for each atomic descriptor (a,v) supported by DT (or DT’) */
  for each conditional attribute a∈AT do
    for each domain value v∈Va do begin compute ∂(a,v); store ((a,v), ∂(a,v)) in Γ endfor;
  /* initialize 1 attribute candidates */
  R1 = {{a}| a∈AT}; A1 = {}; NSR = {};                       // conditional attributes are candidates for ag-reducts
  for each A in R1 do A.id = 1;                              // the evaluation of candidate A should start from object 1 in DT’
  /* search reducts */
  for (k = 1; Ak ≠ {} ∨ Rk ≠ {}; k++) do begin
    if Rk ≠ {} then begin
      /* find and move non-ag-reducts from Rk to Ak and determine maximal non-ag-super-reducts NSR */
      EvaluateCandidates(Rk, Ak, Γ, NSR);
      if |NSR_{|AT|}| = 1 then return AT;                    // or equivalently: if NSR_{|AT|} = {AT} then
      if |NSR_{|AT|-1}| = |AT| then return AT;               // optional optimizing step 1
    elseif |NSR| = 1 then return ∪k Rk;                      // optional optimizing step 2
    endif;
    /* create k+1 attribute candidates Rk+1 and non-ag-super-reducts Ak+1 from Ak and NSR */
    GRAGen(Rk+1, Ak+1, Ak, NSR);
  endfor;
  return ∪k Rk;

A characteristic feature of our algorithm, which is shared by all Apriori-like algorithms (see [1] for the Apriori algorithm), is that the evaluation of candidates requires no more than n scans of the data set (decision table), where n is the length of a longest candidate (here: n ≤ |AT|). GRA, however, differs from Apriori in several ways. First of all, our candidates are sets of attributes instead of descriptors. Next, we evaluate candidates whether they satisfy property (ag), while the evaluation in Apriori consists in calculating the number of objects satisfying candidate descriptors. Additionally, our algorithm uses dynamically obtained information on non-ag-super-reducts to restrict the search space as quickly as possible. Another distinct optimizing feature of our algorithm is that the majority of candidates is evaluated against a fraction of the decision table instead of the entire decision table (see Section 7.2). Namely, having found that a candidate A does not satisfy the required property (ag) for some object x, the next objects are not considered for evaluating this candidate at all. In addition, the evaluation of candidates that are proper supersets of the invalidated candidate A starts from object x. These two optimizations may speed up the evaluation process considerably.
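
The evaluation-restriction idea (start each candidate at its A.id position and stop at the first failing object) can be sketched as follows in Python; the row format (a list of DT' dictionaries with a 'gen_dec' field) and the Γ mapping from (attribute, value) pairs to decision sets are our own assumptions, not the paper's data structures.

def evaluate_candidate(rows, gamma, A, start_id=0):
    """Scan the DT' rows from position start_id and test property (ag) for the
    candidate A; stop at the first failing object.  Returns (satisfied, next_id),
    where next_id is the position from which supersets of A may resume."""
    for i in range(start_id, len(rows)):
        row = rows[i]
        bound = None
        for a in A:                                  # intersection of elementary decisions, taken from Γ
            dav = gamma[(a, row[a])]
            bound = dav if bound is None else bound & dav
        if bound != row['gen_dec']:
            return False, i                          # A is not an ag-reduct; supersets start from row i
    return True, len(rows)

# Usage sketch: gamma = {('c', 1): frozenset({1, 2}), ...} built as in Table 3;
# for Table 1, evaluate_candidate(rows, gamma, ('a', 'd')) returns (True, len(rows)).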


7.2 Evaluating Candidates for Approximate Reducts The EvaluateCandidates procedure takes 4 arguments: k attribute candidates for agreducts Rk, k attribute sets that are known not to be ag-super-reducts Ak, the generalized decisions determined by atomic descriptors Γ, and known maximal non- agapproximate-super-reducts NSR. For each object read from DT’, the candidates in Rk that should be evaluated against this object are identified. These are candidates A such that A.id equals the identifier of the object. Let x be the object under consideration and A be a candidate such that A.id = x.identifier. The upper bound ∂ on ∂A(x) is calculated from the generalized decisions determined by the atomic descriptors stored in Γ. If ∂ equals x.∂AT, then A satisfies property (ag) for object x and still has a chance to be an ag-reduct. Hence, A.id is incremented to indicate that A should be evaluated against the next object after x in DT’ too. Otherwise, if ∂ ≠ x.∂AT, then A is certainly not an ag-reduct and thus is moved from candidates Rk to non-ag-super-reducts Ak. Additionally, the MaximalNonAGSuperReduct procedure (see Section 7.3) is called to determine a quasi maximal superset (nsr) of A that does not satisfy property (ag) for object x either. If nsr obtains the maximal possible length (i.e. |nsr| = |AT|), AT is returned as the maximal set the approximate generalized decision of which differs from the real AT-generalized decision, and the procedure stops. Otherwise, the found non-ag-super-reduct is stored in NSR’. Since the evaluation of candidates against objects may result in moving all candidates from Rk to Ak, scanning of DT’ is stopped as soon as all candidates turned out false ones. The last step of the EvaluateCandidates procedure consists in updating maximal non-ag-super-reducts NSR with NSR’. Please note that k attribute sets are not stored in the final NSR since they are useless for identifying non-super-reducts among l attribute candidates, where l > k. procedure EvaluateCandidates(var Rk, var Ak, in Γ, var NSR); /* assert: Γ = ∪a∈AT, v∈Va {{(a,v), ∂(a,v))} */ NSR’ = {}; for each object x in DT’ do begin for each candidate A in Rk do if A.id = x.identifier then begin ∂ = ∩a∈A ∂(a, x.a); // note: each ((a, x.a), ∂(a, x.a)) ∈ Γ if ∂ ≠ x.∂AT then begin // or equivalently: if | ∂ | ≠ | x.∂AT | then move A from Rk to Ak; nsr = MaximalNonAGSuperReduct(A, x, ∂ , Γ); // find a quasi maximal non-ag-super-reduct if nsr = AT then begin NSR = {AT}; return endif; // or equivalently: if |nsr| = |AT| then add nsr to NSR’; else A.id = x.identifier + 1 // A should be evaluated against the next object endif endif; if Rk = {} then break; endfor; NSR = MAX((NSR’ \ NSRk’) ∪ (NSR \ NSRk)); return;


7.3 Calculating Quasi Maximal Non-approximate Generalized Super-reducts The MaximalNonAGSuperReduct function is called whenever a candidate, say A, does not satisfy property (ag) for some object x. This function returns a quasi maximal superset of A that does not satisfy property (ag) for x. Clearly, there may be many such supersets of A; however the function creates and evaluates supersets of A in a specific order. Namely, nsr variable, which initially equals A, is extended in each iteration with one attribute (assigned to variable a) that is next after the one recently added to nsr. Please note that the first attribute in AT is assumed to be next to the last attribute in AT. The creation of supersets stops when an evaluated attribute nsr∪{a} satisfies property (ag) for object x. Then, MaximalNonAGSuperReduct returns nsr as a known maximal superset of A, which is not an ag-super-reduct. function MaximalNonAGSuperReduct(in A, in x, in ∂, in Γ); /* assert: ∂ ≠ x.∂AT */ nsr = A; ∂nsr = ∂; previous_a = last attribute in A; for (i=1; i 1 then if NSR ≠ {} then // ag-core is not an ag-reduct as there is its superset in NSR NSR = NSR \ NSR|core| // or equivalently NSR = NSR \ {core}; else begin R|core| = {core}; A|core| = {}; EvaluateCandidates(R|core|, A|core|, Γ, NSR); if |NSR|AT|| = 1 then return (AT, AT); endif; if R|core| = {core} then return(core, R|core|) // or equivalently if |R|core|| = 1 then else begin startLevel = |core| + 1; RstartLevel = {}; AstartLevel = {}; forall {a}∈A1 such that a∉core do begin A = core ∪ {a}; A.id = max(core.id, {a}.id); // candidates should contain ag-core RstartLevel = RstartLevel ∪ {A} endfor; forall B ∈ NSR do move subsets of B from RstartLevel to AstartLevel; endif endif; for (k = startLevel; Ak ≠ {} ∨ Rk ≠ {}; k++) do begin /* ag-reducts’ regular search */ if Rk ≠ {} then begin /* find and move non-ag-reducts from Rk to Ak and determine maximal non-ag-super-reducts NSR */ EvaluateCandidates(Rk, Ak, Γ, NSR); if |NSR|AT|| = 1 then return (AT, AT) endif; elseif |NSR| = 1 then return (core; ∪k Rk); // optional optimizing step endif; GRAGen(Rk+1, Ak+1, Ak, NSR); // create (k+1)-candidates from k attribute non-ag-reducts endfor; return (core; ∪k Rk);


The CoreGRA algorithm, we propose, finds not only ag-reducts, but also their core. The layout of CoreGRA reminds that of GRA. CoreGRA, however, differs from GRA, in that it evaluates 1 attribute candidates in special way that provides sufficient information to determine the ag-core, and next creates subsequent candidates only as supersets of the found ag-core. CoreGRA calls the EvaluateCandidate1 procedure (see Section 8.2) in order to evaluate 1 attribute candidates. Unlike the EvaluateCandidate procedure, EvaluateCandidate1 guarantees that all maximal |AT|-1 nonag-super-reducts will be determined and returned in NSR. Using this information, the ag-core will then be calculated according to its definition. If the ag-core is an empty set, then 2 attribute and longer candidates are created and evaluated as in GRA. Otherwise, all sets in NSR that are not supersets of the ag-core are deleted, since the only candidates considered in CoreGRA will be the ag-core and its supersets. If the ag-core contains only one attribute, it is not evaluated because singleton attributes were already evaluated. The ag-core is not evaluated also in the case, when NSR, already restricted to non-ag-super-reducts being the core’s supersets, is not empty. In this case, the ag-core is also a non-ag-super-reduct as a subset of some non-ag-super-reduct in NSR. Otherwise, the ag-core is evaluated. Provided the ag-core is found an ag-reduct, it is returned as the only ag-reduct. If the ag-core is not a reduct, the new candidates R|core|+1 are created by merging the core with the remaining attributes in AT. Clearly, the new candidates which have supersets in maximal known non-ag-super-reducts NSR, are not ag-reducts either, and hence are moved from R|core|+1 to A|core|+1. From now on, CoreGRA is performed in the same way as GRA. It is expected that CoreGRA should perform better than GRA, when the ag-core consists of a sufficient number of attributes. Then fewer iterations should be performed and probably fewer candidates will be evaluated. Nevertheless, when the number of attributes in the ag-core is small, CoreGRA may be less effective than GRA because of the more exhaustive evaluation of 1 attribute candidates (their nsr fields are likely to be evaluated against the entire decision table in CoreGRA). 8.2 Evaluating Singleton Candidates Below we describe the EvaluateCandidates1 procedure, which is primarily intended to be applied only to 1 attribute candidates in CoreGRA, although it can be applied for evaluating candidates of any length. It is assumed that an additional field nsr is associated with each k attribute candidate A in Rk. The EvaluateCandidates1 procedure differs from EvaluateCandidates in that after discovering that a candidate A is not an ag-reduct, it is not removed from Rk immediately. Nevertheless, EvaluateCandidates1 stops advancing A.id field as soon as the first object invalidating A is found (like EvaluateCandidates does). In such a case, instead of evaluating A, its nsr field is extended and evaluated against the remaining objects in the decision table as long as nsr obtains the maximal possible length (i.e. |nsr| = |AT|) or the end of the decision table is reached. In the former case, AT is returned as the maximal set the approximate generalized decision of which differs from


the real AT-generalized decision, and the procedure stops. In the latter case, the remaining candidates A in Rk that turned out not ag-reducts (i.e. such that A.id ≠ |DT|+1), are moved to Ak and NSR’ is updated with their nsr fields. procedure EvaluateCandidates1(var Rk, var Ak, in Γ, var NSR); NSR’ = {}; for each object x in DT do begin for each candidate A in Rk do begin ∂ = ∩a∈A.nsr ∂(a, x.a); // note: each ((a,x.a), ∂(a,x.a)) ∈ Γ if ∂ ≠ x.∂AT then begin // or equivalently: if |∂t| = |x.∂AT| then A.nsr = MaximalNonAGSuperReduct(A.nsr, x, ∂, Γ); // find a maximal non-ag-super-reduct if A.nsr = AT then begin NSR = {AT}; return endif // or equivalently: if |A.nsr| = |AT| then elseif A.id = x.identifier then A.id = x.identifier + 1 // evaluate A’s supersets against the next object endif; endfor; if Rk = {} then break; endfor; for each candidate A in Rk do // A is not an ag-reduct if A.id ≠ |DT|+1 then move A from Rk to Ak; add A.nsr to NSR’ endif; NSR = MAX(NSR’ \ NSRk’); // NSR = MAX((NSR’ \ NSRk’) ∪ (NSR \ NSRk)) for k > 1 return;

8.3 Illustration of CoreGRA

In this section, we illustrate how CoreGRA searches for ag-reducts in the decision table DT from Table 1. Table 7 shows how the candidates change in each iteration, before and after validation against the reduced decision table DT’ from Table 2. After the 1 attribute candidates were evaluated by EvaluateCandidates1, NSR became equal to {{abce}, {de}}. Thus, {abce} was the only set in NSR whose length was equal to |AT|-1. Hence, the ag-core was determined as AT\{abce} = {d}. Since the new candidates were to be supersets of the ag-core, all sets from NSR that were not supersets of the ag-core were pruned and NSR became equal to {{de}}. The ag-core {d} is not an ag-reduct, as it was not present in the set of the positively evaluated candidates R1 (here: R1 = ∅). New candidates were created by merging the ag-core with the remaining attributes in AT, resulting in the following four 2 attribute candidates: {ad}, {bd}, {cd}, {de}. One of them ({de}) was known a priori not to be an ag-reduct as a subset of the known non-ag-super-reduct {de} in NSR. From now on, CoreGRA proceeded as GRA. The execution of the CoreGRA algorithm resulted in the enumeration of 9 attribute sets instead of 21 (see Section 7.5).

Table 7. Rk, Ak, and NSR in subsequent iterations of CoreGRA.

k = 1
  Rk before validation: {a}[id:1], {b}[id:1], {c}[id:1], {d}[id:1], {e}[id:1]
  Ak before validation: -
  Rk after validation: -
  Ak after validation: {a}[id:2], {b}[id:1], {c}[id:3], {d}[id:2], {e}[id:2]
  NSR’: {abc}, {bc}, {c}, {de}, {abce}
  NSR: {abce}, {de}

k = 2
  Rk before validation: {ad}[id:2], {bd}[id:2], {cd}[id:3]
  Ak before validation: {de}[id:2]
  Rk after validation: {ad}[id:8], {cd}[id:8]
  Ak after validation: {bd}[id:2], {de}[id:2]
  NSR’: {bde}
  NSR: {bde}


9 Discovering Approximate Certain Reducts

Approximate certain reducts of DT are defined by means of the generalized decisions of only those objects in DT that have singleton AT-generalized decisions. This observation suggests that the GRA and CoreGRA algorithms will calculate the ac-reducts of DT correctly, provided the candidate attribute sets are evaluated only against the objects in DT with singleton AT-generalized decisions. This can be achieved in two ways:
a) either the initialization of candidates in the GRA procedure is preceded by an additional operation that removes from DT (or DT’) all objects that have non-singleton AT-generalized decisions and renumbers the remaining objects;
b) or the evaluation of candidates is modified so as to safely ignore objects with non-singleton AT-generalized decisions (see [13]).
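
Option a) amounts to a simple filter over the reduced table; a minimal Python sketch under the row format assumed earlier (our own naming) is:

def restrict_to_certain(rows):
    """Keep only DT' rows with a singleton AT-generalized decision; the remaining
    rows are implicitly renumbered by their position in the returned list."""
    return [row for row in rows if len(row['gen_dec']) == 1]

# After this filter, GRA/CoreGRA evaluate candidates exactly as before, but
# property (ag) is then only required for objects from the lower approximations,
# which is the (ac) condition.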

10 Conclusion

In this article, we have offered two new algorithms, RAD and CoreRAD, for discovering all exact generalized (and thereby also possible) and certain reducts from decision tables. In addition, CoreRAD determines the core. Both algorithms require the calculation of all maximal attribute sets MNSR that are not super-reducts. An Apriori-like method of determining reducts based on MNSR was proposed. Our method of determining MNSR is orthogonal to the methods that determine a discernibility matrix (DM), which stores information on the sets of attributes each of which discerns at least one pair of objects that should be discerned, and that return the family of all such minimal sets (MDM). The reducts are then found from MDM by applying Boolean reasoning. The calculation of MNSR (as well as of MDM) requires comparing each pair of objects in the decision table and finding the maximal (minimal) attribute sets among those that result from the objects' comparison. This operation is very costly when the number of objects in a decision table is large. In order to overcome this problem, one may use a reduced table (AT, {∂AT}), which stores one object instead of the many original objects that are indiscernible on AT and ∂AT. Nevertheless, when the number of objects in the reduced table is still large or the number of sets in MNSR (MDM) is large, the calculation of reducts may be infeasible. Our preliminary experiments indicate that the determination of MNSR is the bottleneck of the proposed RAD-like algorithms in such cases. By contrast, the proposed Apriori-like method of determining reducts based on MNSR is very efficient. When the determination of MNSR is infeasible, we advocate searching for approximate reducts. In the article, we have defined such approximate reducts based on the properties of the generalized decision function. We have shown that for each A-generalized decision one may derive its upper bound (the A-approximate generalized decision) from the elementary a-generalized decisions, a∈A. Whereas exact generalized (certain) reducts preserve the AT-generalized decision for all objects (for objects with singleton generalized decisions), each approximate generalized (certain) reduct A guarantees that the A-approximate generalized decision is equal to the


AT-generalized decision for all objects (for objects with singleton generalized decisions). An exception to the rule is the case when there is an object for which the approximate AT-generalized decision differs from the actual AT-generalized decision. Then the entire set of conditional attributes AT is defined to be a reduct. We have proved that approximate generalized and certain reducts are supersets of exact reducts of the respective types. In addition, approximate generalized reducts are supersets of exact possible reducts. We have presented the GRA and CoreGRA algorithms for discovering approximate generalized (and thereby also possible) reducts and certain reducts from very large decision tables. The experiments we have carried out and reported in [13] show that the GRA-like algorithms are scalable with respect to the number of objects in a decision table and that CoreGRA tends to outperform GRA with an increasing number of conditional attributes. For a few conditional attributes, however, GRA may find reducts faster. Nevertheless, the experiments need to be continued to fully recognize the performance characteristics of the particular GRA-like algorithms. Finally, we note that all the proposed algorithms are capable of discovering all the discussed types of reducts from incomplete decision tables as well. The only difference consists in a slightly different determination of the generalized decision value, namely ∂A(x) = {d(y)| y∈SA(x)}, where SA(x) = {y∈O | ∀a∈A, (a(x) = a(y)) ∨ (a(x) is NULL) ∨ (a(y) is NULL)} (see e.g. [12]). In the future, we intend to develop scalable algorithms for discovering all exact reducts.

References

1. Agrawal, R., Mannila, H., Srikant, R., Toivonen, H., Verkamo, A.I.: Fast Discovery of Association Rules. In: Advances in KDD. AAAI, Menlo Park, California (1996) 307-328
2. Bazan, J., Skowron, A., Synak, P.: Dynamic Reducts as a Tool for Extracting Laws from Decision Tables. In: Proc. of ISMIS '94, Charlotte, USA. LNAI, Vol. 869, Springer-Verlag (1994) 346-355
3. Bazan, J., Nguyen, H.S., Nguyen, S.H., Synak, P., Wróblewski, J.: Rough Set Algorithms in Classification Problem. In: L. Polkowski, S. Tsumoto and T.Y. Lin (eds.): Rough Set Methods and Applications. Physica-Verlag, Heidelberg, New York (2000) 49-88
4. Jelonek, J., Krawiec, K., Stefanowski, J.: Comparative Study of Feature Subset Selection Techniques for Machine Learning Tasks. In: Proc. of IIS '98, Malbork, Poland (1998) 68-77
5. John, H.G., Kohavi, R., Pfleger, K.: Irrelevant Features and the Subset Selection Problem. In: Machine Learning: Proc. of the Eleventh International Conference. Morgan Kaufmann Publishers, San Francisco, CA (1994) 121-129
6. Kohavi, R., Frasca, B.: Useful Feature Subsets and Rough Set Reducts. In: Proc. of the Third International Workshop on Rough Sets and Soft Computing, San Jose, CA (1994)
7. Kryszkiewicz, M.: The Algorithms of Knowledge Reduction in Information Systems. Ph.D. Thesis, Warsaw University of Technology, Institute of Computer Science (1994)
8. Kryszkiewicz, M., Rybinski, H.: Finding Reducts in Composed Information Systems. Fundamenta Informaticae, Vol. 27, No. 2-3 (1996) 183-196
9. Kryszkiewicz, M.: Strong Rules in Large Databases. In: Proc. of IPMU '98, Paris, France, Vol. 2 (1998) 1520-1527
10. Kryszkiewicz, M., Rybinski, H.: Knowledge Discovery from Large Databases using Rough Sets. In: Proc. of EUFIT '98, Aachen, Germany, Vol. 1 (1998) 85-89


11. Kryszkiewicz, M.: Comparative Study of Alternative Types of Knowledge Reduction in Inconsistent Systems. International Journal of Intelligent Systems, Wiley, Vol. 16, No. 1 (2001) 105-120
12. Kryszkiewicz, M.: Rough Set Approach to Rules Generation from Incomplete Information Systems. In: The Encyclopedia of Computer Science and Technology. Marcel Dekker, Inc., New York, Vol. 44 (2001) 319-346
13. Kryszkiewicz, M., Cichoń, K.: Scalable Methods of Discovering Rough Sets Reducts. ICS Research Report 28/2003, Warsaw University of Technology (2003)
14. Lin, T.Y.: Rough Set Theory in Very Large Databases. In: Proc. of CESA IMACS '96, Lille, France, Vol. 2 (1996) 936-941
15. Modrzejewski, M.: Feature Selection using Rough Sets Theory. In: Proc. of the European Conference on Machine Learning (1993) 213-226
16. Nguyen, S.H., Skowron, A., Synak, P., Wróblewski, J.: Knowledge Discovery in Databases: Rough Set Approach. In: Proc. of IFSA '97, Prague, Vol. II (1997) 204-209
17. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Vol. 9 (1991)
18. Pawlak, Z., Skowron, A.: A Rough Set Approach to Decision Rules Generation. ICS Research Report 23/93, Warsaw University of Technology (1993)
19. Romanski, S.: Operations on Families of Sets for Exhaustive Search, Given a Monotonic Boolean Function. In: Proc. of Intl' Conf. on Data and Knowledge Bases, Israel (1988)
20. Skowron, A., Rauszer, C.: The Discernibility Matrices and Functions in Information Systems. In: Intelligent Decision Support: Handbook of Applications and Advances of Rough Sets Theory. Kluwer Academic Publishers (1992) 331-362
21. Skowron, A., Swiniarski, R.W.: Information Granulation and Pattern Recognition. In: S.K. Pal, L. Polkowski, A. Skowron (eds.): Rough-Neural Computing. Techniques for Computing with Words. Springer-Verlag, Heidelberg (2004)
22. Slezak, D.: Approximate Reducts in Decision Tables. In: Proc. of IPMU '96, Granada, Spain, Vol. 3 (1996) 1159-1164
23. Slezak, D.: Searching for Frequential Reducts in Decision Tables with Uncertain Objects. In: Proc. of RSCTC '98, Warsaw. Springer-Verlag, Berlin (1998) 52-59
24. Slowiński, R. (ed.): Intelligent Decision Support, Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers, Vol. 11 (1992)
25. Stepaniuk, J.: Approximation Spaces, Reducts and Representatives. In: Skowron, A., Polkowski, L. (eds.): Rough Sets in Data Mining and Knowledge Discovery. Springer-Verlag, Berlin (1998)
26. Susmaga, R.: Experiments in Incremental Computation of Reducts. In: Skowron, A., Polkowski, L. (eds.): Rough Sets in Data Mining and Knowledge Discovery. Springer-Verlag, Berlin (1998)
27. Susmaga, R.: Parallel Computation of Reducts. In: Proc. of RSCTC '98, Warsaw. Springer-Verlag, Berlin (1998) 450-457
28. Susmaga, R.: Computation of Shortest Reducts. In: Foundations of Computing and Decision Sciences, Poznan, Poland, Vol. 2, No. 23 (1998)
29. Susmaga, R.: Effective Tests for Inclusion Minimality in Reduct Generation. In: Foundations of Computing and Decision Sciences, Vol. 4, No. 23 (1998) 219-240
30. Tannhäuser, M.: Efficient Reduct Computation. M.Sc. Thesis, Institute of Mathematics, Warsaw University, Warsaw (1994)
31. Wroblewski, J.: Finding Minimal Reducts Using Genetic Algorithms. In: Proc. of the 2nd Annual Joint Conference on Information Sciences, Wrightsville Beach, NC (1995) 186-189

Variable Precision Fuzzy Rough Sets

Alicja Mieszkowicz-Rolka and Leszek Rolka
Department of Avionics and Control, Rzeszów University of Technology
ul. W. Pola 2, 35-959 Rzeszów, Poland
{alicjamr,leszekr}@prz.edu.pl

Abstract. In this paper the variable precision fuzzy rough sets (VPFRS) concept will be considered. The notions of the fuzzy inclusion set and the α-inclusion error based on residual implicators will be introduced. The level of misclassification will be expressed by means of α-cuts of the fuzzy inclusion set. Next, the use of the mean fuzzy rough approximations will be postulated and discussed. The concept of VPFRS will be defined using the extended version of the variable precision rough sets (VPRS) model, which utilises a general allowance for levels of misclassification expressed by two parameters: the lower (l) and upper (u) limits. Remarks concerning the variable precision rough fuzzy sets (VPRFS) idea will be given. An example will illustrate the proposed VPFRS model.

1 Introduction

The rough sets theory [15] was originally based on the notions of classical set theory. Dubois and Prade [3] and Nakamura [14] were among the first to show that the basic idea of a rough set, given in the form of lower and upper approximations, can be extended in order to approximate fuzzy sets defined in terms of membership functions. This makes it possible to analyse information systems with fuzzy attributes. The idea of fuzzy rough sets was pursued and investigated in many papers, e.g. [1, 2, 4-6, 9, 16, 17]. An important extension of the rough sets theory, helpful in the analysis of inconsistent decision tables, is the variable precision rough sets (VPRS) model. It seems natural and valuable to combine the concepts of VPRS and fuzzy rough sets. The motivation for doing this is supported by the fact that the extended fuzzy rough approximations defined by Dubois and Prade have the same disadvantages as their counterparts in the original (crisp) rough set theory [12]. Even a relatively small inclusion error of a similarity class results in the rejection (membership value equal to zero) of that class from the lower approximation. A small inclusion degree can also lead to an excessive increase of the upper approximation. These properties can be important especially in the case of large universes, e.g. generated from dynamic processes. In order to overcome the described drawbacks, we generalised the idea of Ziarko for expressing the inclusion error of one fuzzy set in another. If we want to determine the lower and upper approximations using real data sets, then we must take into account the quality of the data, which is usually

Variable Precision Fuzzy Rough Sets

145

inﬂuenced by noise and errors. The VPRS concept admits some level of misclassiﬁcation, but we go one step further and propose additionally an alternative way of evaluating the variable precision fuzzy rough approximations. We suggest determination of the mean membership degree. This is contrary to using only limit values of membership functions and disregarding the statistical properties of analysed large information system. We start by recalling the basic notions of VPRS.

2 Variable Precision Rough Sets Model

The concept of VPRS has proven to be particularly useful in the analysis of inconsistent decision tables obtained from dynamic control processes [10]. We have utilised it in order to identify the decision model of a military pilot [11]. The idea of VPRS is based on a changed relation of set inclusion, given in (1) and (2) [18] and defined for any nonempty crisp subsets A and B of the universe X. We say that the set A is included in the set B with an inclusion error β:

A ⊆β B ⇐⇒ e(A, B) ≤ β ,   (1)
e(A, B) = 1 − card(A ∩ B)/card(A) .   (2)

The quantity e(A, B) is called the inclusion error of A in B. The value of β should be limited: 0 ≤ β < 0.5. Katzberg and Ziarko later proposed [7] an extended version of VPRS with asymmetric bounds l and u on the required inclusion degree, used instead of the admissible inclusion error β and satisfying the inequality 0 ≤ l < u ≤ 1. The u-lower and the l-upper approximation of a crisp set A by an equivalence relation R can then be expressed as

RuA = {x ∈ X : e(R(x), A) ≤ 1 − u} ,   (5)
R̄lA = {x ∈ X : e(R(x), A) < 1 − l} ,   (6)

where R(x) denotes the equivalence class of x.

3 Fuzzy Rough Sets

Dubois and Prade [3] defined the approximation of a fuzzy set F on the universe X by a family Φ = {F1, F2, . . . , Fn} of fuzzy sets. The lower and upper approximations of F are fuzzy sets on Φ with the membership functions

µRF(Fi) = inf x∈X µFi(x) → µF(x) ,   (7)
µR̄F(Fi) = sup x∈X µFi(x) ∗ µF(x) ,   (8)

where → denotes a fuzzy implication and ∗ a t-norm. The family Φ should satisfy the property of covering

∀x ∈ X,  max i=1,...,n µFi(x) > 0   (9)

and the property of disjointness [3]

∀i, j, i ≠ j,  sup x∈X min(µFi(x), µFj(x)) < 1 .   (10)
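As a minimal illustration (not part of the paper; the function names are mine), the crisp inclusion error (2) and the role of the bounds l and u can be sketched as follows for sets represented as Python sets:

```python
# Illustrative sketch: the crisp VPRS inclusion error e(A, B) = 1 - card(A ∩ B)/card(A)
# and the conditions e <= 1-u (u-lower) and e < 1-l (l-upper) used for classifying a class.

def inclusion_error(A, B):
    """Inclusion error of a nonempty crisp set A in a crisp set B (eq. 2)."""
    A, B = set(A), set(B)
    return 1.0 - len(A & B) / len(A)

def classify_class(Xi, A, l, u):
    """Check whether an equivalence class Xi enters the u-lower and/or l-upper
    approximation of A, following 0 <= l < u <= 1."""
    e = inclusion_error(Xi, A)
    in_lower = e <= 1.0 - u      # inclusion degree at least u
    in_upper = e < 1.0 - l       # inclusion degree greater than l
    return e, in_lower, in_upper

# Example: a class of 10 objects, 8 of which belong to A (inclusion error 0.2).
Xi = set(range(10))
A = set(range(8))
print(classify_class(Xi, A, l=0.2, u=0.8))   # error 0.2 -> in lower and in upper
```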

We should use a mapping ω from the domain Φ into the domain of the universe X, if we want to express in X the membership functions of the lower and upper approximation given by (7) and (8). Assuming that Φ is equal to the quotient set of X by a fuzzy similarity relation R, we can determine the membership functions of the fuzzy extension of the lower and upper approximation of a fuzzy set F by R [3]:

∀x ∈ X,  µω(RF)(x) = inf y∈X µR(x, y) → µF(y) ,   (11)
∀x ∈ X,  µω(R̄F)(x) = sup y∈X µR(x, y) ∗ µF(y) .   (12)

In such a case the fuzzy extension ω(A) of a fuzzy set A on X/R can be expressed as follows:

µω(A)(x) = µA(Fi),  if µFi(x) = 1 .   (13)

We use later in this paper a fuzzy compatibility relation which is symmetric and reflexive. One can easily show that in this case the definitions (7) and (8) are equivalent to the definitions (11) and (12). Indeed, by using a symmetric and reflexive fuzzy relation we obtain a family of fuzzy compatibility classes. Any elements x and y of the universe X for which µR(x, y) = 1 belong, with a membership degree equal to 1, to the same fuzzy compatibility class. In order to determine the membership degrees (11) and (12) for some x, we can simply take the membership degrees (7) and (8) obtained for the compatibility class to which x belongs with a membership degree equal to 1.


Another general approach was given by Greco, Matarazzo and Słowiński, who proposed [5] approximation of fuzzy sets by means of fuzzy relations which are only reflexive. An important issue is the choice of the implicators used in the definitions (7), (8), (11) and (12). Apart from applying S-implications, Dubois and Prade also considered the R-implication variant of fuzzy rough sets [3]. A comprehensive study concerning a general concept of fuzzy rough sets was done more recently by Radzikowska and Kerre in [17]. They analysed the properties of fuzzy rough approximations based on three classes of fuzzy implicators: S-implicators, R-implicators and QL-implicators. As we state below, R-implicators constitute a good base for constructing the variable precision fuzzy rough sets model.

4 Variable Precision Fuzzy Rough Sets Model

An extension of the fuzzy rough sets concept in the sense of Ziarko requires a method of determining the lower and upper approximation in which only a significant part of the approximating set is taken into account. In other words, we should evaluate the membership degree of the approximating set in the lower or upper approximation by regarding only those of its elements which are included to a sufficiently high degree in the approximated set. This way we allow some level of misclassification. Before we try to express the inclusion error of one fuzzy set in another, we first recall the classical definition of fuzzy set inclusion [8]. For any fuzzy sets A and B defined on the universe X, we say that the set A is included in the set B:

A ⊆ B ⇐⇒ ∀x ∈ X,  µA(x) ≤ µB(x) .   (14)

If the condition (14) is satisfied, then we should say that the degree of inclusion of A in B is equal to 1 (the inclusion error is equal to 0). In our approach we want to evaluate the inclusion degree of a fuzzy set A in a fuzzy set B with respect to particular elements of A. We obtain in this way a new fuzzy set, which we call the fuzzy inclusion set of A in B and denote by AB. To this end we apply an implication operator → as follows:

µAB(x) = µA(x) → µB(x)  if µA(x) > 0,
µAB(x) = 0               otherwise.   (15)

Only the proper elements of A (the support of A) are considered as relevant. The definition (15) is based on an implication operator → in order to maintain compatibility between the approach of Dubois and Prade and the VPFRS model in limit cases; this will be stated later in this section by Propositions 2 and 3. Examples of inclusion sets are given in Section 7: Table 2 contains the membership functions of the approximating set X1, the approximated set F1, and the inclusion sets X1F1 evaluated using the implication operators of Gaines and Łukasiewicz (discussed below).


We should consider the choice of a suitable implication operator →. Basing on (14), we put a requirement on the degree of inclusion of A in B with respect to any element x belonging to the support of the set A (µA(x) > 0). We assume that the degree of inclusion with respect to x should always be equal to 1 if the inequality µA(x) ≤ µB(x) is satisfied for that x:

µA(x) → µB(x) = 1,  if µA(x) ≤ µB(x) .   (16)

In general, not all implicators satisfy this requirement. For example, for x = 0.5 < y = 0.6, the Kleene-Dienes S-implicator x → y = max(1 − x, y) gives the value 0.6, and the early Zadeh QL-implicator x → y = max(1 − x, min(x, y)) gives the value 0.5. Let us consider the definition of R-implicators (residual implicators), which are based on a t-norm ∗:

x → y = sup{λ ∈ [0, 1] : x ∗ λ ≤ y} .   (17)

One can easily prove that any R-implicator satisfies the requirement (16). In the last section we demonstrate an example in which two popular R-implicators are used:

- the Łukasiewicz implicator: x → y = min(1, 1 − x + y),
- the Gaines implicator: x → y = 1 if x ≤ y, and y/x otherwise.

Radzikowska and Kerre proved that fuzzy rough approximations based on the Łukasiewicz implicator satisfy all properties considered in [17]. This is because the Łukasiewicz implicator is both an S-implicator and a residual implicator. In order to extend the idea of Ziarko to fuzzy sets, we should express the error that would be made if the weakest elements of the approximating set, in the sense of their membership in the fuzzy inclusion set AB, were discarded. To this end we apply the well-known notion of α-cut [8], by which for any given fuzzy set A a crisp set Aα is obtained as follows:

Aα = {x ∈ X : µA(x) ≥ α}   (18)

where α ∈ [0, 1]. We introduce the measure of α-inclusion error eα(A, B) of any nonempty fuzzy set A in a fuzzy set B:

eα(A, B) = 1 − power(A ∩ (AB)α) / power(A) .   (19)

Power denotes here the cardinality of a fuzzy set. For any finite fuzzy set F defined on X

power(F) = Σ i=1,...,n µF(xi) .   (20)
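As a small sketch (not from the paper; the representation of fuzzy sets as Python dictionaries is an assumption of the example), the two R-implicators, the fuzzy inclusion set (15), the α-cut (18), the fuzzy cardinality (20) and the α-inclusion error (19) can be implemented as follows:

```python
def lukasiewicz(x, y):
    return min(1.0, 1.0 - x + y)

def gaines(x, y):
    return 1.0 if x <= y else y / x

def inclusion_set(A, B, imp):
    """Fuzzy inclusion set of A in B (eq. 15); A, B map elements to membership degrees."""
    return {x: (imp(mA, B.get(x, 0.0)) if mA > 0 else 0.0) for x, mA in A.items()}

def alpha_cut(A, alpha):
    """Crisp alpha-cut of a fuzzy set (eq. 18)."""
    return {x for x, m in A.items() if m >= alpha}

def power(A):
    """Cardinality of a finite fuzzy set: sum of membership degrees (eq. 20)."""
    return sum(A.values())

def alpha_inclusion_error(A, B, alpha, imp):
    """alpha-inclusion error of A in B (eq. 19)."""
    cut = alpha_cut(inclusion_set(A, B, imp), alpha)
    restricted = sum(m for x, m in A.items() if x in cut)   # power(A ∩ (A^B)_alpha)
    return 1.0 - restricted / power(A)

A = {'a': 1.0, 'b': 0.2, 'c': 1.0}
B = {'a': 1.0, 'b': 0.2, 'c': 0.0}
print(alpha_inclusion_error(A, B, 0.8, lukasiewicz))   # only 'c' is discarded by the cut
```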

Now, we show that the measure of inclusion error (2) given by Ziarko is a special case of the proposed measure (19).


Proposition 1. For any nonempty crisp sets A and B, and for α ∈ (0, 1], the α-inclusion error eα(A, B) is equivalent to the inclusion error e(A, B).

Proof. First, we show that for any crisp sets A and B the inclusion set AB is equal to the intersection A ∩ B. For any crisp set C

µC(x) = 1 for x ∈ C,  and µC(x) = 0 for x ∉ C .   (21)

Every implicator → is a function satisfying: 1 → 0 = 0, 1 → 1 = 1, 0 → 1 = 1, 0 → 0 = 1. Thus, applying the definition (15), we get

µAB(x) = µA∩B(x) = 1 if x ∈ A and x ∈ B,  and 0 otherwise.   (22)

Taking into account (20) and (21), we get for any finite crisp set C

power(C) = card(C) .   (23)

Furthermore, applying (18) for any α ∈ (0, 1], we obtain

Cα = C .   (24)

By the equations (22), (23) and (24), we finally have

power(A ∩ (AB)α) / power(A) = power(A ∩ (A ∩ B)α) / power(A) = card(A ∩ B) / card(A) .

Hence, we obtain eα(A, B) = e(A, B) for any α ∈ (0, 1].

The use of α-cuts gives us the possibility to change gradually the level at which some of the members of the approximating set are discarded. The evaluation of the membership degree of the whole approximating set in the lower and upper approximation will then be done by respecting only the remaining elements of the approximating set. The level α can adopt any value from the infinite set (0, 1]. In practice, only a finite subset of (0, 1] will be applied. In our illustrative examples we used values of α obtained with a resolution equal to 0.01.

Let us now consider a partition of the universe X which is generated by a fuzzy compatibility relation R. We denote by Xi some compatibility class on X, where i = 1 . . . n. Any given fuzzy set F defined on the universe X can be approximated by the obtained compatibility classes. The u-lower approximation of the set F by R is a fuzzy set on X/R with the membership function which we define as follows:

µRuF(Xi) = fiu  if there exists αu = sup{α ∈ (0, 1] : eα(Xi, F) ≤ 1 − u},
µRuF(Xi) = 0    otherwise,   (25)

where

fiu = inf x∈Siu µXi(x) → µF(x) ,
Siu = supp(Xi ∩ (XiF)αu) .

The set Siu contains those elements of the approximating class Xi that are included in F at least to the degree αu, provided that such αu exists. The membership fiu is then determined using the "better" elements from Siu instead of the whole class Xi. The given definition helps to prevent the situation when a few "bad" elements of a large class Xi significantly reduce the lower approximation of the set F. Furthermore, we suggest the use of R-implicators both for the evaluation of eα(Xi, F) and in place of the operator → in (25).

The l-upper approximation of the set F by R can be defined similarly, as a fuzzy set on X/R with the membership function given by:

µR̄lF(Xi) = fil  if there exists αl = sup{α ∈ (0, 1] : ēα(Xi, F) < 1 − l},
µR̄lF(Xi) = 0    otherwise,   (26)

where

fil = sup x∈Sil µXi(x) ∗ µF(x) ,
Sil = supp(Xi ∩ (Xi ∩ F)αl) ,
ēα(Xi, F) = 1 − power(Xi ∩ (Xi ∩ F)α) / power(Xi) .
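A minimal sketch of the u-lower membership (25) for one compatibility class is given below (not part of the paper; fuzzy sets are dictionaries, and only the distinct membership degrees of the inclusion set are tested as candidates for αu, which is sufficient because eα changes value only at those degrees):

```python
def lukasiewicz(x, y):
    return min(1.0, 1.0 - x + y)

def u_lower_membership(Xi, F, u, imp=lukasiewicz):
    """mu_{R_u F}(Xi) of (25): 0.0 if no admissible alpha exists, otherwise the
    infimum of the implication over S_iu = supp(Xi ∩ (Xi^F)_alpha_u)."""
    incl = {x: (imp(m, F.get(x, 0.0)) if m > 0 else 0.0) for x, m in Xi.items()}
    power_Xi = sum(Xi.values())

    def err(alpha):                      # e_alpha(Xi, F), eq. (19)
        kept = sum(Xi[x] for x, v in incl.items() if v >= alpha)
        return 1.0 - kept / power_Xi

    candidates = sorted({v for v in incl.values() if v > 0}, reverse=True)
    alpha_u = next((a for a in candidates if err(a) <= 1.0 - u), None)
    if alpha_u is None:
        return 0.0
    S_iu = [x for x, m in Xi.items() if m > 0 and incl[x] >= alpha_u]
    return min(imp(Xi[x], F.get(x, 0.0)) for x in S_iu)
```

For the class X1 and the set F1 of Section 7 with u = 0.8 and the Łukasiewicz implicator, this sketch gives 0.80, in agreement with the Ł-inf entry of Table 3.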

For the l-upper approximation a similar explanation as for the u-lower approximation can be given. Conversely, we want to prevent the situation when a few "good" elements of a large class Xi significantly increase the upper approximation of F. The inclusion error is now based on the intersection Xi ∩ F (t-norm operator ∗) and denoted by ēα(Xi, F). It can be shown, in the same way as for the inclusion error eα, that ēα(A, B) = e(A, B) for any nonempty crisp sets A and B and α ∈ (0, 1].

Now, we demonstrate that the fuzzy rough sets of Dubois and Prade constitute a special case of the proposed variable precision fuzzy rough sets, if no inclusion error is allowed (u = 1 and l = 0).

Proposition 2. µR1F(Xi) = µRF(Xi) for any fuzzy set F and Xi ∈ X/R.

Proof. For u = 1, it is required that eα1(Xi, F) = 0. This means that no elements of an approximating compatibility class Xi can be discarded.

I. Assume that

µRF(Xi) = inf x∈X µXi(x) → µF(x) = c ∈ (0, 1] .

In that case there exists α1 = c, which is the largest possible value of α for which eα(Xi, F) = 0. This is because the same function µXi(x) → µF(x) is used for determination of the inclusion set XiF. We evaluate fi1 using the set Si1, which is equal to the class Xi since no elements of Xi are discarded. Hence, we have µR1F(Xi) = µRF(Xi) = c.

II. Assume now that

µRF(Xi) = inf x∈X µXi(x) → µF(x) = 0 .

Then there does not exist α ∈ (0, 1] for which eα(Xi, F) = 0; any α ∈ (0, 1] would cause discarding some x ∈ Xi. In consequence, we get µR1F(Xi) = µRF(Xi) = 0 according to the definition (25).

Similarly, one can prove the next proposition, which holds for the l-upper fuzzy rough approximation.

Proposition 3. µR̄0F(Xi) = µR̄F(Xi) for any fuzzy set F and Xi ∈ X/R.

The fuzzy rough approximations based on limit values of membership functions are not always suitable for the analysis of real data. This is particularly justified in the case of large universes. The obtained results should correspond to the statistical properties of the analysed information systems. We need an approach that takes into account the overall set inclusion, and does not merely use a single value of the membership function (often determined from noisy data). Therefore, we additionally propose an alternative definition of fuzzy rough approximations, in which the mean value of membership (in the fuzzy inclusion set) over all used elements of the approximating class is utilised.

The mean u-lower approximation of the set F by R is a fuzzy set on X/R with the membership function which we define as follows:

µRuF(Xi) = fiu  if there exists αu = sup{α ∈ (0, 1] : eα(Xi, F) ≤ 1 − u},
µRuF(Xi) = 0    otherwise,   (27)

where

fiu = power(XiF ∩ (XiF)αu) / card((XiF)αu) .

The mean l-upper approximation of the set F by R is a fuzzy set on X/R with the membership function defined by:

µR̄lF(Xi) = fil  if there exists αl = sup{α ∈ (0, 1] : ēα(Xi, F) < 1 − l},
µR̄lF(Xi) = 0    otherwise,   (28)

where

fil = power(XiF ∩ (XiF)αl) / card((XiF)αl) .

The quantities fiu and fil express the mean value of the inclusion degree of Xi in F, determined by using only those elements of Xi which are included in F at least to the degree αu and αl respectively.
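The mean membership (27) can be sketched analogously (again an illustration, not the paper's code; fuzzy sets are dictionaries and candidate α values are the distinct degrees of the inclusion set):

```python
def lukasiewicz(x, y):
    return min(1.0, 1.0 - x + y)

def mean_u_lower_membership(Xi, F, u, imp=lukasiewicz):
    """Mean u-lower membership (27): power(Xi^F ∩ (Xi^F)_alpha_u) / card((Xi^F)_alpha_u)."""
    incl = {x: (imp(m, F.get(x, 0.0)) if m > 0 else 0.0) for x, m in Xi.items()}
    power_Xi = sum(Xi.values())
    for alpha in sorted({v for v in incl.values() if v > 0}, reverse=True):
        kept = sum(Xi[x] for x, v in incl.items() if v >= alpha)
        if 1.0 - kept / power_Xi <= 1.0 - u:          # e_alpha(Xi, F) <= 1 - u
            cut = [x for x, v in incl.items() if v >= alpha]
            return sum(incl[x] for x in cut) / len(cut)
    return 0.0
```

With X1, F1 of Section 7, u = 0.8 and the Łukasiewicz implicator this yields about 0.96, matching the Ł-mean entry of Table 3.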


Observe that we admit only α ∈ (0, 1]. If the admissible inclusion error (1 − u) is equal to 0 and there exists any x with µXi(x) > 0 for which µXi(x) → µF(x) = 0, then the α-inclusion error eα(Xi, F) = 0 only for α = 0. The use of α = 0 would result in the same value of the membership function (27) for the admissible inclusion error equal to 0 and for some value of it greater than 0. Moreover, by avoiding α = 0 we achieve full accordance with the original definitions of Ziarko in the case of crisp sets and a crisp equivalence relation R. In such a case the values of fiu and fil are always equal to 1.

Proposition 4. For any crisp set A and crisp equivalence relation R, the mean variable precision fuzzy rough approximations of A by R are equal to the variable precision rough approximations of A by R.

Proof. The equivalence relation R generates a partition of the universe X into crisp equivalence classes Xi, i = 1 . . . n. By Proposition 1 and its proof: eα(Xi, A) = e(Xi, A), (XiA)αu = XiA, and power(XiA) = card(XiA) for α ∈ (0, 1]. Thus, we get for the mean u-lower approximation of A by R

fiu = power(XiA ∩ (XiA)αu) / card((XiA)αu) = card(XiA) / card(XiA) = 1 ,
µRuA(Xi) = 1  if e(Xi, A) ≤ 1 − u,  and 0 otherwise,   (29)

and for the mean l-upper approximation of A by R

fil = power(XiA ∩ (XiA)αl) / card((XiA)αl) = card(XiA) / card(XiA) = 1 ,
µR̄lA(Xi) = 1  if e(Xi, A) < 1 − l,  and 0 otherwise.   (30)

Taking into account all approximating equivalence classes Xi and applying (13), we obtain from (29) and (30) the VPRS approximations (5) and (6) on the domain X.

5 Variable Precision Rough Fuzzy Sets Model

The idea of rough fuzzy sets was introduced by Dubois and Prade in order to approximate fuzzy concepts by means of equivalence classes Xi, i = 1 . . . n, generated by a crisp equivalence relation R defined on X. The lower and upper approximations of a fuzzy set F by R are fuzzy sets on X/R with membership functions defined as follows [3]:

µRF(Xi) = inf{µF(x) : x ∈ Xi} ,   (31)
µR̄F(Xi) = sup{µF(x) : x ∈ Xi} .   (32)

The pair of sets (RF, R̄F) is called a rough fuzzy set [3].


Proposition 5. For every implication operator →, every t-norm ∗, and every crisp equivalence relation R, fuzzy rough sets are equivalent to rough fuzzy sets.

Proof. Since we use crisp equivalence classes Xi, we have µXi(x) = 1 for all elements x ∈ Xi. Every R-implicator, S-implicator, and QL-implicator is a border implicator [17], which satisfies the condition 1 → x = x for all x ∈ [0, 1]. Every t-norm ∗ satisfies the boundary condition 1 ∗ x = x. Thus, we get

µXi(x) → µF(x) = µF(x) ,
µXi(x) ∗ µF(x) = µF(x) .

Therefore, the definitions (31) and (32) are a special case of (7) and (8).

Basing on Proposition 5, we can easily adapt the variable precision fuzzy rough approximations from the previous section in order to obtain a simpler form of the variable precision rough fuzzy approximations. In [12] we proposed a concept of variable precision rough fuzzy sets in the case of symmetrical bounds (admissible inclusion error β). We defined [12] the β-lower and β-upper approximation of a fuzzy set F respectively as follows:

µRβF(Xi) = inf{µF(x) : x ∈ Si}  if es(Xi, F) ≤ β,  and 0 otherwise,   (33)
µR̄βF(Xi) = sup{µF(x) : x ∈ Si}  if es(Xi, F) < 1 − β,  and 0 otherwise,   (34)

where Si = supp(Xi ∩ F) is the support set of the intersection of Xi and F, and es is the support inclusion error, which can be defined for any nonempty fuzzy sets A and B:

es(A, B) = 1 − card(supp(A ∩ B)) / card(supp(A)) .   (35)

The mean rough fuzzy β-approximations were defined [12] as follows:

µRβF(Xi) = fi  if es(Xi, F) ≤ β,  and 0 otherwise,   (36)
µR̄βF(Xi) = fi  if es(Xi, F) < 1 − β,  and 0 otherwise,   (37)

where

fi = power(Xi ∩ F) / card(supp(Xi ∩ F)) .   (38)

By comparing (33), (34), (36) and (37) with the definitions (25), (26), (27) and (28) respectively, and taking into account Proposition 5, one can easily show that the former definitions constitute a restricted version of the new ones. This can be done by setting u = 1 − β and l = β, and by narrowing the interval (0, 1] of α so that only those elements of the approximating crisp class Xi are eliminated which do not belong to the fuzzy set F at all. This is the worst case (1 → 0), in which the implication produces the value 0 by definition. In the sequel we use only the refined variable precision fuzzy rough approximations given in the current paper.

6 Decision Tables with Fuzzy Attributes

In order to analyse decision tables with fuzzy attributes, we defined in [12] a fuzzy compatibility relation R. Furthermore, we introduced the notion of a fuzzy information system S with the following formal description:

S = ⟨X, Q, V, f⟩   (39)

where:
X – a nonempty set, called the universe,
Q – a finite set of attributes,
V – a set of fuzzy values of attributes, V = ∪ q∈Q Vq, where Vq is the fuzzy domain of the attribute q; each fuzzy (linguistic) value Vq is given by a membership function µVq defined on the original domain Uq of the attribute q,
f – an information function, f : X × Q → V, f(x, q) ∈ Vq, ∀q ∈ Q and ∀x ∈ X.

A compatibility relation R, for comparing any elements x, y ∈ X with fuzzy values of attributes, is defined as follows [12]:

µR(x, y) = min q∈Q sup u∈Uq min(µVq(x)(u), µVq(y)(u))   (40)

where Vq(x), Vq(y) are the fuzzy values of the attribute q for x and y respectively. The relation given by (40) is reflexive and symmetric (a tolerance relation). If the intersection of any two different fuzzy values of each attribute equals the empty fuzzy set, then the relation (40) is additionally transitive (a fuzzy similarity relation). In such a case the decision table can be analysed using the original measures of rough set theory. For crisp attributes the relation (40) is an equivalence relation. Another form of fuzzy decision tables was considered by Bodjanova [1]; in that approach the attributes represent degrees of membership in fuzzy condition and fuzzy decision concepts.

An important measure, often used for evaluating the consistency of decision tables, is the approximation quality, which was originally defined for a given family of crisp sets Y = {Y1, Y2, . . . , Yn} and a crisp indiscernibility relation R:

γR(Y) = card(PosR(Y)) / card(X) ,   (41)
PosR(Y) = ∪ Yi∈Y RYi .   (42)
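A hedged sketch of the compatibility degree (40) for two objects is given below (not part of the paper; it assumes triangular membership functions for the linguistic values and approximates the supremum over the original domain Uq on a discretised grid):

```python
def triangular(a, b, c):
    """Triangular membership function with support (a, c) and peak at b (a < b < c)."""
    def mu(u):
        if u <= a or u >= c:
            return 0.0
        return (u - a) / (b - a) if u <= b else (c - u) / (c - b)
    return mu

def compatibility(x_values, y_values, domains, n_grid=1000):
    """mu_R(x, y) = min over attributes q of sup_u min(mu_{Vq(x)}(u), mu_{Vq(y)}(u))."""
    degrees = []
    for q, (lo, hi) in domains.items():
        mu_x, mu_y = x_values[q], y_values[q]
        sup = max(min(mu_x(lo + k * (hi - lo) / n_grid),
                      mu_y(lo + k * (hi - lo) / n_grid)) for k in range(n_grid + 1))
        degrees.append(sup)
    return min(degrees)

# Two overlapping triangular values of a single attribute intersect at height ~0.3,
# similar to the intersection level assumed for A1 and A2 in Section 7.
A1 = triangular(0.0, 1.0, 2.0)
A2 = triangular(1.4, 2.4, 3.4)
print(compatibility({'c1': A1}, {'c1': A2}, {'c1': (0.0, 3.4)}))   # ~ 0.3
```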

We modified the measure of approximation quality in order to deal with fuzzy sets and fuzzy relations [12]. For a family Φ = {F1, F2, . . . , Fn} of fuzzy sets and a fuzzy compatibility relation R, the approximation quality of Φ by R is defined as follows:

γR(Φ) = power(PosR(Φ)) / card(X) ,   (43)
PosR(Φ) = ∪ Fi∈Φ ω(RFi) .   (44)

The equation (43) is a generalised definition of the approximation quality (the mapping ω is explained in Section 3). If the family Φ and the relation R are crisp, then the generalised approximation quality (43) is equivalent to (41). In the next section we will need the measure (43) for evaluating the quality of approximation of the compatibility classes obtained with respect to the fuzzy decision attributes by the compatibility classes obtained with respect to the fuzzy condition attributes. Because the positive area of classification (44) in the VPFRS model is obtained by allowing some inclusion error (1 − u), we use a measure which is called the u-lower approximation quality.
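A minimal sketch of (43) follows (not from the paper; it assumes the lower approximations are already given as memberships of the classes of X/R and that, following (13), ω assigns to each object the membership of the class to which it belongs with degree 1):

```python
def approximation_quality(lower_approximations, class_of, universe):
    """gamma_R(Phi) = power(Pos_R(Phi)) / card(X), Pos_R(Phi) = union of omega(R F_i)."""
    pos = 0.0
    for x in universe:
        # union of fuzzy sets: take the maximum membership over all F_i
        pos += max(mu[class_of[x]] for mu in lower_approximations)
    return pos / len(universe)

# Objects of Section 7 and the G-inf lower approximations for u = 1 (Tables 3-5):
class_of = {'x1': 'X1', 'x4': 'X1', 'x5': 'X1', 'x8': 'X1', 'x10': 'X1',
            'x2': 'X2', 'x6': 'X2', 'x3': 'X3', 'x7': 'X4', 'x9': 'X4'}
lower = [{'X1': 0.00, 'X2': 0.00, 'X3': 0.00, 'X4': 0.00},   # F1
         {'X1': 0.20, 'X2': 0.80, 'X3': 0.80, 'X4': 0.20},   # F2
         {'X1': 0.00, 'X2': 0.00, 'X3': 0.00, 'X4': 0.00}]   # F3
print(approximation_quality(lower, class_of, list(class_of)))  # 0.38, cf. Table 9 (G-inf, u = 1)
```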

7 Examples

In the following example we apply the proposed concept of variable precision fuzzy rough approximations to the analysis of a decision table with fuzzy attributes (Table 1). We use the compatibility relation (40) for comparing elements of the universe.

Table 1. Decision table with fuzzy attributes

 x     c1  c2  c3  d
 x1    A1  B1  C1  D1
 x2    A2  B2  C2  D2
 x3    A1  B2  C2  D2
 x4    A1  B1  C1  D1
 x5    A1  B1  C1  D3
 x6    A2  B2  C2  D2
 x7    A1  B2  C1  D3
 x8    A1  B1  C1  D1
 x9    A1  B2  C1  D3
 x10   A1  B1  C1  D1

For all attributes typical triangular fuzzy membership functions were chosen. The intersection levels of different linguistic values of the attributes are assumed as follows:

for A1 and A2: 0.3,   for B1 and B2: 0.2,   for C1 and C2: 0.25,
for D1 and D2: 0.2,   for D2 and D3: 0.2,   otherwise: 0.

We obtain a family Φ = {F1, F2, F3} of compatibility classes with respect to the fuzzy decision attribute d:

F1 = {1.00/x1, 0.20/x2, 0.20/x3, 1.00/x4, 0.00/x5, 0.20/x6, 0.00/x7, 1.00/x8, 0.00/x9, 1.00/x10},
F2 = {0.20/x1, 1.00/x2, 1.00/x3, 0.20/x4, 0.20/x5, 1.00/x6, 0.20/x7, 0.20/x8, 0.20/x9, 0.20/x10},
F3 = {0.00/x1, 0.20/x2, 0.20/x3, 0.00/x4, 1.00/x5, 0.20/x6, 1.00/x7, 0.00/x8, 1.00/x9, 0.00/x10},

and the following family Ψ = {X1, X2, X3, X4} of compatibility classes with respect to the fuzzy condition attributes c1, c2, c3:

X1 = {1.00/x1, 0.20/x2, 0.20/x3, 1.00/x4, 1.00/x5, 0.20/x6, 0.20/x7, 1.00/x8, 0.20/x9, 1.00/x10},
X2 = {0.20/x1, 1.00/x2, 0.30/x3, 0.20/x4, 0.20/x5, 1.00/x6, 0.25/x7, 0.20/x8, 0.25/x9, 0.20/x10},
X3 = {0.20/x1, 0.30/x2, 1.00/x3, 0.20/x4, 0.20/x5, 0.30/x6, 0.25/x7, 0.20/x8, 0.25/x9, 0.20/x10},
X4 = {0.20/x1, 0.25/x2, 0.25/x3, 0.20/x4, 0.20/x5, 0.25/x6, 1.00/x7, 0.20/x8, 1.00/x9, 0.20/x10}.

Table 2. Membership functions of X1, F1, X1F1

 x     µX1(x)  µF1(x)  µX1F1(x) (G)  µX1F1(x) (Ł)
 x1    1.00    1.00    1.00          1.00
 x2    0.20    0.20    1.00          1.00
 x3    0.20    0.20    1.00          1.00
 x4    1.00    1.00    1.00          1.00
 x5    1.00    0.00    0.00          0.00
 x6    0.20    0.20    1.00          1.00
 x7    0.20    0.00    0.00          0.80
 x8    1.00    1.00    1.00          1.00
 x9    0.20    0.00    0.00          0.80
 x10   1.00    1.00    1.00          1.00

Table 2 contains the membership functions of the fuzzy inclusion sets X1F1 obtained for the Gaines R-implicator and the Łukasiewicz RS-implicator respectively. We can observe for x7 and x9 a big difference between the values of the membership functions µX1F1(x) obtained for the Gaines and the Łukasiewicz implicator. If µF1(x) = 0, the Gaines implicator (x → y = 1 if x ≤ y and y/x otherwise) always produces 0. The Łukasiewicz implicator (x → y = min(1, 1 − x + y)) is more suitable in that case, because its value decreases linearly with the difference x − y when x > y. It will be easier, at a later stage, to obtain the largest possible α-cut of the fuzzy inclusion set for a given value of the admissible inclusion error if we apply the Łukasiewicz implicator.
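A one-line check of the difference discussed above (illustrative only):

```python
gaines      = lambda x, y: 1.0 if x <= y else y / x
lukasiewicz = lambda x, y: min(1.0, 1.0 - x + y)

print(gaines(0.2, 0.0), lukasiewicz(0.2, 0.0))   # 0.0 vs 0.8, as for x7 and x9 in Table 2
```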


Table 3. u-lower approximation of F1

 Method        u     G-inf  G-mean  Ł-inf  Ł-mean
 µRuF1(X1)     1     0.00   0.00    0.00   0.00
               0.8   0.00   0.00    0.80   0.96
               0.75  1.00   1.00    1.00   1.00
 µRuF1(X2)     1     0.00   0.00    0.20   0.76
               0.8   0.20   0.72    0.20   0.76
               0.75  0.20   0.72    0.20   0.76
 µRuF1(X3)     1     0.00   0.00    0.20   0.83
               0.8   0.00   0.00    0.20   0.83
               0.75  0.20   0.79    0.20   0.83
 µRuF1(X4)     1     0.00   0.00    0.00   0.00
               0.8   0.00   0.00    0.00   0.00
               0.75  0.00   0.00    0.00   0.00

The results for the u-lower approximation of F1 by the family Ψ are given in Table 3. Let us analyse, for example, the case where the upper limit u = 0.80. The admissible inclusion error is equal to 1 − u = 0.20. We see that the membership degrees µRuF1(X1) for the Gaines implicator are equal to 0, whereas for the Łukasiewicz implicator we obtain µRuF1(X1) = 0.80 for the infimum and µRuF1(X1) = 0.96 for the mean u-lower approximation. Only by using a larger value of the admissible inclusion error, 1 − u = 0.25, do we obtain better results for the Gaines implicator: µRuF1(X1) = 1 for both the infimum and the mean u-lower approximation. The results for the u-lower approximation of F2 and F3 are given in Tables 4 and 5. The u-lower approximations of the whole family Φ are given in Tables 6, 7 and 8. The obtained differences between the Gaines and the Łukasiewicz implicator have a significant influence on the approximation quality for the considered fuzzy information system (see Table 9), especially for the infimum u-lower approximation. We obtain smaller differences between the Gaines and Łukasiewicz implicators in the case of the mean u-lower approximation. The results given in Table 9 validate the necessity and usefulness of the introduced VPFRS model. Allowing some level of misclassification leads to a significant increase of the u-approximation quality (an important measure used in the analysis of information systems). The mean-based VPFRS model produces higher values of the u-approximation quality than the limit-based VPFRS model. It must be emphasised here that the strength of the variable precision rough set model can be observed especially for large universes. We had to choose larger values of the admissible inclusion error in the above example, in order to show

Table 4. u-lower approximation of F2

 Method        u     G-inf  G-mean  Ł-inf  Ł-mean
 µRuF2(X1)     1     0.20   0.60    0.20   0.60
               0.75  0.20   0.60    0.20   0.60
 µRuF2(X2)     1     0.80   0.96    0.95   0.99
               0.85  1.00   1.00    1.00   1.00
 µRuF2(X3)     1     0.80   0.96    0.95   0.99
               0.8   1.00   1.00    1.00   1.00
 µRuF2(X4)     1     0.20   0.84    0.20   0.84
               0.75  0.20   0.84    0.20   0.84

Table 5. u-lower approximation of F3

 Method        u     G-inf  G-mean  Ł-inf  Ł-mean
 µRuF3(X1)     1     0.00   0.00    0.00   0.00
               0.75  0.00   0.00    0.00   0.00
 µRuF3(X2)     1     0.00   0.00    0.20   0.75
               0.75  0.20   0.68    0.20   0.75
 µRuF3(X3)     1     0.00   0.00    0.20   0.82
               0.75  0.00   0.00    0.20   0.82
 µRuF3(X4)     1     0.00   0.00    0.80   0.91
               0.75  0.80   0.90    0.95   0.98

Table 6. u-lower approximation of Φ for u = 1

 G-inf:  {0.20/X1, 0.80/X2, 0.80/X3, 0.20/X4}
 G-mean: {0.60/X1, 0.96/X2, 0.96/X3, 0.84/X4}
 Ł-inf:  {0.20/X1, 0.95/X2, 0.95/X3, 0.80/X4}
 Ł-mean: {0.60/X1, 0.99/X2, 0.99/X3, 0.91/X4}

Table 7. u-lower approximation of Φ for u = 0.8

 G-inf:  {0.20/X1, 1.00/X2, 1.00/X3, 0.20/X4}
 G-mean: {0.60/X1, 1.00/X2, 1.00/X3, 0.84/X4}
 Ł-inf:  {0.80/X1, 1.00/X2, 1.00/X3, 0.80/X4}
 Ł-mean: {0.96/X1, 1.00/X2, 1.00/X3, 0.91/X4}

Table 8. u-lower approximation of Φ for u = 0.75

 G-inf:  {1.00/X1, 1.00/X2, 1.00/X3, 0.80/X4}
 G-mean: {1.00/X1, 1.00/X2, 1.00/X3, 0.90/X4}
 Ł-inf:  {1.00/X1, 1.00/X2, 1.00/X3, 0.95/X4}
 Ł-mean: {1.00/X1, 1.00/X2, 1.00/X3, 0.98/X4}

Table 9. u-approximation quality of Φ

 Method   u     G-inf  G-mean  Ł-inf  Ł-mean
 γR(Φ)    1     0.380  0.756   0.553  0.779
          0.8   0.440  0.768   0.860  0.962
          0.75  0.960  0.980   0.990  0.996

the properties of the proposed approach. Nevertheless, the admissible inclusion error of about 0.2 turned out to be reasonable for analysing large universes obtained from dynamic processes [10, 11, 13].

8 Conclusions

In this paper a concept of the variable precision fuzzy rough sets (VPFRS) model was proposed. The VPRS model with asymmetric bounds (l, u) was used. The starting point of the VPFRS idea was the introduction of the notion of the fuzzy inclusion set, which should be based on R-implicators. A generalised notion of the α-inclusion error was defined, expressed by means of α-cuts of the fuzzy inclusion set. The idea of mean fuzzy rough approximations was proposed, which helps to obtain results that better correspond to the statistical properties of analysed large information systems. We suggest using it particularly for small values of the admissible inclusion error. Furthermore, it turns out that application of the Łukasiewicz R-implicator is a good choice for determination of fuzzy rough approximations. The presented generalised approach to VPFRS can be helpful especially in the case of analysing fuzzy information systems obtained from real (dynamic) processes. In future work we will concentrate on axiomatisation and further development of the proposed VPFRS model.

References

1. Bodjanova, S.: Approximation of Fuzzy Concepts in Decision Making. Fuzzy Sets and Systems, Vol. 85 (1997)
2. Chakrabarty, K., Biswas, R., Nanda, S.: Fuzziness in Rough Sets. Fuzzy Sets and Systems, Vol. 110 (2000)
3. Dubois, D., Prade, H.: Putting Rough Sets and Fuzzy Sets Together. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets. Kluwer Academic Publishers, Boston Dordrecht London (1992)
4. Greco, S., Matarazzo, B., Słowiński, R.: The Use of Rough Sets and Fuzzy Sets in MCDM. In: Gal, T., Stewart, T., Hanne, T. (eds.): Advances in Multiple Criteria Decision Making. Kluwer Academic Publishers, Boston Dordrecht London (1999)
5. Greco, S., Matarazzo, B., Słowiński, R.: Rough Set Processing of Vague Information Using Fuzzy Similarity Relations. In: Calude, C.S., Paun, G. (eds.): Finite Versus Infinite – Contributions to an Eternal Dilemma. Springer-Verlag, Berlin Heidelberg New York (2000)
6. Inuiguchi, M., Tanino, T.: New Fuzzy Rough Sets Based on Certainty Qualification. In: Pal, S.K., Polkowski, L., Skowron, A. (eds.): Rough-Neuro-Computing: Techniques for Computing with Words. Springer-Verlag, Berlin Heidelberg New York (2002)
7. Katzberg, J.D., Ziarko, W.: Variable Precision Extension of Rough Sets. Fundamenta Informaticae, Vol. 27 (1996)
8. Klir, J., Folger, T.A.: Fuzzy Sets, Uncertainty, and Information. Prentice Hall, Englewood, New Jersey (1988)
9. Lin, T.Y.: Topological and Fuzzy Rough Sets. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets. Kluwer Academic Publishers, Boston Dordrecht London (1992)
10. Mieszkowicz-Rolka, A., Rolka, L.: Variable Precision Rough Sets in Analysis of Inconsistent Decision Tables. In: Rutkowski, L., Kacprzyk, J. (eds.): Advances in Soft Computing. Physica-Verlag, Heidelberg (2003)
11. Mieszkowicz-Rolka, A., Rolka, L.: Variable Precision Rough Sets. Evaluation of Human Operator's Decision Model. In: Sołdek, J., Drobiazgiewicz, L. (eds.): Artificial Intelligence and Security in Computing Systems. Kluwer Academic Publishers, Boston Dordrecht London (2003)
12. Mieszkowicz-Rolka, A., Rolka, L.: Fuzziness in Information Systems. Electronic Notes in Theoretical Computer Science, Vol. 82, Issue No. 4. http://www.elsevier.nl/locate/entcs/volume82.html
13. Mieszkowicz-Rolka, A., Rolka, L.: Studying System Properties with Rough Sets. Lecture Notes in Computer Science, Vol. 2657. Springer-Verlag, Berlin Heidelberg New York (2003)
14. Nakamura, A.: Application of Fuzzy-Rough Classifications to Logics. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets. Kluwer Academic Publishers, Boston Dordrecht London (1992)
15. Pawlak, Z.: Rough Sets. Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Boston Dordrecht London (1991)
16. Polkowski, L.: Rough Sets: Mathematical Foundations. Physica-Verlag, Heidelberg (2002)
17. Radzikowska, A.M., Kerre, E.E.: A Comparative Study of Fuzzy Rough Sets. Fuzzy Sets and Systems, Vol. 126 (2002)
18. Ziarko, W.: Variable Precision Rough Sets Model. Journal of Computer and System Sciences, Vol. 40 (1993)

Greedy Algorithm of Decision Tree Construction for Real Data Tables

Mikhail Ju. Moshkov

1 Faculty of Computing Mathematics and Cybernetics, Nizhny Novgorod State University,
23 Gagarina Ave., Nizhny Novgorod, 603950, Russia
[email protected]
2 Institute of Computer Science, University of Silesia,
39 Będzińska St., Sosnowiec, 41-200, Poland

Abstract. In the paper a greedy algorithm for minimization of decision tree depth is described, and bounds on the precision of this algorithm are considered. The algorithm is applicable to data tables with both discrete and continuous variables which can have missing values. Under some natural assumptions on the class NP and on the class of considered tables, the algorithm is, apparently, close to the best approximate polynomial algorithms for minimization of decision tree depth. Keywords: data table, decision table, decision tree, depth

1 Introduction

Decision trees are widely used in different applications as algorithms for task solving and as a way of knowledge representation. Problems of decision tree optimization are very complicated. In this paper we consider an approximate algorithm for decision tree depth minimization which can be applied to real data tables with both discrete and continuous variables having missing values. First, we transform a given data table into a decision table, possibly with many-valued decisions (i.e. we pass to the model which is usual for rough set theory [7, 8]). Then we apply to this table a greedy algorithm which is similar to algorithms for decision tables with one-valued decisions [3], but uses a more complicated uncertainty measure. We obtain bounds on the precision of this algorithm and, based on results from [2], show that under some natural assumptions on the class NP and on the class of considered tables, the algorithm is, apparently, close to the best approximate polynomial algorithms for minimization of decision tree depth. Note that [6] contains some similar results without proofs. The results of the paper were obtained partially in the framework of a joint research project of the Intel Nizhny Novgorod Laboratory and Nizhny Novgorod State University.

2 Data Tables and Attributes

A data table D is a rectangular table with t columns which correspond to variables x1, . . . , xt. The rows of D are t-tuples of values of the variables x1, . . . , xt. Values of some variables in some rows can be missing. The table D can contain equal rows. The variables are divided into discrete and continuous. A discrete variable xi takes values from an unordered finite set Ai. A continuous variable xj takes values from the set IR of real numbers. Each row r of the table D is labelled by an element y(r) from a finite set C. One can interpret these elements as values of a new variable y. The problem connected with the table D is to predict the value of y using the variables x1, . . . , xt. To this end we will not use the values of x1, . . . , xt directly. We will use values of some attributes depending on variables from the set {x1, . . . , xt}.

An attribute is a function f depending on variables xi1, . . . , xim ∈ {x1, . . . , xt} and taking values from the set E = {0, 1, ∗}. Let r be a row of D. If the values of all variables xi1, . . . , xim are defined in r, then for this row the value of f(xi1, . . . , xim) belongs to the set {0, 1}. If the value of at least one of the variables xi1, . . . , xim is missing in r, then for this row the value of f(xi1, . . . , xim) is equal to ∗.

Consider some examples of attributes. In the system CART [1] mainly attributes are considered each of which depends on one variable xi. Let xi be a continuous variable, and let a be a real number. Then the considered attribute takes the value 0 if xi < a, the value 1 if xi ≥ a, and the value ∗ if the value of xi is missing. Let xi be a discrete variable which takes values from the set Ai, and let B be a subset of Ai. Then the considered attribute takes the value 0 if xi ∉ B, the value 1 if xi ∈ B, and the value ∗ if the value of xi is missing. It is possible to consider attributes depending on many variables. For example, let ϕ be a polynomial depending on continuous variables xi1, . . . , xim. Then the considered attribute takes the value 0 if ϕ(xi1, . . . , xim) < 0, the value 1 if ϕ(xi1, . . . , xim) ≥ 0, and the value ∗ if the value of at least one of the variables xi1, . . . , xim is missing.

Let F = {f1, . . . , fk} be a set of attributes which will be used for prediction of the value of the variable y. We will say that two rows r1 and r2 are equivalent relative to F if each attribute fi from F takes the same value on r1 and r2. The considered equivalence relation divides the set of rows of the table D into equivalence classes S1, . . . , Sq. Let j ∈ {1, . . . , q}. The rows from the equivalence class Sj are indiscernible from the point of view of the values of attributes from F. So, when predicting the value of y using only attributes from F, we will give the same answer (an element from C) for any row from Sj. Denote by C(Sj) the set of elements d from C such that

|{r : r ∈ Sj, y(r) = d}| = max c∈C |{r : r ∈ Sj, y(r) = c}| .

It is clear that only answers from the set C(Sj) minimize the number of mistakes for rows from the class Sj. For any r ∈ Sj denote C(r) = C(Sj). Now we can formulate exactly the problem Pred(D, F) of prediction of the value of the variable y: for a given row r of the data table D we must find an element from the set C(r) using values of attributes from F.
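A small sketch of these notions follows (not from the paper; the row representation and helper names are assumptions of the example): a CART-style threshold attribute with the value ∗ for missing data, the grouping of rows into equivalence classes with respect to a set of attributes, and the sets C(Sj) of most frequent decisions.

```python
from collections import Counter, defaultdict

def threshold_attribute(i, a):
    """Attribute on variable x_i: 0 if x_i < a, 1 if x_i >= a, '*' if missing."""
    def f(row):
        v = row[i]
        if v is None:
            return '*'
        return 1 if v >= a else 0
    return f

def equivalence_classes(rows, attributes):
    """Group rows by their attribute-value signatures; rows come with decisions y(r)."""
    classes = defaultdict(list)
    for r, y in rows:
        classes[tuple(f(r) for f in attributes)].append(y)
    return classes

def most_frequent_decisions(decisions):
    """C(S_j): decisions attaining the maximal frequency in the class."""
    counts = Counter(decisions)
    best = max(counts.values())
    return {d for d, c in counts.items() if c == best}

rows = [((2.5, None), 'a'), ((2.7, 1.0), 'a'), ((0.5, 3.0), 'b'), ((2.6, None), 'b')]
F = [threshold_attribute(0, 2.0), threshold_attribute(1, 2.0)]
for signature, ys in equivalence_classes(rows, F).items():
    print(signature, most_frequent_decisions(ys))
```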


Note that in [5] another setting of the problem of prediction was considered: for a given row r of the data table D we must find the set {y(r′) : r′ ∈ Sj}, where Sj is the equivalence class containing r.

3 Decision Trees

As algorithms for solving the problem Pred(D, F) we will consider decision trees with attributes from the set F. Such a decision tree is a finite directed tree with a root, in which each terminal node is labelled either by an element from the set C or by nothing, and each non-terminal node is labelled by an attribute from the set F. Three edges start in each non-terminal node; these edges are labelled by 0, 1 and ∗ respectively. The functioning of a decision tree Γ on a row of the data table D is defined in the natural way. We will say that the decision tree Γ solves the problem Pred(D, F) if for any row r of D the computation finishes in a terminal node of Γ which is labelled by an element of the set C(r). The depth of a decision tree is the maximal length of a path from the root to a terminal node of the tree. We denote by h(Γ) the depth of a decision tree Γ. By h(D, F) we denote the minimal depth of a decision tree with attributes from F which solves the problem Pred(D, F).

4 Decision Tables with Many-Valued Decisions

We will assume that the information about the problem Pred(D, F) is represented in the form of a decision table T = T(D, F). The table T has k columns corresponding to the attributes f1, . . . , fk and q rows corresponding to the equivalence classes S1, . . . , Sq. The value fj(ri) is on the intersection of the row Si and the column fj, where ri is an arbitrary row from the equivalence class Si. For i = 1, . . . , q the row Si of the table T is labelled by the subset C(Si) of the set C.

We will consider sub-tables of the table T which can be obtained from T by removal of some rows. Let T′ be a sub-table of T. Denote by Row(T′) the set of rows of the table T′. The table T′ will be called degenerate if Row(T′) = ∅ or ∩ Si∈Row(T′) C(Si) ≠ ∅.

Let i1, . . . , im ∈ {1, . . . , k} and δ1, . . . , δm ∈ E = {0, 1, ∗}. We denote by T(i1, δ1) . . . (im, δm) the sub-table of the table T that consists of the rows each of which has on the intersections with the columns fi1, . . . , fim the elements δ1, . . . , δm respectively.

We define the parameter M(T) of the table T as follows. If T is a degenerate table then M(T) = 0. Let T be a non-degenerate table. Then M(T) is the minimal natural m such that for any (δ1, . . . , δk) ∈ E^k there exist numbers i1, . . . , in ∈ {1, . . . , k} for which T(i1, δi1) . . . (in, δin) is a degenerate table and n ≤ m.

A nonempty subset B of the set Row(T) will be called a boundary set if ∩ Si∈B C(Si) = ∅ and ∩ Si∈B′ C(Si) ≠ ∅ for any nonempty subset B′ of the set B such that B′ ≠ B. We denote by R(T) the number of boundary subsets of the set Row(T). It is clear that R(T) = 0 if and only if T is a degenerate table.
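The degeneracy test and the count R(T) of boundary subsets can be sketched as follows (illustrative; the list-of-pairs representation of the table and the brute-force enumeration are assumptions of the example, feasible only for small tables). Singletons are never boundary, because each C(Si) is nonempty.

```python
from itertools import combinations

def degenerate(table):
    """table: list of (row, C(S_i)) pairs; degenerate if empty or all C(S_i) intersect."""
    if not table:
        return True
    return len(set.intersection(*(set(c) for _, c in table))) > 0

def boundary_subsets(table):
    """Minimal row subsets whose decision sets C(S_i) have an empty intersection."""
    found = []
    for size in range(2, len(table) + 1):
        for subset in combinations(range(len(table)), size):
            if set.intersection(*(set(table[i][1]) for i in subset)):
                continue                       # intersection nonempty -> not boundary
            if any(set(f) <= set(subset) for f in found):
                continue                       # a smaller boundary subset is contained
            found.append(subset)
    return found

T = [((0, 1), {'a', 'b'}), ((1, 0), {'b', 'c'}), ((1, 1), {'a', 'c'}), ((0, 0), {'d'})]
print(degenerate(T))             # False
print(len(boundary_subsets(T)))  # R(T) = 4
```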

5 Algorithm U for Decision Tree Construction

For the decision table T = T(D, F) we construct a decision tree U(T) which solves the problem Pred(D, F). We begin the construction from the tree that consists of one node v which is not labelled. If T has no rows then we finish the construction. Let T have rows and ∩ Si∈Row(T) C(Si) ≠ ∅. Then we mark the node v by an element from the set ∩ Si∈Row(T) C(Si) and finish the construction. Let T have rows and ∩ Si∈Row(T) C(Si) = ∅. For i = 1, . . . , k we compute the value Qi = max{R(T(i, δ)) : δ ∈ E}. We mark the node v by the attribute fi0, where i0 is the minimal i for which Qi has the minimal value. For each δ ∈ E we add to the tree the node v(δ), draw the edge from v to v(δ), and mark this edge by the element δ. For the node v(δ) we perform the same operations as for the node v, but instead of the table T we consider the table T(i0, δ), etc.
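A self-contained sketch of algorithm U is given below (assumptions mine: the table is a list of (attribute-value tuple, C(Si)) pairs with values in {0, 1, ∗}, and R is computed by brute-force enumeration of boundary subsets, which is only feasible for small illustrative tables).

```python
from itertools import combinations

E = (0, 1, '*')

def subtable(T, i, d):
    return [row for row in T if row[0][i] == d]

def common_decisions(T):
    return set.intersection(*(set(c) for _, c in T)) if T else set()

def R(T):
    """Number of minimal row subsets with an empty intersection of decision sets."""
    found = []
    for size in range(2, len(T) + 1):
        for s in combinations(range(len(T)), size):
            if set.intersection(*(set(T[j][1]) for j in s)):
                continue
            if any(set(f) <= set(s) for f in found):
                continue
            found.append(s)
    return len(found)

def build_tree(T, k):
    if not T:
        return None                      # unlabelled terminal node
    common = common_decisions(T)
    if common:
        return sorted(common)[0]         # terminal node labelled by a common decision
    # greedy choice: attribute minimising the worst-case number of boundary subsets
    Q = [max(R(subtable(T, i, d)) for d in E) for i in range(k)]
    i0 = Q.index(min(Q))                 # minimal i among those with minimal Q_i
    return {'attr': i0,
            'branches': {d: build_tree(subtable(T, i0, d), k) for d in E}}

T = [((0, 1, 0), {'a'}), ((1, 1, 0), {'b'}), ((0, 0, 1), {'a', 'b'}), ((1, 0, '*'), {'c'})]
print(build_tree(T, 3))
```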

6 Bounds on Algorithm U Precision

If T is a degenerate table then the decision tree U(T) consists of one node, and the depth of this tree is equal to 0. Consider now the case when T is a non-degenerate table.

Theorem 1. Let the decision table T = T(D, F) be non-degenerate. Then

h(U(T)) ≤ M(T) ln R(T) + 1 .

Later we will show (see Lemma 3) that M(T) ≤ h(D, F). So we have the following

Corollary 1. Let the decision table T = T(D, F) be non-degenerate. Then

h(U(T)) ≤ h(D, F) ln R(T) + 1 .

Let t be a natural number. Denote by Tab(t) the set of decision tables T such that |C(Si)| ≤ t for any row Si ∈ Row(T). Let T ∈ Tab(t). One can show that each boundary subset of the set Row(T) has at most t + 1 rows. Using this fact, it is not difficult to show that the algorithm U has polynomial time complexity on the set Tab(t). Using results from [2] on the precision of approximate polynomial algorithms for the set covering problem, it is possible to prove that if NP ⊄ DTIME(n^O(log log n)) then for any ε, 0 < ε < 1, there is no polynomial algorithm which for a given decision table T = T(D, F) from Tab(t) constructs a decision tree Γ such that Γ solves the problem Pred(D, F) and h(Γ) ≤ (1 − ε) h(D, F) ln R(T). We omit the proof of this statement; the proof of a similar result can be found in [4]. Using Corollary 1, we conclude that if NP ⊄ DTIME(n^O(log log n)) then the algorithm U is, apparently, close to the best (from the point of view of precision) approximate polynomial algorithms for minimization of decision tree depth for decision tables from Tab(t) (at least for small values of t).

7 Proof of Precision Bounds

Lemma 1. Let Γ be a decision tree which solves the problem Pred(D, F), T = T(D, F), and let τ be a path of length n from the root to a terminal node of Γ in which the non-terminal nodes are labelled by attributes fi1, . . . , fin and the edges are labelled by elements δ1, . . . , δn. Then T(i1, δ1) . . . (in, δn) is a degenerate table.

Proof. Assume the contrary: let the table T′ = T(i1, δ1) . . . (in, δn) be non-degenerate. Let the terminal node v of the path τ be labelled by an element c ∈ C. Since T′ is a non-degenerate table, it has a row (equivalence class) Si such that c ∉ C(Si). Evidently, c ∉ C(r) for any row r ∈ Si. It is clear that for any row r ∈ Si the computation in the tree Γ moves along the path τ and finishes in the node v, which is impossible since Γ is a decision tree solving the problem Pred(D, F) and c ∉ C(r). Therefore T(i1, δ1) . . . (in, δn) is a degenerate table.

Lemma 2. Let T = T(D, F) and T1 be a sub-table of T. Then M(T1) ≤ M(T).

Proof. Let i1, . . . , in ∈ {1, . . . , k} and δ1, . . . , δn ∈ E. If T(i1, δ1) . . . (in, δn) is a degenerate table then T1(i1, δ1) . . . (in, δn) is a degenerate table too. From this and from the definition of the parameter M the statement of the lemma follows.

Lemma 3. Let T = T(D, F). Then h(D, F) ≥ M(T).

Proof. Let T be a degenerate table. Then, evidently, M(T) = 0 and h(D, F) = 0. Let T be a non-degenerate table, and let Γ be a decision tree which solves the problem Pred(D, F) and for which h(Γ) = h(D, F). Consider a tuple (δ1, . . . , δk) ∈ E^k which satisfies the following condition: if i1, . . . , in ∈ {1, . . . , k} and T(i1, δi1) . . . (in, δin) is a degenerate table, then n ≥ M(T). The existence of such a tuple follows from the definition of the parameter M(T). Consider a path τ from the root to a terminal node of Γ which satisfies the following conditions. Let the length of τ be equal to m and the non-terminal nodes of τ be labelled by attributes fi1, . . . , fim. Then the edges of τ are labelled by the elements δi1, . . . , δim respectively. From Lemma 1 it follows that T(i1, δi1) . . . (im, δim) is a degenerate table. Therefore m ≥ M(T), and h(Γ) ≥ M(T). Since h(D, F) = h(Γ), we obtain h(D, F) ≥ M(T).

Lemma 4. Let T = T(D, F), let T1 be a sub-table of T, i, i1, . . . , im ∈ {1, . . . , k} and δ, δ1, . . . , δm ∈ E. Then

R(T1) − R(T1(i, δ)) ≥ R(T1(i1, δ1) . . . (im, δm)) − R(T1(i1, δ1) . . . (im, δm)(i, δ)) .

Proof. Let T2 = T1(i1, δ1) . . . (im, δm). We denote by P1 (respectively by P2) the set of boundary sets of rows from T1 (respectively from T2) in each of which at least one row has in the column fi an element which is not equal to δ. One can show that P2 ⊆ P1, |P1| = R(T1) − R(T1(i, δ)) and |P2| = R(T2) − R(T2(i, δ)).


Proof (of Theorem 1). Consider a longest path in the tree U(T) from the root to a terminal node. Let its length be equal to n, its non-terminal nodes be labelled by attributes fl1, . . . , fln, and its edges be labelled by elements δ1, . . . , δn. Consider the tables T1, . . . , Tn+1, where T1 = T and Tp+1 = Tp(lp, δp) for p = 1, . . . , n. Let us prove that for any p ∈ {1, . . . , n} the following inequality holds:

R(Tp+1) ≤ ((M(Tp) − 1)/M(Tp)) R(Tp) .   (1)

From the description of the algorithm U it follows that Tp is a non-degenerate table. Therefore M(Tp) > 0. For i = 1, . . . , k we denote by σi an element from E such that R(Tp(i, σi)) = max{R(Tp(i, σ)) : σ ∈ E}. From the description of the algorithm U it follows that lp is the minimal number from {1, . . . , k} such that

R(Tp(lp, σlp)) = min{R(Tp(i, σi)) : i = 1, . . . , k} .

Consider the tuple (σ1, . . . , σk). From the definition of M(Tp) it follows that there exist numbers i1, . . . , im ∈ {1, . . . , k} for which m ≤ M(Tp) and Tp(i1, σi1) . . . (im, σim) is a degenerate table. Therefore R(Tp(i1, σi1) . . . (im, σim)) = 0. Hence

R(Tp) − [R(Tp) − R(Tp(i1, σi1))] − [R(Tp(i1, σi1)) − R(Tp(i1, σi1)(i2, σi2))] − . . .
− [R(Tp(i1, σi1) . . . (im−1, σim−1)) − R(Tp(i1, σi1) . . . (im, σim))] = R(Tp(i1, σi1) . . . (im, σim)) = 0 .

Using Lemma 4, we conclude that for j = 1, . . . , m the inequality

R(Tp(i1, σi1) . . . (ij−1, σij−1)) − R(Tp(i1, σi1) . . . (ij, σij)) ≤ R(Tp) − R(Tp(ij, σij))

holds. Therefore R(Tp) − Σ j=1,...,m (R(Tp) − R(Tp(ij, σij))) ≤ 0 and

Σ j=1,...,m R(Tp(ij, σij)) ≤ (m − 1) R(Tp) .

Let s ∈ {1, . . . , m} and R(Tp(is, σis)) = min{R(Tp(ij, σij)) : j = 1, . . . , m}. Then m R(Tp(is, σis)) ≤ (m − 1) R(Tp) and R(Tp(is, σis)) ≤ ((m − 1)/m) R(Tp). Taking into account that R(Tp(lp, σlp)) ≤ R(Tp(is, σis)) and m ≤ M(Tp), we obtain

R(Tp(lp, σlp)) ≤ ((M(Tp) − 1)/M(Tp)) R(Tp) .   (2)

From the inequality R(Tp(lp, δp)) ≤ R(Tp(lp, σlp)) and from (2) it follows that the inequality (1) holds. From the inequality (2) in the case p = 1 and from the description of the algorithm U it follows that if M(T) = 1 then h(U(T)) = 1, and the statement of the theorem holds. Let M(T) ≥ 2. From (1) it follows that

R(Tn) ≤ R(T1) · ((M(T1) − 1)/M(T1)) · ((M(T2) − 1)/M(T2)) · . . . · ((M(Tn−1) − 1)/M(Tn−1)) .   (3)

From the description of the algorithm U it follows that Tn is a non-degenerate table. Consequently,

R(Tn) ≥ 1 .   (4)

From Lemma 2 it follows that for p = 1, . . . , n − 1 the inequality

M(Tp) ≤ M(T)   (5)

holds. From (3)–(5) it follows that 1 ≤ R(T) ((M(T) − 1)/M(T))^(n−1). Therefore

(1 + 1/(M(T) − 1))^(n−1) ≤ R(T) .

If we take the natural logarithm of both sides of this inequality, we conclude that (n − 1) ln(1 + 1/(M(T) − 1)) ≤ ln R(T). It is known that for any natural m the inequality ln(1 + 1/m) > 1/(m + 1) holds. Taking into account that M(T) ≥ 2, we obtain the inequality (n − 1)/M(T) < ln R(T). Hence n < M(T) ln R(T) + 1. Taking into account that h(U(T)) = n, we obtain h(U(T)) < M(T) ln R(T) + 1.

8 Conclusion

A greedy algorithm for minimization of decision tree depth has been described. This algorithm is applicable to real data tables, which are transformed into decision tables. The structure of the algorithm is simple enough that it is possible to obtain bounds on its precision. These bounds show that, under some natural assumptions on the class NP and on the class of considered decision tables, the algorithm is, apparently, close to the best approximate polynomial algorithms for minimization of decision tree depth. The second peculiarity of the algorithm is the way it works with missing values: if we compute the value of an attribute f(xi1, . . . , xim) and the value of at least one of the variables xi1, . . . , xim is missing, then the computation goes along the special edge labelled by ∗. This peculiarity may be helpful if we view the constructed decision tree as a way of representing knowledge about the data table D.

References

1. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth & Brooks (1984)
2. Feige, U.: A threshold of ln n for approximating set cover (Preliminary version). In: Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (1996) 314–318
3. Moshkov, M.Ju.: Conditional tests. In: Yablonskii, S.V. (ed.): Problems of Cybernetics 40. Nauka Publishers, Moscow (1983) 131–170 (in Russian)
4. Moshkov, M.Ju.: About works of R.G. Nigmatullin on approximate algorithms for solving of discrete extremal problems. Discrete Analysis and Operations Research (Series 1) 7(1) (2000) 6–17 (in Russian)
5. Moshkov, M.Ju.: Approximate algorithm for minimization of decision tree depth. In: Proceedings of the Ninth International Conference Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Chongqing, China. Lecture Notes in Computer Science 2639, Springer-Verlag (2003) 611–614
6. Moshkov, M.Ju.: On minimization of decision tree depth for real data tables. In: Proceedings of the Workshop Concurrency Specification and Programming, Czarna, Poland (2003)
7. Pawlak, Z.: Rough Sets – Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht Boston London (1991)
8. Skowron, A., Rauszer, C.: The discernibility matrices and functions in information systems. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory. Kluwer Academic Publishers, Dordrecht Boston London (1992) 331–362

Consistency Measures for Conflict Profiles

Ngoc Thanh Nguyen and Michal Malowiecki

Department of Information Systems, Wroclaw University of Technology,
Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland
{thanh,malowiecki}@pwr.wroc.pl

Abstract. The formal definition of conflict was formulated and analyzed by Pawlak, who presented the concept and structure of conflicts. In this concept a conflict may be represented by an information system (U, A), where U is a set of agents taking part in the conflict and A is a set of attributes representing conflict issues. On the basis of the information system Pawlak also defined various measures describing conflicts, for example the measure of the military potential of the conflict sides. The concept was later developed by other authors, who defined a multi-valued structure of conflict and proposed using consensus methods for solving conflicts. In this paper the authors present the definition of consistency functions, which should make it possible to measure the degree of consistency of conflict profiles. A conflict profile is defined as a set of opinions of agents referring to the subject of the conflict. Using this degree one can choose the method for solving the conflict, for example a negotiation method or a consensus method. A set of postulates for consistency functions is defined and analyzed. Besides, some concrete consistency functions are formulated and their properties with respect to the postulates are included.

1 Introduction

In Pawlak's concept [20] a conflict is defined by an information system (U, A), in which U is the set of agents being in conflict, A is a set of attributes representing conflict issues, and the information table contains the conflict content, i.e. the opinions of the agents on particular issues. For each issue, each agent has three possibilities for presenting his opinion: (+) yes, (−) no, and (0) neutral. For example, Table 1 below represents the content of a conflict [20]. Within a conflict one can determine several conflict profiles. A conflict profile is the set of opinions generated by the agents on an issue. In the conflict represented by Table 1 we have 5 profiles Pa, Pb, Pc, Pd and Pe, where for example Pb = {+, +, −, −, −, +} and Pc = {+, −, −, −, −, −}. Referring to the opinions belonging to these profiles, one can observe that the opinions of a certain profile are more similar to each other (that is, more convergent or more consistent) than the opinions of some other profile. For example, the opinions in profile Pc seem to be more consistent than the opinions in profile Pb. Below we present another, more practical example.

Table 1. The content of a conflict.

 U   a  b  c  d  e
 1   −  +  +  +  +
 2   +  +  −  −  −
 3   +  −  −  −  0
 4   0  −  −  0  −
 5   +  −  −  −  −
 6   0  +  −  0  +
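As a small illustration (not from the paper; the Python representation is an assumption of the example), the conflict content of Table 1 can be stored column-wise, so that a conflict profile is simply the list of opinions of agents 1–6 on one issue. Repeated opinions are kept, so a list rather than a set is used here.

```python
conflict = {
    'a': ['-', '+', '+', '0', '+', '0'],
    'b': ['+', '+', '-', '-', '-', '+'],
    'c': ['+', '-', '-', '-', '-', '-'],
    'd': ['+', '-', '-', '0', '-', '0'],
    'e': ['+', '-', '0', '-', '-', '+'],
}

def profile(issue):
    """Conflict profile for one issue: the opinions of agents 1..6 on that issue."""
    return conflict[issue]

print(profile('b'))   # ['+', '+', '-', '-', '-', '+']  (the profile Pb)
print(profile('c'))   # ['+', '-', '-', '-', '-', '-']  (the profile Pc)
```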

Inconsistency often occurs in the testimonies of crime witnesses. Witnesses often have different versions of the same event or of the same suspect. In this example we consider an investigator who has gathered the following testimonies from four witnesses describing a suspect:

• Witness A said: It was a very high and black man with long black hairs;
• Witness B said: It was a high and brown-eyed man with medium long hairs;
• Witness C said: Skin: dark; Hairs: short and dark; Height: medium; Eyes: blue;
• Witness D said: The suspect was a bald and short man, his skin was brown.

As we can notice, the opinions of the witnesses are not identical, thus they are in conflict. With reference to the depositions of the witnesses we can create 5 issues of the conflict: colour of the skin, colour of eyes, colour of hairs, length of hairs, and height. The following conflict profiles are determined:

P1. Colour of the skin: {A: black, C: dark, D: brown}
P2. Colour of eyes: {B: brown, C: blue}
P3. Colour of hairs: {A: black, C: dark}
P4. Length of hairs: {A: long, B: medium long, C: short, D: bald}
P5. Height: {A: very high, B: high, C: high, D: short}

Let us notice that in each profile the opinions differ, thus the knowledge of the investigator about the suspect is inconsistent. He may not be sure of the proper value for each element of the description, but the degrees of his uncertainty are not identical for all the conflict issues. It seems that the more consistent the witnesses' opinions are, the smaller the uncertainty degree is. In the profiles presented above one may conclude that the elements of profile P3 are the most consistent, because the black and dark colours are very similar, while the elements of profile P4 are the least consistent. In this profile the four witnesses mention all four possible values for the length of hairs, so the uncertainty degree of the investigator should be large in this case.

These examples show that it is necessary to determine a value which would represent the degree (or level) of consistency (or inconsistency) of a conflict profile. This value may be very useful in evaluating whether a conflict is "solvable" or not. In this paper we propose to define this parameter of conflict profiles by means of consistency functions. We show a set of postulates which should be satisfied by these functions. We also define several consistency functions and show which postulates they fulfil.

The paper is organized as follows. After the Introduction, Section 2 presents some aspects of knowledge consistency and inconsistency. Section 3 outlines the conflict theories which are the base of the consistency measures. Section 4 includes an overview of


Definition, postulates and their analysis for consistency functions are given in Section 5. Section 6 presents the definitions of four concrete consistency functions and their analysis with respect to the defined postulates. Section 7 describes several practical aspects of consistency measures, and conclusions and a description of future work are given in Section 8.

2 Consistency and Inconsistency of Knowledge

The term "consistency" seems to be a well-known and often used notion. This is caused by the fact that the word is used intuitively. Many authors use this term to describe divergences in various settings. The notion of consistency of knowledge appears more seldom in the knowledge engineering context, but in most cases it is still used intuitively in order to name some divergences in scientific research. Authors usually use the term, but they do not define what it means. Thus the following questions may arise: What does it mean that knowledge is consistent or inconsistent? Is there any way to compare levels (or degrees) of consistency or inconsistency? Is there any way to measure them? Authors often ignore these questions, because all they need is a divalent (two-valued) definition: either all versions of knowledge are identical (that is, the knowledge is consistent), or not. However, there exist situations in which it is necessary to know the level of knowledge consistency. One of these situations is related to solving knowledge conflicts in multiagent environments. In this kind of conflict the consistency level (or degree) is very useful, because it can help to decide what to do with the agents' knowledge states. If these states differ to a small degree (the consistency level is high), then the agents could make a compromise (consensus) for reconciling their knowledge. If they differ to a large degree (that is, the consistency level is low), then it is necessary to gather other knowledge states for a more precise reconciliation. The need for measures of knowledge consistency has been announced earlier in the context of deciding whether looking for consensus as the way to solve a knowledge conflict of agents is rational [14]. Indeed, in multiagent systems, where the sources of knowledge acquisition are as various as the methods of its acquisition, the inconsistency of knowledge leads to conflicts. In 1990 Ng and Abramson [12] asked: Is it possible to perform consistency checking on knowledge bases mechanically? If so, how? They claim that it is very important that a knowledge base be consistent, because inconsistent knowledge bases may provide inconsistent conclusions. Getting back to conflicts, we can notice that a lot of methods of solving conflicts have been worked out [4][14][15][17], but the level of divergence has usually been described in a divalent way: either something was consistent or not. This need has led to introducing some measures [2]. For looking for the consensus or the knowledge consistency it is necessary to agree on some formal representation of this knowledge, based on distance functions between knowledge states [14]. A multiagent system is a case of distributed environments. Knowledge possessed by agents usually comes from different sources and the problem of their integration may arise. The knowledge of agents may be not only true or false, but also undefined or inconsistent [8]. Knowledge is undefined if there is a case in which the agents do not have any information (absence of information) referring to some subject, and the


knowledge is inconsistent if the agents have different versions of it. For integrating this kind of knowledge Loyer, Spyratos and Stamate [8] use Belnap’s four-valued logics which uses true, false, undefined and inconsistent values. They use multivalued logic for describing acquired knowledge and its reasoning. However, the knowledge consistency is divalent. If all versions of knowledge are identical then the knowledge is consistent, else it is inconsistent. The notion of consistency is also very important in enforcing consistency in knowledge-based systems. The rule-based consistency enforcement in knowledgebased systems has been presented by Eick and Werstein [5]. These authors deal with enforcing the consistency constraints which are also called semantic integrity constraints. Actually consistency is one of the most important issues in database system, but it is considered in other sense and there is no need to measure it up. In literature the term of knowledge consistency has been used very often, but we still need an answer to the question about a definition of knowledge consistency. We found some measures, by means of which we can estimate it and we can use it to solve many problems, but we still need a definition, which will translate intuitive approach into formal definition. This definition is provided by Neiger [10]: Knowledge consistency is a property that a knowledge interpretation has with respect to a particular system. Neiger refers to the definition of internal knowledge consistency defined by Helpern and Moses [6]. He formalizes this and other forms of knowledge consistency. After giving the definition, Neiger shows some cases in which knowledge consistency can be applied in distributed systems. In this way he shows how consistent knowledge interpretation can be used to simplify the design of knowledgebased protocols. In other paper [11] Neiger presents how to use knowledge consistency for useful suspension of disbelief. He considers alternative interpretation of knowledge and explores the notion of consistent interpretation. Neiger shows how it can be used to circumvent the known impossibility results in a number of cases. There are of course a lot of applications of knowledge consistency. The authors use this term in each case where it is some kind of divergence, for example between some pieces of knowledge. But there are some situations where we need to know exactly the level of consistency or inconsistency. So we need good measures and tools to estimate the quality of these parameters. These tools have been introduced in [9] and in this paper we are going to present some results of their analysis.

3 Outline of Conflict Theories

The simplest conflict takes place when two bodies generate different opinions on the same subject. In the works [18][19][20] Pawlak specifies the following elements of a conflict: a set of agents, a set of issues, and a set of opinions of these agents on these issues. The agents and the issues are related to one another in some social or political context. We say that a conflict takes place if there are at least two agents whose opinions on some issue differ from each other. Generally, one can distinguish the following three components of a conflict:
• Conflict body: specifies the direct participants of the conflict.
• Conflict subject: specifies to whom (or what) the conflict refers and its topic.
• Conflict content: specifies the opinions of the participants on the conflict topic.


In Pawlak's approach the body of conflict is a set of agents, the conflict subject consists of contentious issues and the conflict content is a collection of tuples representing the participants' opinions. Information system tools [4][21] seem to be very good for representing conflicts. In works [14][15] the authors have defined conflicts in distributed systems in the similar way. However, we have built a system which can include more than one conflict, and within one conflict values of the attributes representing agents' opinions should more precisely describe their opinions. This aim has been realized by assuming that values of attributes representing conflict contents are not atomic as in Pawlak's approach, but sets of elementary values, where an elementary value is not necessarily an atomic one. Thus we accept the assumption that attributes are multi-valued, similarly like in Pawlak's concept of multi-valued information systems. Besides, the conflict content in our model is partitioned into three groups. The first group includes opinions of type “Yes, the fact should take place”, the second includes opinions of type “No, the following fact should not take place”, and to the last group contains the opinions of type “I do not know if the fact takes place”. For example, making the forecast of sunshine for tomorrow a meteorological agent can present its opinion as “(Certainly) it will sunny between 10a.m. and 12a.m. and will be cloudy between 3p.m. and 6p.m.”, that means during the rest of the day the agent does not know if it will be sunny or not. This type of knowledge should be taken into account in the system because the set of all possible states of the real world in which the system is placed, is large and an agent having limited possibilities is not assumed to “know everything”. We call the above three kinds of knowledge as positive, negative and uncertain, respectively. In Pawlak's approach positive knowledge is represented by value “+”, and negative knowledge by value “−”. Certain difference occurs between the semantics of Pawlak's “neutrality” and the semantics of “uncertainty” of agents presented in mentioned works. Namely, most often neutrality appears in voting processes and does not mean uncertainty, while uncertainty means that an agent is not competent to present its opinions on some matter. It is worth to note that rough set theory is a very useful tool for conflict analysis. In works [4][21] the authors present an enhancement of the model proposed by Pawlak. With using rough sets tools they explain the nature of conflict and define the conflict situation model in such way that encapsulates the conflict components. Such approach also enables to choose consensus as the conflicts solution, although it is still assumed that attribute values are atomic. In the next section we present an approach to conflict solving, which is based on determining consensus for conflict profiles.

4 The Roles of Consensus Methods in Solving Conflicts

Consensus theory has its roots in choice theory. A choice from some set A of alternatives is based on a relation α called a preference relation. Using it, the choice function may be defined as follows:
C(A) = {x∈A: (∀y∈A)((x,y)∈α)}
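As a small illustration (ours, with toy alternatives and a hypothetical preference relation given explicitly as a set of pairs), the sketch below computes the choice set C(A) directly from this definition.

```python
# Choice function C(A) = {x in A : for all y in A, (x, y) in alpha}.
def choice(A, alpha):
    return {x for x in A if all((x, y) in alpha for y in A)}

# Toy example: three alternatives linearly ordered x >= y >= z (reflexive pairs included).
A = {'x', 'y', 'z'}
alpha = {('x', 'x'), ('x', 'y'), ('x', 'z'),
         ('y', 'y'), ('y', 'z'), ('z', 'z')}
print(choice(A, alpha))  # {'x'}
```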


Many works have dealt with the special case where the preference relation is determined on the basis of a linear order on A. The most popular were the Condorcet choice functions. A choice function is called a Condorcet function if:
x∈C(A) ⇔ (∀y∈A)(x∈C({x,y}))
In the consensus-based approaches, however, it is assumed that the chosen alternatives do not have to be included in the set presented for choice, thus C(A) need not be a subset of A. At the beginning of this research the authors dealt only with simple structures of the set A (called the macrostructure), such as linear or partial orders. Later, with the development of computing techniques, the structure of each alternative (called the microstructure) has also been investigated. Most often the authors assume that all the alternatives have the same microstructure [3]. On the basis of the microstructure one can determine a macrostructure of the set A. Among others, the following microstructures have been investigated: linear orders, ordered set partitions, non-ordered set partitions, n-trees, and time intervals. The following macrostructures have been considered: linear orders and distance (or similarity) functions. A consensus of the set A is most often determined on the basis of its macrostructure by some optimality rules. If the macrostructure is a distance (or similarity) function, then Kemeny's median [1] is very often used to choose the consensus. According to Kemeny's rule the consensus should be nearest to the elements of the set A. Now we analyze the roles of consensus in conflict resolution in distributed environments. Before the analysis we should consider what is represented by the conflict content (i.e. the opinions generated by the conflict participants). We may notice that the opinions included in the conflict content represent an unknown solution of some problem. The following two cases may take place [16]:
1. The solution is independent of the opinions of the conflict participants. As an example of this kind of conflict we can consider different forecasts generated by different meteorological stations referring to the same region and period of time. The problem then relies on determining the proper scenario of the weather, which is unambiguous and really known only when the time comes, and is independent of the given forecasts. A conflict in which the solution is independent of the opinions of the conflict participants is called an independent conflict. In independent conflicts the independence means that the solution of the problem exists but it is not known to the conflict participants. The reasons for this phenomenon may follow from many aspects, among others from the ignorance of the conflict participants or from the random characteristics of the solution, which may make the solution impossible to calculate in a deterministic way. Thus the content of the solution is independent of the conflict content, and the conflict participants have to "guess" it. In this case their solutions have to reflect the proper solution, but it is not known whether they do so in a valid and complete way. The natural solution of the conflict then relies on determining the proper version of data on the basis of the given opinions of the participants. This final version should satisfy the following condition: It should best reflect the given versions.
The above-defined condition is suitable for this kind of conflict because the versions given by the conflict participants reflect the "hidden" and independent solution, but it is not known to what degree. Thus in advance each of them is treated as partially valid and partially invalid (it is not known which of its parts is valid and which is


invalid). The degree to which an opinion is treated as valid is the same for each opinion. This degree need not be equal to 100%. The reason why all the opinions should be taken into account is that it is not known how large this degree is; it is only known to be greater than 0% and smaller than 100%. In this way the consensus should best reflect these opinions. In other words, it should best represent them. For resolving independent conflicts the solution of the problem may be determined by consensus methods. Here, for consensus calculation one should use the criterion of the minimal sum of distances between the consensus and the elements of the profile representing the opinions of the conflict participants. This criterion guarantees satisfying the condition mentioned above.
2. The solution is dependent on the opinions of the conflict participants. Conflicts of this kind are called dependent conflicts. In this case it is the opinions of the conflict participants which decide about the solution. As an example let us consider votes in an election. The result of the election is determined only on the basis of these votes. In general this case has a social or political character, and the diversity between the opinions of the participants most often follows from differences in the choice criteria or their hierarchy. For dependent conflicts the natural resolution relies on determining a version of data on the basis of the given opinions. This final version (consensus) should satisfy the following condition: It should be a good compromise acceptable by the conflict participants. Thus the consensus should not only best represent the opinions but should also reflect each of them to the same degree (with the assumption that each of them is treated in the same way). The condition of an "acceptable compromise" means that no opinion should be "harmed" or "favored". Consider the following example. From a set of candidates (denoted by symbols X, Y, Z, ...) 4 voters have to choose a committee (a subset of the candidates' set). To this aim each voter votes for the committee which in his opinion is the best one. Assume that the votes are the following: {X, Y, Z}, {X, Y, Z}, {X, Y, Z} and {T}. Let the distance between two sets of candidates be equal to the cardinality of their symmetric difference. If the consensus choice is made only by the first condition, then committee {X, Y, Z} should be determined, because the sum of distances between it and the votes is minimal. However, one can note that it prefers the first 3 votes while totally ignoring the fourth (the distances from this committee to the votes are 0, 0, 0 and 4, respectively). Now, if we take committee {X, Y, Z, T} as the consensus, then the distances are 1, 1, 1 and 3, respectively. In this case the consensus is neither too far from the votes nor does it "harm" any of them. It has been proved that these conditions in general may not be satisfied simultaneously [13]. The choice based on the criterion of minimizing the sum of squared distances between the consensus and the profile's elements gives a consensus more uniform than the consensus chosen by minimizing the sum of distances. Therefore, the criterion of the minimal sum of squared distances is also very important. However, this criterion often generates computationally complex (NP-hard) problems, which demand working out heuristic algorithms.
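The committee example can be checked directly. The sketch below (our illustration; the votes and the symmetric-difference distance follow the example above) compares the two criteria: the sum of distances and the sum of squared distances.

```python
# Votes of the 4 voters and two candidate consensuses from the example above.
votes = [{'X', 'Y', 'Z'}, {'X', 'Y', 'Z'}, {'X', 'Y', 'Z'}, {'T'}]

def dist(a, b):
    # distance = cardinality of the symmetric difference of two committees
    return len(a ^ b)

def sum_of_distances(c, profile):
    return sum(dist(c, v) for v in profile)

def sum_of_squares(c, profile):
    return sum(dist(c, v) ** 2 for v in profile)

for committee in ({'X', 'Y', 'Z'}, {'X', 'Y', 'Z', 'T'}):
    print(sorted(committee),
          sum_of_distances(committee, votes),   # 4 for {X,Y,Z}, 6 for {X,Y,Z,T}
          sum_of_squares(committee, votes))     # 16 for {X,Y,Z}, 12 for {X,Y,Z,T}
```

The sum-of-distances criterion prefers {X, Y, Z}, while the sum-of-squares criterion prefers {X, Y, Z, T}, as discussed above.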
Figure 1 below presents the scheme of using consensus methods in the above mentioned cases.


Fig. 1. The scheme for using consensus methods: for a profile X representing a conflict (an unknown solution should be determined), if the solution is independent of the opinions of the conflict participants, the consensus should best represent the given opinions and the criterion of minimizing the sum of distances between the consensus and the profile's elements should be used; if the solution is dependent on the opinions of the conflict participants, the consensus should be a compromise acceptable by the conflict participants and the criterion of minimizing the sum of the squares of distances between the consensus and the profile's elements should be used.

5 Postulates for Consistency Measures

Formally, let U denote a finite universe of objects (alternatives), and let Π(U) denote the set of subsets of U. By Π̂_k(U) we denote the set of k-element subsets (with repetitions) of the set U for k∈N, and let Π̂(U) = ∪_{k>0} Π̂_k(U). Each element of the set Π̂(U) is called a profile. In this work we do not use the formalism often used in consensus theory [1], in which the domain of consensus is defined as U* = ∪_{k>0} U^k, where U^k is the k-fold Cartesian product of U. In this way we specify how many times an object can occur in a profile and ensure that the order of the profile elements is not important. We also accept in this paper the algebra of sets with repetitions (multisets) given by Lipski and Marek [7]. Some of its elements are as follows. An expression A=(x,x,y,y,y,z) is called a set with repetitions with cardinality equal to 6; in the set A element x appears 2 times, y 3 times and z one time. Set A can also be written as A=(2∗x,3∗y,1∗z). The sum of sets with repetitions is denoted by the symbol ∪̇ and is defined in the following way: if element x appears in set A n times and in B n' times, then in their sum


A ∪̇ B the same element appears n+n' times. For example, if A=(2∗x,3∗y,1∗z) and B=(4∗x,2∗y), then A ∪̇ B=(6∗x,5∗y,1∗z). The difference of sets with repetitions is denoted by the symbol "–"; its definition follows from the following example: (6∗x,5∗y,1∗z) – (2∗x,3∗y,1∗z) = (4∗x,2∗y). A set A with repetitions is a subset of a set B with repetitions (A⊆B) if each element from A does not have a greater number of occurrences in A than it has in B. For example (2∗x,3∗y,1∗z) ⊆ (2∗x,4∗y,1∗z). In this paper we only assume that the macrostructure of the set U is known as a distance function d: U×U → ℜ, which is:
a) Nonnegative: (∀x,y∈U)[d(x,y) ≥ 0],
b) Reflexive: (∀x,y∈U)[d(x,y) = 0 iff x = y],
c) Symmetrical: (∀x,y∈U)[d(x,y) = d(y,x)].

For the normalization process we can assume that the values of the function d belong to the interval [0,1] and the maximal distance between elements of the universe U is equal to 1. Let us notice that the above conditions are only a part of the metric conditions. A metric is a good measure of distance, but its conditions are too strong. A space (U,d) defined in this way does not need to be a metric space. Therefore we will call it a distance space [13]. A profile X is called homogeneous if all its elements are identical, that is X={n∗x} for some x∈U and n being a natural number. A profile is heterogeneous if it is not homogeneous. A profile is called distinguishable if all its elements are different from each other. A profile X is multiple referring to a profile Y (or X is a multiple of Y) if X={n∗x1,..., n∗xk} and Y = {x1,...,xk}. A profile X is regular if it is a multiple of some distinguishable profile. By the symbol c we denote the consistency function of profiles. This function has the following signature:
c: Π̂(U) → [0,1],
where [0,1] is the closed interval of real numbers between 0 and 1. The idea of this function relies on measuring the consistency degree of a profile's elements. The consistency degree of a profile resembles the degrees of indiscernibility (discernibility) defined for an information system [22]. However, they are different concepts. The difference is that the consistency degree represents the coherence level of the profile elements, and for its measurement one should first define the distances between these elements. The requirements for consistency are expressed in the following postulates:
P1a. Postulate for maximal consistency: If X is a homogeneous profile then c(X)=1.
P1b. Extended postulate for maximal consistency: For X^(n) = {n∗x, k1∗x1, ..., km∗xm} being a profile such that element x occurs n times and element xi occurs ki times, where ki is a constant for i=1,2,…,m, the following equation should be true:
lim_{n→+∞} c(X^(n)) = 1.


P2a. Postulate for minimal consistency: If X={a,b} and d(a,b) = max_{x,y∈U} d(x,y) then c(X)=0.

P2b. Extended postulate for minimal consistency: For X^(n) = {n∗a, k1∗x1, ..., km∗xm, n∗b} being a profile such that elements a and b occur n times, element xi occurs ki times, where ki is a constant for i=1,2,…,m, and d(a,b) = max_{x,y∈U} d(x,y), the following equation should be true:
lim_{n→+∞} c(X^(n)) = 0.

P2c. Alternative postulate for minimal consistency: If X=U then c(X)=0.
P3. Postulate for non-zero consistency: If there exist a,b∈X such that d(a,b) < max_{x,y∈U} d(x,y) then c(X)>0.

P4. Postulate for heterogeneous profiles: If X is a heterogeneous profile then c(X) < 1.
We can assume that n > 1, because if n = 1 then the function c could be indefinite for Y. Let a' be such an element of the universe U that d(a',Y) = min(D(Y)). This implies that d(a',Y) ≤ d(a,Y). Besides, from d(a,b) = max_{x∈X} d(a,x) it follows that (n−1)⋅d(a,b) ≥ d(a,Y). Then we have

min(D(X))/card(X) = d(a,X)/n = (d(a,Y) + d(a,b))/n ≥ d(a,Y)/(n−1) ≥ d(a',Y)/(n−1) = min(D(Y))/card(Y).

Because the function c satisfies postulate P6, we have c(X) ≤ c(Y). This property allows one to improve the consistency by removing from the profile the element which generates the maximal distance to the element with the minimal sum of distances to the profile's elements. It also shows that if a consistency function satisfies postulate P6 then it should also partially satisfy postulate P7a.
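The following sketch (ours; for simplicity the candidate element a is searched within the profile itself rather than over the whole universe U) shows one such improvement step for a consistency function satisfying postulate P6, such as c4 defined in Section 6 according to Table 2.

```python
# One improvement step suggested by the property above: find the element a of
# the profile with the minimal sum of distances d(a, X), then drop one
# occurrence of the element b that is farthest from a.
def improve_once(X, d):
    def d_sum(x, S):
        return sum(d(x, y) for y in S)
    a = min(X, key=lambda x: d_sum(x, X))   # element with minimal sum of distances
    b = max(X, key=lambda x: d(a, x))       # element farthest from a
    Y = list(X)
    Y.remove(b)                             # remove one occurrence of b
    return Y
```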


Proposition 2. Let c∈CP6, and let a be such an element of the universe U that d(a,X) = min(D(X)). The following dependence is true: c(X) ≤ c(X ∪̇ {a}).

Proof. Let Y = X ∪̇ {a}. From d(a,X) = min(D(X)) it follows that d(a,Y) = min(D(Y)). Besides, d(a,X) = d(a,Y), thus min(D(X))/card(X) ≥ min(D(Y))/card(Y), because card(Y) = card(X)+1. Using the assumption that c∈CP6 we have c(X) ≤ c(X ∪̇ {a}).
This property allows one to improve the consistency by adding to the profile an element which generates the minimal sum of distances to the profile's elements. It also shows that if a consistency function satisfies postulate P6 then it should also partially satisfy postulate P7b. Propositions 3-5 below show the independence of postulates P7a and P7b from some other postulates.
Proposition 3. Postulates P1a and P2a are inconsistent with postulate P7a, that is CP1a ∩ CP2a ∩ CP7a = ∅.
Proof. We show that if a consistency function c satisfies postulates P1a and P2a then it cannot satisfy postulate P7a. Let c∈CP1a∩CP2a, let X=U={a,b} and d(a,b) = max_{x,y∈U} d(x,y) > 0; then c(X)=0 according to postulate P2a. Because c satisfies postulate

P1a, we have c(X − {a}) = c({b}) = 1. Besides, we have d(a,X) = min(D(X)) and c(X − {a}) = 1 > c(X) = 0, so the function c cannot satisfy postulate P7a. That means postulate P7a is independent of postulates P1a and P2a.
Proposition 4. Postulates P1a and P4 are inconsistent with postulate P7a, that is CP1a ∩ CP4 ∩ CP7a = ∅.
Proof. We show that if a consistency function c satisfies postulates P1a and P4 then it cannot satisfy postulate P7a. Let c∈CP1a∩CP4, let X=U={a,b} and d(a,b) = max_{x,y∈U} d(x,y) > 0; then c(X) < 1 according to postulate P4. Because c satisfies postulate P1a, we have c(X − {a}) = c({b}) = 1 > c(X), so the function c cannot satisfy postulate P7a. That means postulate P7a is independent of postulates P1a and P4.
Proposition 5. Postulates P2a and P3 are inconsistent with postulate P7b, that is CP2a ∩ CP3 ∩ CP7b = ∅.
Proof. We show that if a consistency function c satisfies postulates P2a and P3 then it cannot satisfy postulate P7b. Let c∈CP2a∩CP3, let X=U={a,b} and d(a,b) = max_{x,y∈U} d(x,y) > 0; we have d(a,X) = min(D(X)) and d(b,X) = max(D(X)).


Then c(X) = 0 because c∈CP2a. But c(X ∪̇ {b}) = c({a,b,b}) > 0, because d(b,b)=0 and the function c satisfies postulate P3, so it may not satisfy postulate P7b. Here we have the independence of postulate P7b from postulates P2a and P3.

6 Consistency Functions Analysis

In this section we present an analysis of four consistency functions. These functions are defined as follows. Let X = {x1, …, xM} be a profile. We assume that M > 1, because if M = 1 then the profile X is homogeneous. We introduce the following parameters:
• The matrix of distances between the elements of profile X:
D^X = [d_ij^X], where d_ij^X = d(xi, xj) for i, j = 1, …, M,
• The vector of average distances from each element to the remaining ones:
W^X = [w_i^X], where w_i^X = (1/(M−1)) Σ_{j=1..M} d_ji^X for i = 1, …, M,
• The diameters of the sets X and U:
Diam(X) = max_{x,y∈X} d(x,y), Diam(U) = max_{x,y∈U} d(x,y) = 1,
and the maximal element of the vector W^X:
Diam(W^X) = max_{1≤i≤M} w_i^X,
representing the element of profile X which generates the maximal sum of distances to the other elements,
• The average distance in profile X:
d̄(X) = (1/(M(M−1))) Σ_{i=1..M} Σ_{j=1..M} d_ij^X = (1/M) Σ_{i=1..M} w_i^X,
• The sum of distances between an element x of the universe U and the elements of the set X: d(x,X) = Σ_{y∈X} d(x,y),
• The set of all sums of distances: D(X) = {d(x,X): x∈U},
• The minimal sum of distances from an object to the elements of profile X: d_min(X) = min(D(X)).


These parameters are now applied in defining the following consistency functions:
c1(X) = 1 − Diam(X),
c2(X) = 1 − Diam(W^X),
c3(X) = 1 − d̄(X),
c4(X) = 1 − (1/M)·d_min(X).

The values of the functions c1, c2, c3 and c4 reflect, respectively:
- c1(X) – the maximal distance between two elements of the profile. The intuitive sense of this function is based on the fact that if this maximal distance is equal to 0 then the consistency is maximal (that is, 1).
- c2(X) – the maximal average distance between an element of profile X and the other elements of this profile. If the value of this maximal average distance is small, that is, the elements of profile X are near to each other, then the consistency should be high.
- c3(X) – the average distance between the elements of X. This parameter seems to be the most representative for consistency. The larger this value is, the smaller the consistency is, and vice versa.
- c4(X) – the minimal average distance between an element of the universe U and the elements of X. The element of the universe U which generates the minimal average distance to the elements of profile X may be the consensus for this profile. The profile has a good consensus (that is, a good solution of the conflict) if this consensus generates a small average distance to the elements of the profile. In this case the consistency should be large.
Table 2 presented below shows the results of analysing these functions. The columns represent postulates and the rows represent the defined functions. The symbol '+' means that the function satisfies the postulate, the symbol '−' means that the function does not satisfy the postulate, and the symbol '±' means partial satisfaction of the given postulate. From these results it follows that the function c4 partially satisfies postulates P7a and P7b. The reason is that the function c4 satisfies postulate P6, together with Propositions 1 and 2.
Table 2. Results of the consistency functions analysis.

     P1a  P1b  P2a  P2b  P2c  P3  P4  P5  P6  P7a  P7b
c1    +    −    +    +    +   −   +   +   −   −    −
c2    +    −    +    −    −   −   +   +   −   +    +
c3    +    +    +    −    −   +   +   −   −   +    +
c4    +    +    −    −    −   +   +   +   +   ±    ±
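The four functions can be computed directly from a profile and a normalized distance function. The sketch below (our illustration, not the authors' software) implements c1-c4 and evaluates them on the two-element profile X = U = {a, b} with maximal distance, used in the proofs of Propositions 3-5.

```python
# The four consistency functions, assuming a normalized distance d: U x U -> [0, 1].
from itertools import product

def diam(elements, d):
    """Maximal distance between elements of a multiset (its diameter)."""
    return max(d(x, y) for x, y in product(elements, repeat=2))

def c1(X, d):
    # c1(X) = 1 - Diam(X)
    return 1 - diam(X, d)

def c2(X, d):
    # c2(X) = 1 - Diam(W^X): maximal average distance from an element to the rest
    M = len(X)
    w = [sum(d(x, y) for y in X) / (M - 1) for x in X]
    return 1 - max(w)

def c3(X, d):
    # c3(X) = 1 - average distance between elements of X
    M = len(X)
    total = sum(d(x, y) for x, y in product(X, repeat=2))
    return 1 - total / (M * (M - 1))

def c4(X, d, universe):
    # c4(X) = 1 - (1/M) * d_min(X), where d_min(X) is the minimal sum of
    # distances from an object of the universe to the elements of X
    M = len(X)
    d_min = min(sum(d(u, x) for x in X) for u in universe)
    return 1 - d_min / M

# Two-element universe with maximal distance 1, i.e. the profile X = {a, b}.
d = lambda x, y: 0.0 if x == y else 1.0
U = ['a', 'b']
X = ['a', 'b']
print(c1(X, d), c2(X, d), c3(X, d), c4(X, d, U))  # 0.0 0.0 0.0 0.5
```

The printed values agree with the P2a column of Table 2: c1, c2 and c3 are 0 on this maximally inconsistent profile, while c4 is not.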


Satisfying some postulates and not satisfying others shows many properties of each consistency function. Below we present another property of the functions c2 and c3 [2].
Proposition 6. If X'⊆X, I is the set of indexes of the elements from X', and w_i^X = Diam(W^X) for i∈I, then the profile X\X' should not have smaller consistency than X, that is c(X\X') ≥ c(X), where c∈{c2, c3}.
Proof. a) For the function c2 the proof follows immediately from the observation that Diam(W^{X\X'}) ≤ Diam(W^X).
b) For the function c3 we have d̄(X) = (1/M) Σ_{i=1..M} w_i^X; with the assumption that w_i^X = Diam(W^X) for i∈I it follows that d̄(X) ≥ d̄(X\X'), that is c3(X\X') ≥ c3(X).
This property shows a way to improve the consistency by removing from the profile those elements which generate the maximal average distance. This way of consistency improvement is simple and is therefore a useful property of the functions c2 and c3. The way of consistency improvement using the function c4, which satisfies postulate P6, has been presented by means of Propositions 1 and 2.

7 Practical Aspects of Consistency Measures

One of the practical aspects of consistency measures is their application to the choice of the best method for solving conflicts in distributed environments. Some methods for conflict solving have been developed, and each of them is suitable for a given kind of conflict. But before one decides to select a method for a conflict, one should take into account the degree of consistency of the opinions which occur in the conflict. This measure should be useful in evaluating whether the conflict is already "mature" enough for solving or not yet. Let us consider an example which illustrates the statement that it is good to know the consistency level before using consensus algorithms. Assume that someone is collecting information from meteorological institutes about the weather for the city of Zakopane during the weekend. He wants to know if it will be snowing during the weekend. Five institutes say yes and five other institutes say no. Thus a conflict appears. The profile of the conflict looks as follows: X = {5∗yes, 5∗no}. The consensus, if determined, should be yes or no. However, neither yes nor no seems to be a good conflict solution. The consistency of the profile is low; according to postulates P2a and P5 it should be equal to 0. This is the reason why the solution is not good. Consensus algorithms are usually very complex. Therefore, it is worth checking the consistency of the conflict profile before determining the consensus. Evaluating the consistency measure before using consensus algorithms may eliminate those situations in which the consensus is not a good conflict solution. This will surely increase the


effectiveness of conflict-solving systems. In the above example there is no good conflict solution at all, and one has to collect more information or choose another conflict-solution method. Consistency measures are also very helpful for investigators during investigations. Witnesses' opinions about a suspect can be very inconsistent. The consistency degree of the evidence about a suspect may be used for determining the reliability of the witnesses. Another interesting application of consistency measures is some kind of exploratory system, where measurement results are collected in some interval of time. The results may be inconsistent, but when the consistency of the results equals 1 then an alert can be sent. In this way we can monitor, for example, the concentration of sulfur oxide. The scheme for the application of a consistency measure in a conflict situation may look as follows:
• First we should define the universe of all possible opinions on some subject,
• Then we should determine a conflict profile on this subject,
• After this, we have to find a proper distance function and calculate the distances between the elements of the created profile,
• Next, we choose the most proper consistency measure, which depends on the postulates that we want to be satisfied,
• We use the chosen measure to calculate the consistency degree,
• Now, we can use this level in the decision process.
As a matter of fact, there are a lot of practical aspects of consistency measures. We can use consistency degrees in multiagent systems and in all kinds of information systems where knowledge is processed by autonomous programs; in distributed database systems, where data consistency is one of the key factors; and also in reasoning systems and many others.
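As an illustration of the first steps of this scheme (ours; the universe, distance and profile are those of the weather example above), the function c1 from Section 6 confirms that the profile X = {5∗yes, 5∗no} has consistency 0:

```python
# Application of the scheme above to the weather-forecast conflict profile,
# using the consistency function c1 from Section 6.
def c1(profile, d):
    # c1(X) = 1 - Diam(X), the diameter being the maximal pairwise distance
    return 1 - max(d(x, y) for x in profile for y in profile)

d = lambda x, y: 0.0 if x == y else 1.0   # universe {yes, no}, maximal distance 1
X = ['yes'] * 5 + ['no'] * 5
print(c1(X, d))   # 0.0: the profile is maximally inconsistent, so a consensus
                  # (yes or no) would not be a good solution of the conflict
```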

8 Conclusions

In this paper the concept of measuring consistency degrees of conflict profiles has been presented. The authors formulate the conditions (postulates) which should be satisfied by consistency functions. These postulates are independent of the structure of conflict profiles. Some consistency functions have been defined and analyzed with respect to the postulates. Future work should concern a deeper analysis of the presented postulates, which should allow one to choose appropriate consistency functions for concrete practical conflict situations. Besides, some implementations should be carried out to justify the usefulness of the introduced postulates and consistency functions.

References 1. Barthelemy, J.P., Janowitz M.F.: A Formal Theory of Consensus. SIAM J. Discrete Math. 4 (1991) 305-322. 2. Danilowicz, C., Nguyen, N.T., Jankowski, Ł.: Methods for selection of representation of agent knowledge states in multi-agent systems. Wroclaw University of Technology Press (2002) (in Polish).


3. Day, W.H.E.: Consensus Methods as Tools for Data Analysis. In: Bock, H.H. (ed.): Classification and Related Methods for Data Analysis. North-Holland (1988) 312-324.
4. Deja, R.: Using Rough Set Theory in Conflicts Analysis. Ph.D. Thesis (Advisor: A. Skowron), Institute of Computer Science, Polish Academy of Sciences, Warsaw (2000).
5. Eick, C.F., Werstein, P.: Rule-Based Consistency Enforcement for Knowledge-Based Systems. IEEE Transactions on Knowledge and Data Engineering 5 (1993) 52-64.
6. Helpern, J.Y., Moses, Y.: Knowledge and common knowledge in a distributed environment. Journal of the Association for Computing Machinery 37 (2001) 549-587.
7. Lipski, W., Marek, W.: Combinatorial Analysis. WNT, Warsaw (1986).
8. Loyer, Y., Spyratos, N., Stamate, D.: Integration of Information in Four-Valued Logics under Non-Uniform Assumption. In: Proceedings of the 30th IEEE International Symposium on Multiple-Valued Logic (2000) 180-193.
9. Malowiecki, M., Nguyen, N.T.: Consistency Measures of Agent Knowledge in Multiagent Systems. In: Proceedings of the 8th National Conference on Knowledge Engineering and Expert Systems, Wroclaw University of Technology Press, vol. 2 (2003) 245-252.
10. Neiger, G.: Simplifying the Design of Knowledge-based Algorithms Using Knowledge Consistency. Information & Computation 119 (1995) 283-293.
11. Neiger, G.: Knowledge Consistency: A Useful Suspension of Disbelief. In: Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge. Morgan Kaufmann, Los Altos, CA, USA (1988) 295-308.
12. Ng, K.Ch., Abramson, B.: Uncertainty Management in Expert Systems. IEEE Expert: Intelligent Systems and Their Applications (1990) 29-48.
13. Nguyen, N.T.: Using Distance Functions to Solve Representation Choice Problems. Fundamenta Informaticae 48(4) (2001) 295-314.
14. Nguyen, N.T.: Consensus Choice Methods and their Application to Solving Conflicts in Distributed Systems. Wroclaw University of Technology Press (2002) (in Polish).
15. Nguyen, N.T.: Consensus System for Solving Conflicts in Distributed Systems. Journal of Information Sciences 147 (2002) 91-122.
16. Nguyen, N.T., Sobecki, J.: Consensus versus Conflicts – Methodology and Applications. In: Proceedings of RSFDGrC 2003, Lecture Notes in Artificial Intelligence 2639 (2003) 565-572.
17. Nguyen, N.T.: Susceptibility to Consensus of Conflict Profiles in Consensus Systems. Bulletin of International Rough Sets Society 5(1/2) (2001) 217-224.
18. Pawlak, Z.: On Conflicts. International Journal of Man-Machine Studies 21 (1984) 127-134.
19. Pawlak, Z.: Anatomy of Conflicts. Bulletin of the EATCS 50 (1993) 234-246.
20. Pawlak, Z.: An Inquiry into Anatomy of Conflicts. Journal of Information Sciences 109 (1998) 65-78.
21. Skowron, A., Deja, R.: On Some Conflict Models and Conflict Resolution. Romanian Journal of Information Science and Technology 5(1-2) (2002) 69-82.
22. Skowron, A., Rauszer, C.: The Discernibility Matrices and Functions in Information Systems. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers (1992) 331-362.

Layered Learning for Concept Synthesis

Sinh Hoa Nguyen¹, Jan Bazan², Andrzej Skowron³, and Hung Son Nguyen³

¹ Polish-Japanese Institute of Information Technology, Koszykowa 86, 02-008 Warsaw, Poland
² Institute of Mathematics, University of Rzeszów, Rejtana 16A, 35-959 Rzeszów, Poland
³ Institute of Mathematics, Warsaw University, Banacha 2, 02-097 Warsaw, Poland
{hoa,bazan,skowron,son}@mimuw.edu.pl

Abstract. We present a hierarchical scheme for synthesis of concept approximations based on given data and domain knowledge. We also propose a solution, founded on rough set theory, to the problem of constructing the approximation of higher level concepts by composing the approximation of lower level concepts. We examine the eﬀectiveness of the layered learning approach by comparing it with the standard learning approach. Experiments are carried out on artiﬁcial data sets generated by a road traﬃc simulator. Keywords: Concept synthesis, hierarchical schema, layered learning, rough sets.

1 Introduction

Concept approximation is an important problem in data mining [10]. In a typical process of concept approximation we assume that there is given information consisting of values of conditional and decision attributes on objects from a finite subset (training set, sample) of the object universe, and on the basis of this information one should induce approximations of the concept over the whole universe. In many practical applications, this standard approach may show some limitations. Learning algorithms may go wrong if the following issues are not taken into account:
Hardness of Approximation: A target concept, being a composition of some simpler ones, is too complex and cannot be approximated directly from feature value vectors. The simpler concepts may be either approximated directly from data (by attribute values) or given as domain knowledge acquired from experts. For example, in the hand-written digit recognition problem, the raw input data are n × n images, where n ∈ [32, 1024] for typical applications. It is very hard to find an approximation of the target concept (digits) directly from the values of n² pixels (attributes). The most popular approach to this problem is based on defining some additional features, e.g., basic shapes or a skeleton graph. These features must be easily extractable from images, and they are used to describe the target concept.


Efficiency: The fact that a complex concept can be decomposed into simpler ones allows one to decrease the complexity of the learning process. Each component can be learned separately on a piece of the data set, and independent components can be learned in parallel. Moreover, dependencies between component concepts and their consequences can be approximated using domain knowledge and experimental data.
Expressiveness: Sometimes one can increase the readability of a concept description by introducing some additional concepts. The description is more understandable if it is expressed in natural language. For example, one can compare the readability of the following decision rules:
if car speed is high and the distance to the preceding car is small then the traffic situation is dangerous

if car speed(X) > 176.7 km/h and distance to front car(X) < 11.4 m then the traffic situation is dangerous

Layered learning [25] is an alternative approach to concept approximation. Given a hierarchical concept decomposition, the main idea is to synthesize a target concept gradually from simpler ones. One can imagine the decomposition hierarchy as a treelike structure (or acyclic graph structure) containing the target concept in the root. A learning process is performed through the hierarchy, from leaves to the root, layer by layer. At the lowest layer, basic concepts are approximated using feature values available from a data set. At the next layer more complex concepts are synthesized from basic concepts. This process is repeated for successive layers until the target concept is achieved. The importance of hierarchical concept synthesis is now well recognized by researchers (see, e.g., [15, 14, 12]). An idea of hierarchical concept synthesis, in the rough mereological and granular computing frameworks has been developed (see, e.g., [15, 17, 18, 21]) and problems related to approximation compound concept are discussed, e.g., in [18, 22, 5, 24]. In this paper we concentrate on concepts that are speciﬁed by decision classes in decision systems [13]. The crucial factor in inducing concept approximations is to create the concepts in a way that makes it possible to maintain the acceptable level of precision all along the way from basic attributes to ﬁnal decision. In this paper we discuss some strategies for concept composing founded on the rough set approach. We also examine eﬀectiveness of the layered learning approach by comparison with the standard rule-based learning approach. The quality of the new approach will be veriﬁed relative to the following criteria: generality of concept approximation, preciseness of concept approximation, computation time required for concept induction, and concept description lengths. Experiments are carried out on an artiﬁcial data set generated by a road traﬃc simulator.

2 Concept Approximation Problem

In many real-life situations we are not able to give an exact definition of a concept. For example, we frequently use adjectives such as "good", "nice", or "young" to describe some classes of people, but no one can give their exact


deﬁnition. The concept “young person” appears be easy to deﬁne by age, e.g., with the rule: if age(X) ≤ 30 then X is young, but it is very unnatural to explain that “Andy is not young because yesterday was his 30th birthday”. Such uncertain situations are caused either by the lack of information about the concept or by the richness of natural language. Let us assume that there exists a concept X deﬁned over the universe U of objects (X ⊆ U). The problem is to ﬁnd a description of the concept X, that can be expressed in a predeﬁned descriptive language L consisting of formulas that are interpretable as subsets of U. In general, the problem is to ﬁnd a description of a concept X in a language L (e.g., consisting of boolean formulae deﬁned over subset of attributes) assuming the concept is deﬁnable in another language L (e.g., natural language, or deﬁned by other attributes, called decision attributes). Inductive learning is one of the most important approaches to concept approximation. This approach assumes that the concept X is speciﬁed partially, i.e., values of characteristic function of X are given only for objects from a training sample U ⊆ U. Such information makes it possible to search for patterns in a given language L deﬁned on the training sample sets included (or suﬃciently included) into a given concept (or its complement). Observe that the approximations of a concept can not be deﬁned uniquely from a given sample of objects. The approximations of the whole concept X are induced from given information on a sample U of objects (containing some positive examples from X ∩ U and negative examples from U − X). Hence, the quality of such approximations should be veriﬁed on new testing objects. One should also consider uncertainty that may be caused by methods of object representation. Objects are perceived by some features (attributes). Hence, some objects become indiscernible with respect to these features. In practice, objects from U are perceived by means of vectors of attribute values (called information vectors or information signature). In this case, the language L consists of boolean formulas deﬁned over accessible attributes such that their values are eﬀectively measurable on objects. We assume that L is a set of formulas deﬁning subsets of U and boolean combinations of formulas from L are expressible in L. Due to bounds on expressiveness of language L in the universe U, we are forced to ﬁnd some approximate rather than exact description of a given concept. There are diﬀerent approaches to deal with uncertain and vague concepts like multi-valued logics, fuzzy set theory, or rough set theory. Using those approaches, concepts are deﬁned by “multi-valued membership function” instead of classical “binary (crisp) membership relations” (set characteristic functions). In particular, rough set approach oﬀers a way to establish membership functions that are data-grounded and signiﬁcantly diﬀerent from others. In this paper, the input data set is represented in a form of information system or decision system. An information system [13] is a pair S = (U, A), where U is a non-empty, ﬁnite set of objects and A is a non-empty, ﬁnite set, of attributes. Each a ∈ A corresponds to the function a : U → Va called an


evaluation function, where Va is called the value set of a. Elements of U can be interpreted as cases, states, patients, or observations. For a given information system S = (U, A), we associate with any non-empty set of attributes B ⊆ A the B-information signature of any object x ∈ U, defined by inf_B(x) = {(a, a(x)) : a ∈ B}. The set {inf_A(x) : x ∈ U} is called the A-information set and it is denoted by INF(S). The above formal definition of information systems is very general and it covers many different systems, such as database systems, or an information table, which is a two-dimensional array (matrix). In an information table, we usually associate its rows with objects (more precisely, with the information vectors of objects), its columns with attributes, and its cells with values of attributes. In supervised learning, objects from a training set are pre-classified into several categories or classes. To deal with this type of data we use special information systems called decision systems, which are information systems of the form S = (U, A, dec), where dec ∉ A is a distinguished attribute called the decision. The elements of the attribute set A are called conditions. In practice, decision systems contain a description of a finite sample U of objects from a larger (maybe infinite) universe U. Usually the decision attribute is the characteristic function of an unknown concept, or of concepts (in the case of several classes). The main problem of learning theory is to generalize the decision function (concept description), partially defined on the sample U, to the universe U. Without loss of generality for our considerations we assume that the domain Vdec of the decision dec is equal to {1, . . . , d}. The decision dec determines a partition U = CLASS_1 ∪ . . . ∪ CLASS_d of the universe U, where CLASS_k = {x ∈ U : dec(x) = k} is called the k-th decision class of S, for 1 ≤ k ≤ d.
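A minimal sketch (ours, with hypothetical attribute names and values) of a decision system S = (U, A, dec) and of the decision classes CLASS_k it determines:

```python
# A decision system S = (U, A, dec): objects described by condition attributes
# plus a distinguished decision attribute 'dec' with values in {1, ..., d}.
from collections import defaultdict

U = [
    {'speed': 'high', 'distance': 'small', 'dec': 1},   # hypothetical objects
    {'speed': 'low',  'distance': 'large', 'dec': 2},
    {'speed': 'high', 'distance': 'large', 'dec': 2},
]
A = ['speed', 'distance']          # conditions
def dec(x): return x['dec']        # decision

def decision_classes(U, dec):
    classes = defaultdict(list)    # CLASS_k = {x in U : dec(x) = k}
    for x in U:
        classes[dec(x)].append(x)
    return classes

print({k: len(v) for k, v in decision_classes(U, dec).items()})  # {1: 1, 2: 2}
```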

3 Concept Approximation Based on Rough Set Theory

Rough set methodology for concept approximation can be described as follows (see [5]).
Definition 1. Let X ⊆ U be a concept and let U ⊆ U be a finite sample of U. Assume that for any x ∈ U there is given information whether x ∈ X ∩ U or x ∈ U − X. A rough approximation of the concept X in a given language L (induced by the sample U) is any pair (L_L, U_L) satisfying the following conditions:
1. L_L ⊆ U_L ⊆ U,
2. L_L, U_L are expressible in the language L, i.e., there exist two formulas φ_L, φ_U ∈ L such that L_L = {x ∈ U : x satisfies φ_L} and U_L = {x ∈ U : x satisfies φ_U},
3. L_L ∩ U ⊆ X ∩ U ⊆ U_L ∩ U,
4. the set L_L (U_L) is maximal (minimal) in the family of sets definable in L satisfying (3).


The sets L_L and U_L are called the lower approximation and the upper approximation of the concept X ⊆ U, respectively. The set BN = U_L \ L_L is called the boundary region of the approximation of X. The set X is called rough with respect to its approximations (L_L, U_L) if L_L ≠ U_L; otherwise X is called crisp in U. The pair (L_L, U_L) is also called the rough set (for the concept X). Condition (3) in the above list can be substituted by inclusion to a degree, to make it possible to induce approximations of higher quality of the concept on the whole universe U. In practical applications the last condition in the above definition can be hard to satisfy. Hence, by using some heuristics we construct sub-optimal instead of maximal or minimal sets. Also, since during the process of approximation construction we only know U, it may be necessary to change the approximation after we gain more information about new objects from U. The rough approximation of a concept can also be defined by means of a rough membership function.
Definition 2. Let X ⊆ U be a concept and let U ⊆ U be a finite sample. A function f : U → [0, 1] is called a rough membership function of the concept X ⊆ U if and only if (L_f, U_f) is an approximation of X (induced from the sample U), where L_f = {x ∈ U : f(x) = 1} and U_f = {x ∈ U : f(x) > 0}.
Note that the proposed approximations are not defined uniquely from information about X on the sample U. They are obtained by inducing the approximations of the concept X ⊆ U from such information. Hence, the quality of the approximations should be verified on new objects, and information about classifier performance on new objects can be used to gradually improve the concept approximations. Parameterizations of rough membership functions corresponding to classifiers make it possible to discover new relevant patterns on the object universe extended by adding new (testing) objects. In the following sections we present illustrative examples of such parameterized patterns. By tuning the parameters of such patterns one can obtain patterns relevant for concept approximation on the training sample extended by some testing objects.

3.1 Case-Based Rough Approximations

For case-based reasoning methods, like the kNN (k nearest neighbors) classifier [1, 6], we define a distance (similarity) function between objects δ : U × U → [0, ∞). The problem of determining the distance function from a given data set is not trivial, but in this paper we assume that such a distance function has already been defined for all object pairs. In kNN classification methods (kNN classifiers), the decision for a new object x ∈ U − U is made on the basis of the decisions of the k objects from U that are nearest to x with respect to the distance function δ. Usually, k is a parameter defined by an expert or constructed automatically by experiments from data. Let us denote by NN(x; k) the set of k nearest neighbors of x, and by n_i(x) = |NN(x; k) ∩ CLASS_i| the number of objects from NN(x; k) that belong to the i-th decision class. The kNN classifiers often use a voting algorithm for decision making, i.e.,
dec(x) = Voting(⟨n_1(x), . . . , n_d(x)⟩) = arg max_i n_i(x),


In the case of imbalanced data, the vector ⟨n_1(x), . . . , n_d(x)⟩ can be scaled with respect to the global class distribution before applying the voting algorithm. A rough approximation based on the set NN(x; k), that is, an extension of a kNN classifier, can be defined as follows. Assume that 0 ≤ t1 < t2 < k and let us consider, for the i-th decision class CLASS_i ⊆ U, a function with parameters t1, t2 defined on any object x ∈ U by:

  μ_CLASSi^{t1,t2}(x) = 1                          if n_i(x) ≥ t2,
  μ_CLASSi^{t1,t2}(x) = (n_i(x) − t1)/(t2 − t1)    if n_i(x) ∈ (t1, t2),      (1)
  μ_CLASSi^{t1,t2}(x) = 0                          if n_i(x) ≤ t1,

where n_i(x) is the i-th coordinate in the class distribution ClassDist(NN(x; k)) = ⟨n_1(x), . . . , n_d(x)⟩ of NN(x; k). Let us assume that the parameters t1°, t2° have been chosen in such a way that the above function satisfies for every x ∈ U the following conditions:

  if μ_CLASSi^{t1°,t2°}(x) = 1 then [x]_A ⊆ CLASS_i ∩ U,                      (2)
  if μ_CLASSi^{t1°,t2°}(x) = 0 then [x]_A ∩ (CLASS_i ∩ U) = ∅,                (3)

where [x]_A = {y ∈ U : inf_A(x) = inf_A(y)} denotes the indiscernibility class defined by x relative to a fixed set of attributes A. Then the function μ_CLASSi^{t1°,t2°} considered on U can be treated as the rough membership function of the i-th decision class. It is the result of induction on U of the rough membership function of the i-th decision class restricted to the sample U. The function μ_CLASSi^{t1°,t2°} defines rough approximations L_kNN(CLASS_i) and U_kNN(CLASS_i) of the i-th decision class CLASS_i. For any object x ∈ U we have x ∈ L_kNN(CLASS_i) ⇔ n_i(x) ≥ t2° and x ∈ U_kNN(CLASS_i) ⇔ n_i(x) ≥ t1°. Certainly, one can consider in conditions (2)-(3) an inclusion to a degree and an equality to a degree instead of the crisp inclusion and the crisp equality. Such degrees additionally parameterize the extracted patterns, and by tuning them one can search for relevant patterns. As we mentioned above, kNN methods have some drawbacks. One of them is caused by the assumption that the distance function is defined a priori for all pairs of objects, which is not the case for many complex data sets. In the next section we present an alternative way to define rough approximations from data.
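A small sketch (ours; the training data, the distance function and the parameters t1, t2 are placeholders) of the membership function (1) computed from the class distribution of the k nearest neighbours:

```python
# Rough membership of the i-th decision class based on NN(x; k), following (1):
# mu = 1 if n_i(x) >= t2, 0 if n_i(x) <= t1, linear in between.
def knn_membership(x, i, training, dec, dist, k=5, t1=1, t2=4):
    # n_i(x): how many of the k nearest neighbours of x belong to class i
    neighbours = sorted(training, key=lambda y: dist(x, y))[:k]
    n_i = sum(1 for y in neighbours if dec(y) == i)
    if n_i >= t2:
        return 1.0
    if n_i <= t1:
        return 0.0
    return (n_i - t1) / (t2 - t1)

# Example with one numeric feature and two classes:
training = [(0.1, 1), (0.2, 1), (0.3, 1), (0.9, 2), (1.0, 2)]
print(knn_membership(0.15, 1, training,
                     dec=lambda y: y[1],
                     dist=lambda x, y: abs(x - y[0]),
                     k=3, t1=0, t2=3))   # 1.0: all 3 neighbours are in class 1
```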

3.2 Rule-Based Rough Approximations

In this section we describe the rule-based rough set approach to approximations. Let S = (U, A, dec) be a decision table. A decision rule for the k-th decision class is any expression of the form

  (a_i1 = v_1) ∧ ... ∧ (a_im = v_m) ⇒ (dec = k),                               (4)

where a_ij ∈ A and v_j ∈ V_{a_ij}. Any decision rule r of the form (4) can be characterized by the following parameters:




– length(r): the number of descriptors on the left hand side of the implication;
– [r]: the carrier of r, i.e., the set of objects satisfying the premise of r;
– support(r) = card([r] ∩ CLASS_k);
– confidence(r): introduced to measure the truth degree of the decision rule:

  confidence(r) = support(r) / card([r]).                                       (5)

The decision rule r is called consistent with S if confidence(r) = 1. Among the decision rule generation methods developed using the rough set approach, one of the most interesting is related to minimal consistent decision rules. Given a decision table S = (U, A, dec), the rule r is called a minimal consistent decision rule (with S) if it is consistent with S and any decision rule created from r by removing any of the descriptors from the left hand side of r is not consistent with S. The set of all minimal consistent decision rules for a given decision table S, denoted by Min_Cons_Rules(S), can be computed by extracting from the decision table object-oriented reducts (also called local reducts relative to objects) [3, 9, 26]. The elements of Min_Cons_Rules(S) can be treated as interesting, valuable and useful patterns in data and used as a knowledge base in classification systems. Unfortunately, the number of such patterns can be exponential with respect to the size of a given decision table [3, 9, 26, 23]. In practice, we must apply some heuristics, like rule filtering or object covering, for the selection of subsets of decision rules.
Given a decision table S = (U, A, dec), let us assume that RULES(S) is a set of decision rules induced by some rule extraction method. For any object x ∈ U, let MatchRules(S, x) be the set of rules from RULES(S) supported by x. One can define the rough membership function μ_CLASSk : U → [0, 1] for the concept determined by CLASS_k as follows:
1. Let R_yes(x) be the set of all decision rules from MatchRules(S, x) for the k-th class and let R_no(x) ⊂ MatchRules(S, x) be the set of decision rules for the other classes.
2. We define two real values w_yes(x), w_no(x), called the "for" and "against" weights for the object x, by:

  w_yes(x) = Σ_{r∈R_yes(x)} strength(r),   w_no(x) = Σ_{r∈R_no(x)} strength(r),   (6)

where strength(r) is a normalized function depending on length, support, conf idence of r and some global information about the decision table S like table size, class distribution (see [3]). 3. One can deﬁne the value of µCLASSk (x) by undetermined if max(wyes (x), wno (x)) < ω 0 if wno (x) − wyes (x) ≥ θ and wno (x) > ω µCLASSk (x) = 1 if wyes (x) − wno (x) ≥ θ and wyes (x) > ω θ+(wyes (x)−wno (x)) in other cases 2θ


where ω, θ are parameters set by the user. These parameters make it possible to control, in a flexible way, the size of the boundary region for the approximations established according to Definition 2.
Let us assume that for θ = θ_o > 0 the above function satisfies, for every x ∈ U, the following conditions:

   if µ^{θ_o}_{CLASS_k}(x) = 1 then [x]_A ⊆ CLASS_k ∩ U,    (7)

   if µ^{θ_o}_{CLASS_k}(x) = 0 then [x]_A ∩ (CLASS_k ∩ U) = ∅,    (8)

where [x]_A = {y ∈ U : inf_A(x) = inf_A(y)} denotes the indiscernibility class defined by x with respect to the set of attributes A.
Then the function µ^{θ_o}_{CLASS_k} considered on U^∞ can be treated as the rough membership function of the k-th decision class. It is the result of induction on U^∞ of the rough membership function of the k-th decision class restricted to the sample U. The function µ^{θ_o}_{CLASS_k} defines rough approximations L_rule(CLASS_k) and U_rule(CLASS_k) of the k-th decision class CLASS_k, where L_rule(CLASS_k) = {x : w_yes(x) − w_no(x) ≥ θ_o} and U_rule(CLASS_k) = {x : w_yes(x) − w_no(x) ≥ −θ_o}.
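A minimal sketch of the voting scheme defined by (6) and the four-case formula above; the rule representation, the function name and the example data are assumptions made only for illustration.

```python
# Hedged sketch of the rule-based rough membership function.
# Each rule is (premise_dict, decision_class, strength); w_yes / w_no
# aggregate strengths of matched rules voting for CLASS_k and for the
# other classes; mu follows the four-case definition with the user-set
# parameters omega and theta.
def rough_membership(x, rules, k, omega, theta):
    matched = [r for r in rules if all(x.get(a) == v for a, v in r[0].items())]
    w_yes = sum(s for (_, c, s) in matched if c == k)
    w_no = sum(s for (_, c, s) in matched if c != k)
    if max(w_yes, w_no) < omega:
        return None                                   # "undetermined"
    if w_no - w_yes >= theta and w_no > omega:
        return 0.0
    if w_yes - w_no >= theta and w_yes > omega:
        return 1.0
    return (theta + (w_yes - w_no)) / (2 * theta)     # boundary region

# Example: two rules for class "safe", one against it.
rules = [({"speed": "low"}, "safe", 0.6),
         ({"dist": "short"}, "unsafe", 0.5),
         ({"speed": "low", "dist": "long"}, "safe", 0.9)]
x = {"speed": "low", "dist": "short"}
print(rough_membership(x, rules, "safe", omega=0.1, theta=1.0))   # -> 0.55
```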

4 Hierarchical Scheme for Concept Synthesis

In this section we present a general layered learning scheme for concept synthesis. We recall the main principles of the layered learning paradigm [25].
1. Layered learning is designed for domains that are too complex for learning a mapping directly from the input to the output representation. The layered learning approach consists of breaking a problem down into several task layers. At each layer, a concept needs to be acquired. A learning algorithm solves the local concept-learning task.
2. Layered learning uses a bottom-up incremental approach to hierarchical concept decomposition. Starting with low-level concepts, the process of creating new sub-concepts continues until the high-level concepts, which deal with the full domain complexity, are reached. The appropriate learning granularity and the sub-concepts to be learned are determined as a function of the specific domain. Concept decomposition in layered learning is not automated. The layers and concept dependencies are given as background knowledge of the domain.
3. Sub-concepts may be learned independently and in parallel. Learning algorithms may be different for different sub-concepts in the decomposition hierarchy. Layered learning is effective for huge data sets and it is useful for adaptation when a training set changes dynamically.
4. The key characteristic of layered learning is that each learned layer directly affects learning at the next layer.
When using the layered learning paradigm, we assume that the target concept can be decomposed into simpler ones called sub-concepts. A hierarchy of


concepts has a treelike structure. A higher level concept is constructed from concepts at lower levels. We assume that a concept decomposition hierarchy is given by domain knowledge [18, 21]. However, one should observe that concepts and the dependencies among them represented in domain knowledge are often expressed in natural language. Hence, there is a need to approximate such concepts and such dependencies as well as the whole reasoning. This issue is directly related to the computing with words paradigm [27, 28] and to the rough-neural approach [12], in particular to rough mereological calculi on information granules (see, e.g., [15–19]).

Fig. 1. Hierarchical scheme for concept approximation (for a concept C_k at the l-th level: A_k denotes the attributes for learning C_k, U_k the training objects for learning C_k, and h_k the output of ALG_k)

The goal of a layered learning algorithm is to construct a scheme for concept composition. This scheme is a structure consisting of levels. Each level consists of concepts (C_0, C_1, ..., C_n). Each concept C_k is defined as a tuple

C_k = (U_k, A_k, O_k, ALG_k, h_k),    (9)

where (Figure 1):
– U_k is a set of objects used for learning the concept C_k,
– A_k is the set of attributes relevant for learning the concept C_k,
– O_k is the set of outputs used to define the concept C_k,
– ALG_k is the algorithm used for learning the function mapping vectors of values over A_k into O_k,
– h_k is the hypothesis returned by the algorithm ALG_k as a result of its run on the training set U_k.
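For illustration, the tuple (9) can be rendered as a plain container; the field names mirror the paper's symbols, but the class itself and its learn method are only an assumed sketch, not part of the original framework.

```python
# Illustrative container for a concept C_k = (U_k, A_k, O_k, ALG_k, h_k).
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Concept:
    U_k: List[Any]          # training objects for learning C_k
    A_k: List[str]          # attributes relevant for learning C_k
    O_k: List[str]          # outputs used to define C_k
    ALG_k: Callable         # learning algorithm
    h_k: Any = None         # hypothesis returned by ALG_k

    def learn(self):
        """Run ALG_k on (U_k, A_k, O_k) and store the resulting hypothesis."""
        self.h_k = self.ALG_k(self.U_k, self.A_k, self.O_k)
        return self.h_k
```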

The hypothesis h_k of the concept C_k at the current level directly affects the next level in the following ways:
1. h_k is used to construct a set of training examples U of a concept C at the next level, if C is a direct ancestor of C_k in the decomposition hierarchy.
2. h_k is used to construct a set of features A of a concept C at the next level, if C is a direct ancestor of C_k in the decomposition hierarchy.


To construct a layered learning algorithm, for any concept C_k in the concept decomposition hierarchy one must solve the following problems:
1. Define a set of training examples U_k used for learning C_k. The training sets at the lowest level are subsets of the input data set. The training set U_k at a higher level is composed from the training sets of the sub-concepts of C_k.
2. Define an attribute set A_k relevant for approximating the concept C_k. At the lowest level the attribute set A_k is a subset of the available attribute set. At higher levels the set A_k is created from the attribute sets of the sub-concepts of C_k, from an attribute set of the input data, and/or from newly created attributes. The attribute set A_k is chosen depending on the domain of the concept C_k.
3. Define an output set to describe the concept C_k.
4. Choose an algorithm to learn the concept C_k based on the given object set and on the defined attribute set.
In the next section we discuss in detail methods for concept synthesis. The foundation of our methods is rough set theory. We have already presented some preliminaries of rough set theory as well as parameterized methods for basic concept approximation. They are a generalization of existing rough set based methods. Let us describe strategies for composing concepts from sub-concepts.

4.1 Approximation of Compound Concept

We assume that a concept hierarchy H is given. A training set is represented by a decision table S = (U, A, D), where D is a set of decision attributes. Among them are decision attributes corresponding to all basic concepts and a decision attribute for the target concept. The decision values indicate whether an object belongs to the basic concepts or to the target concept, respectively. Using information available from the concept hierarchy, for each basic concept C_b one can create a training decision system S_Cb = (U, A_Cb, dec_Cb), where A_Cb ⊆ A and dec_Cb ∈ D. To approximate the concept C_b one can apply any classical method (e.g., k-NN, supervised clustering, or a rule-based approach [7, 11]) to the table S_Cb. For example, one can use the case-based reasoning approach presented in Section 3.1 or the rule-based reasoning approach proposed in Section 3.2 for basic concept approximation. In the further discussion we assume that basic concepts are approximated by rule-based classifiers derived from the relevant decision tables.
To avoid overly complicated notation, let us limit ourselves to the case of constructing a compound concept approximation on the basis of two simpler concept approximations. Assume we have two concepts C1 and C2 that are given to us in the form of rule-based approximations derived from decision systems S_C1 = (U, A_C1, dec_C1) and S_C2 = (U, A_C2, dec_C2). Hence, we are given two rough membership functions µ_C1(x), µ_C2(x). These functions are determined with use of the parameter sets {w^{C1}_yes, w^{C1}_no, ω^{C1}, θ^{C1}} and {w^{C2}_yes, w^{C2}_no, ω^{C2}, θ^{C2}}, respectively. We want to establish a similar set of parameters {w^C_yes, w^C_no, ω^C, θ^C} for the target concept C, which we want to describe with use of a rough membership function µ_C. As previously noted, the parameters ω, θ controlling the


boundary region are user-configurable. But we need to derive {w^C_yes, w^C_no} from data. The issue is to define a decision system from which rules used to define the approximations can be derived.
We assume that both the simpler concepts C1, C2 and the target concept C are defined over the same universe of objects U^∞. Moreover, all of them are given on the same sample U ⊂ U^∞. To complete the construction of the decision system S_C = (U, A_C, dec_C), we need to specify the conditional attributes from A_C and the decision attribute dec_C. The decision attribute value dec_C(x) is given for any object x ∈ U. For the conditional attributes, we assume that they are either rough membership functions for the simpler concepts (i.e., A_C = {µ_C1(x), µ_C2(x)}) or weights for the simpler concepts (i.e., A_C = {w^{C1}_yes, w^{C1}_no, w^{C2}_yes, w^{C2}_no}). The output set O_i for each concept C_i, where i = 1, 2, consists of one attribute, the rough membership function µ_Ci, in the first case, or of two attributes w^{Ci}_yes, w^{Ci}_no that describe the fitting degrees of objects to the concept C_i and to its complement, respectively, in the second case. The rule-based approximations of the concept C are created by extracting rules from S_C.
It is important to observe that such rules describing C use attributes that are in fact classifiers themselves. Therefore, in order to have a more readable and intuitively understandable description as well as more control over the quality of approximation (especially for new cases), it pays to stratify and interpret the attribute domains for attributes in A_C. Instead of using just a value of a membership function or weight, we would prefer to use linguistic statements such as "the likelihood of the occurrence of C1 is low". In order to do that we have to map the attribute value sets onto some limited family of subsets. Such subsets are then identified with notions such as "certain", "low", "high", etc. It is quite natural, especially in the case of attributes being membership functions, to introduce linearly ordered subsets of the attribute ranges, e.g., {negative, low, medium, high, positive}. That yields a fuzzy-like layout of attribute values. One may (and in some cases should) also consider the case when these subsets overlap. Then, more than one linguistic value, e.g., low and medium, may be attached to an attribute value.
Stratification of attribute values and introduction of a linguistic variable attached to the strata serves multiple purposes. First, it provides a way of representing knowledge in a more human-readable format, since if we have a new situation (a new object x* ∈ U^∞ \ U) to be classified (checked against compliance with concept C), we may use rules like:
If compliance of x* with C1 is high or medium and compliance of x* with C2 is high then x* ∈ C.
Another advantage of imposing the division of attribute value sets lies in the extended control over the flexibility and validity of the system constructed in this way. As we may define the linguistic variables and the corresponding intervals, we gain the ability to make the system more stable and inductively correct. In this way we control the general layout of the boundary regions for the simpler concepts that contribute to the construction of the target concept. The process of setting the intervals for attribute values may be performed by hand, especially when additional back-


ground information about the nature of the described problem is available. One may also rely on some automated methods for such interval construction, such as clustering, template analysis, or discretization. Some extended discussion of the foundations of this approach, which is related to rough-neural computing [12, 18] and computing with words, can be found in [24, 20].
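A small sketch of such a stratification, assuming hand-picked cut points; in practice the intervals could come from clustering or discretization as mentioned above.

```python
# Illustrative stratification of a membership value into linguistic labels;
# the cut points below are arbitrary assumptions chosen by hand.
def stratify(mu, cuts=((0.2, "negative"), (0.4, "low"),
                       (0.6, "medium"), (0.8, "high"), (1.01, "positive"))):
    """Map a membership value mu in [0, 1] to a linguistic label."""
    for upper, label in cuts:
        if mu < upper:
            return label
    return cuts[-1][1]

# Example rule over stratified attributes:
#   if stratify(mu_C1(x)) in {"high", "medium"} and stratify(mu_C2(x)) == "high"
#   then x is classified to C.
print(stratify(0.75))   # -> "high"
```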

Algorithm 1 Layered learning algorithm
Input: Decision system S = (U, A, d), concept hierarchy H;
Output: Scheme for concept composition
1: begin
2: for l := 0 to max_level do
3:   for (any concept C_k at the level l in H) do
4:     if l = 0 then
5:       U_k := U;
6:       A_k := B;   // where B ⊆ A is a set relevant to define C_k
7:     else
8:       U_k := U;
9:       A_k := ∪_i O_i;   // for all sub-concepts C_i of C_k, where O_i is the output vector of C_i
10:      Generate a rule set RULE(C_k) to determine the approximation of C_k;
11:      for any object x ∈ U_k do
12:        generate the output vector (w^{C_k}_yes(x), w^{C_k}_no(x));
             // where w^{C_k}_yes(x) is a fitting degree of x to the concept C_k
             // and w^{C_k}_no(x) is a fitting degree of x to the complement of C_k
13:      end for
14:    end if
15:  end for
16: end for
17: end

Algorithm 1 is the layered learning algorithm used in our experiments.
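The following Python sketch mirrors the control flow of Algorithm 1 under simplifying assumptions: hierarchy[l] maps each concept of level l to its list of sub-concepts, and induce_rules / fit are placeholders for the rule-generation and voting steps (none of these names come from the paper).

```python
# Hedged rendering of the layered learning loop.
def layered_learning(objects, hierarchy, base_attrs, induce_rules, fit):
    """hierarchy: list of levels; each level maps a concept name to its sub-concepts."""
    outputs = {}                                   # concept -> {object index: (w_yes, w_no)}
    for level, concepts in enumerate(hierarchy):
        for name, subs in concepts.items():
            if level == 0:
                # features: the raw attributes B relevant for this basic concept
                table = [{a: obj[a] for a in base_attrs[name]} for obj in objects]
            else:
                # features: the output vectors of the sub-concepts from lower levels
                table = [{s: outputs[s][i] for s in subs} for i in range(len(objects))]
            rules = induce_rules(table, name)          # RULE(C_k)
            outputs[name] = {i: fit(rules, row) for i, row in enumerate(table)}
    return outputs
```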

5 Experimental Results

To verify the effectiveness of the layered learning approach, we have implemented Algorithm 1 for concept composition as presented in Section 4.1. The experiments were performed on data generated by a road traffic simulator. In the following section we present a description of the simulator.

5.1 Road Traffic Simulator

The road simulator is a computer tool that generates data sets consisting of recordings of vehicle movements on the roads and at the crossroads. Such data sets are used to learn and test complex concept classifiers working on information coming from different devices and sensors monitoring the situation on the road.


Fig. 2. Left: the board of simulation.

A driving simulation takes place on a board (see Figure 2) that presents a crossroads together with the access roads. During the simulation the vehicles may enter the board from all four directions, that is, east, west, north and south. The vehicles coming to the crossroads from the south and north have the right of way in relation to the vehicles coming from the west and east. Each of the vehicles entering the board has only one goal: to drive through the crossroads safely and leave the board. Both the entering and exiting roads of a given vehicle are determined at the beginning, that is, at the moment the vehicle enters the board.
Each vehicle may perform the following maneuvers during the simulation: passing, overtaking, changing direction (at the crossroads), changing lane, entering the traffic from the minor road into the main road, stopping, and pulling out. Planning of each vehicle's further steps takes place independently in each step of the simulation. Each vehicle, "observing" the surrounding situation on the road and keeping in mind its destination and its own parameters, makes an independent decision about its further steps: whether it should accelerate or decelerate, and what (if any) maneuver should be commenced, continued, or stopped.
We associate the simulation parameters with the readouts of different measuring devices or technical equipment placed inside the vehicle or in the outside environment (e.g., by the road, in a helicopter observing the situation on the road, in a police car). These devices and equipment play the role of detecting devices or converters, i.e., sensors (e.g., a thermometer, range finder, video camera, radar, image and sound converter). The attributes taking the simulation parameter values will, by analogy to the devices providing these values, be called sensors. Exemplary sensors are: distance from the crossroads (in screen units), vehicle speed, acceleration and deceleration, etc.


Apart from sensors, the simulator registers a few more attributes, whose values are determined on the basis of the sensors' values in a way specified by an expert. These parameters in the present simulator version take binary values and are therefore called concepts. Concept definitions are very often in the form of a question to which one can answer YES, NO or DOES NOT CONCERN (NULL value). In Figure 3 there is an exemplary relationship diagram for some concepts that are used in our experiments.

Fig. 3. The relationship diagram for exemplary concepts

During the simulation, when a new vehicle appears on the board, its so-called driver's profile is determined and may not be changed until the vehicle disappears from the board. It may take one of the following values: a very careful driver, a careful driver, and a careless driver. The driver's profile is the identity of the driver, and according to this identity further decisions as to the way of driving are made. Depending on the driver's profile and the weather conditions, speed limits are determined which cannot be exceeded. The humidity of the road influences the length of the braking distance, since depending on humidity different speed changes take place within one simulation step with the same braking mode. The driver's profile also influences the speed limits dictated by visibility. If another vehicle is invisible to a given vehicle, it is not taken into consideration in the independent planning of further driving by the given car. Because this may cause dangerous situations, there are speed limits for the vehicle that depend on the driver's profile.
During the simulation, data may be generated and stored in a text file. The generated data are in the form of an information table. Each line of the table depicts the situation of a single vehicle, and the sensors' and concepts' values are registered for the given vehicle and its neighboring vehicles. Within each simulation step, descriptions of the situations of all the vehicles are saved to the file.

5.2 Experiment Description

A number of different data sets have been created with the road traffic simulator. They are named cxx_syyy, where xx is the number of cars and yyy is the number of time units of the simulation process. The following data sets have been generated for our experiments: c10_s100, c10_s200, c10_s300, c10_s400, c10_s500, c20_s500. Let us emphasize that the first data set consists of about 800 situations, whereas the last data set is the largest one that can be generated by the simulator; it consists of about 10000 situations. Every data set has 100 attributes and an imbalanced class distribution, i.e., about 6% ± 2% of the situations are unsafe. Every data set cxx_syyy was divided randomly into two subsets cxx_syyy.trn and cxx_syyy.test with proportions of 80% and 20%, respectively. The data sets of the form cxx_syyy.trn are used in learning the concept approximations. We consider two testing models, called testing for similar situations and testing for new situations. They are described as follows:
Model I: Testing for similar situations. This model uses the data sets of the form cxx_syyy.test for testing the quality of the approximation algorithms. The situations used in this testing model are generated from the same simulation process as the training situations.
Model II: Testing for new situations. This model uses data from a new simulation process. In this model, we create new data sets using the simulator. They are named c10_s100N, c10_s200N, c10_s300N, c10_s400N, c10_s500N, c20_s500N, respectively.
We compare the quality of two learning approaches, called RS rule-based learning (RS) and RS-layered learning (RS-L). In the first approach, we employed the RSES system [4] to generate the set of minimal decision rules and classified the situations from the testing data. Conflicts are resolved by a simple voting strategy. The comparison analysis is performed with respect to the following criteria:
1. accuracy of classification,
2. covering rate for new cases (generality),
3. computing time necessary for classifier synthesis, and
4. size of the rule set used for target concept approximation.

In the layered learning approach, from the training table we create five sub-tables to learn five basic concepts (see Figure 3): C1: "safe distance from FL during overtaking," C2: "possibility of safe stopping before crossroads," C3: "possibility of going back to the right lane," C4: "safe distance from preceding car," C5: "forcing the right of way." These tables are created using information available from the concept decomposition hierarchy. A concept at the next level is C6: "safe overtaking". C6 is located over the concepts C1, C2 and C3 in the concept decomposition hierarchy. To approximate the concept C6, we create a table with three conditional attributes. These attributes describe the fitting degrees of objects to the concepts C1, C2,


C3, respectively. The decision attribute has three values, YES, NO, or NULL, corresponding to the cases of overtaking made by the car: safe, not safe, not applicable. The target concept C7: "safe driving" is located at the third level of the concept decomposition hierarchy. The concept C7 is obtained by composition from the concepts C4, C5 and C6. To approximate C7 we also create a decision table with three attributes, representing the fitting degrees of objects to the concepts C4, C5, C6, respectively. The decision attribute has two possible values, YES or NO, depending on whether a car satisfies the global safety condition or not.

Classification Accuracy. As we mentioned before, the decision class "safe driving = YES" is dominating in all training data sets. It covers over 90% of the training sets. The sets of training examples belonging to the "NO" class are small relative to the training set size. Searching for an approximation of the "NO" class with high precision and generality is a challenge for learning algorithms. In the experiments we concentrate on the approximation of the "NO" class.
In Table 1 we present the classification accuracy of the RS and RS-L classifiers for the first of the testing models. It means that the training sets and test sets are disjoint and the samples are chosen from the same simulation data set.

Table 1. Classification accuracy for the first testing model

Testing model I   Total accuracy    Accuracy of YES    Accuracy of NO
                  RS      RS-L      RS      RS-L       RS      RS-L
c10_s100          0.98    0.93      0.99    0.98       0.67    0
c10_s200          0.99    0.99      1       0.99       0.90    1
c10_s300          0.99    0.96      0.99    0.96       0.82    0.81
c10_s400          0.99    0.97      0.99    0.98       0.88    0.85
c10_s500          0.99    0.94      0.99    0.93       0.94    0.96
c20_s500          0.99    0.93      0.99    0.94       0.91    0.91
Average           0.99    0.95      0.99    0.96       0.85    0.75

One can observe that the classification accuracy in testing model I is higher, because the test and training sets are chosen from the same data set. Although the accuracy on the "YES" class is better than on the "NO" class, the accuracy on the "NO" class is quite satisfactory. In these experiments, the standard classifier shows slightly better performance than the hierarchical classifier. One can also observe that when the training sets reach a sufficient size (over 2500 objects), the accuracy on the "NO" class of both classifiers is comparable.
To verify whether the classifier approximations are of high precision and generality, we use the second testing model, where the training tables and testing tables are chosen from newly generated simulation data sets. One can observe that the accuracy on the "NO" class strongly decreases. In this case the hierarchical classifier shows much better performance. In Table 2 we present the accuracy of the standard classifier and the hierarchical classifier using the second testing model.


Table 2. Classification accuracy for the second testing model

Testing model II   Total accuracy    Accuracy of YES    Accuracy of NO
                   RS      RS-L      RS      RS-L       RS      RS-L
c10_s100N          0.94    0.97      1       1          0       0
c10_s200N          0.99    0.96      1       0.98       0.75    0.60
c10_s300N          0.99    0.98      1       0.98       0       0.78
c10_s400N          0.96    0.77      0.96    0.77       0.57    0.64
c10_s500N          0.96    0.89      0.99    0.90       0.30    0.80
c20_s500N          0.99    0.89      0.99    0.88       0.44    0.93
Average            0.97    0.91      0.99    0.92       0.34    0.63

Covering Rate. The generality of classifiers is usually evaluated by the ability to recognize unseen objects. In this section we analyze the covering rate of the classifiers for new objects. In Table 3 we present the coverage degrees using the first testing model. One can observe that the coverage degrees of the standard and hierarchical classifiers are comparable in this case.

Table 3. Covering rate for the first testing model

Testing model I   Total coverage    Coverage of YES    Coverage of NO
                  RS      RS-L      RS      RS-L       RS      RS-L
c10_s100          0.97    0.96      0.98    0.96       0.85    1
c10_s200          0.95    0.95      0.96    0.96       0.67    0.80
c10_s300          0.94    0.93      0.97    0.95       0.59    0.55
c10_s400          0.96    0.94      0.96    0.94       0.91    0.87
c10_s500          0.96    0.95      0.97    0.96       0.84    0.86
c20_s500          0.93    0.97      0.94    0.98       0.79    0.92
Average           0.95    0.95      0.96    0.96       0.77    0.83

We also examined the coverage degrees using the second testing model. We obtained a scenario similar to that for the accuracy degree: the coverage rate for both decision classes strongly decreases. Again, the hierarchical classifier proves to be more stable than the standard classifier. The results are presented in Table 4.

Computing Speed. The computation time necessary for concept approximation synthesis is one of the important features of learning algorithms. The quality of a learning approach should be assessed not only by the quality of the classifier. In many real-life situations it is necessary not only to make precise decisions but also to learn classifiers in a short time. The layered learning approach shows a tremendous advantage in comparison with the standard learning approach with respect to computation time.
In the case of the standard classifier, the computational time is measured as the time required for computing the rule set used for decision class approximation.

Table 4. Covering rate for the second testing model

Testing model II   Total coverage    Coverage of YES    Coverage of NO
                   RS      RS-L      RS      RS-L       RS      RS-L
c10_s100N          0.44    0.72      0.44    0.74       0.50    0.38
c10_s200N          0.72    0.73      0.73    0.74       0.50    0.63
c10_s300N          0.47    0.68      0.49    0.69       0.10    0.44
c10_s400N          0.74    0.90      0.76    0.93       0.23    0.35
c10_s500N          0.72    0.86      0.74    0.88       0.40    0.69
c20_s500N          0.62    0.89      0.65    0.89       0.17    0.86
Average            0.62    0.79      0.64    0.81       0.32    0.55

Table 5. Time for standard and hierarchical classifier generation

Table names   RS        RS-L     Speed up ratio
c10_s100      94 s      2.3 s    40
c10_s200      714 s     6.7 s    106
c10_s300      1450 s    10.6 s   136
c10_s400      2103 s    34.4 s   60
c10_s500      3586 s    38.9 s   92
c20_s500      10209 s   98 s     104
Average                          90

In the case of the hierarchical classifier, the computational time is equal to the total time required for the approximation of all sub-concepts and of the target concept. The experiments were performed on a computer with an AMD Athlon 1.4 GHz processor. One can see in Table 5 that the speed-up ratio of the layered learning approach over the standard one ranges from 40 to more than 130.

Description Size. Now, we consider the complexity of the concept description. We approximate concepts using decision rule sets. The size of a rule set is characterized by the rule lengths and its cardinality. In Table 6 we present the rule lengths and the number of decision rules generated by the standard learning approach. One can observe that the rules generated by the standard approach are quite long; they contain over 40 descriptors on average.

Table 6. Rule set size for the standard learning approach

Tables     Rule length   # Rules
c10_s100   34.1          12
c10_s200   39.1          45
c10_s300   44.7          94
c10_s400   42.9          85
c10_s500   47.6          132
c20_s500   60.9          426
Average    44.9


Table 7. Description length: C1, C2, C3 for the hierarchical learning approach

           Concept C1             Concept C2             Concept C3
Tables     Ave. rule l.  # Rules  Ave. rule l.  # Rules  Ave. rule l.  # Rules
c10_s100   5.0           10       5.3           22       4.5           22
c10_s200   5.1           16       4.5           27       4.6           41
c10_s300   5.2           18       6.6           61       4.1           78
c10_s400   7.3           47       7.2           131      4.9           71
c10_s500   5.6           21       7.5           101      4.7           87
c20_s500   6.5           255      7.7           1107     5.8           249
Average    5.8                    6.5                    4.8

Table 8. Description length: C4, C5 for the hierarchical learning approach

           Concept C4             Concept C5
Tables     Rule length   # Rules  Rule length   # Rules
c10_s100   4.5           22       1.0           2
c10_s200   4.6           42       4.7           14
c10_s300   5.2           90       3.4           9
c10_s400   6.0           98       4.7           16
c10_s500   5.8           146      4.9           15
c20_s500   5.4           554      5.3           25
Average    5.2                    4.0

Table 9. Description length: C6, C7, hierarchical learning approach

           Concept C6             Concept C7
Tables     Rule length   # Rules  Rule length   # Rules
c10_s100   2.2           6        3.5           8
c10_s200   1.3           3        3.7           13
c10_s300   2.4           7        3.6           18
c10_s400   2.5           11       3.7           27
c10_s500   2.6           8        3.7           30
c20_s500   2.9           16       3.8           35
Average    2.3                    3.7

The sizes of the rule sets generated by the layered learning approach are presented in Tables 7, 8 and 9. One can notice that the rules approximating sub-concepts are short. The average rule length is from 4 to 6.5 for the basic concepts and from 2 to 3.7 for the super-concepts. Therefore, the rules generated by the layered learning approach are more understandable and easier to interpret than the rules induced by the standard learning approach. Two concepts, C2 and C4, are more complex than the others: the rule set induced for C2 takes 28%, and the rule set induced for C4 above 27%, of the number of rules generated for all seven concepts in the road traffic problem.

6 Conclusion

We presented a method for concept synthesis based on the layered learning approach. Unlike the traditional learning approach, in the layered learning approach the concept approximations are induced not only from the available data sets but also from the expert's domain knowledge. In this paper, we assume that this knowledge is represented by a concept dependency hierarchy. The layered learning approach proved to be promising for complex concept synthesis. Experimental results with the road traffic simulation show the advantages of this new approach in comparison to the standard learning approach. The main advantages of the layered learning approach can be summarized as follows:
1. High precision of concept approximation.
2. High generality of concept approximation.
3. Simplicity of concept description.
4. High computational speed.
5. Possibility of localization of sub-concepts that are difficult to approximate. This is important information, because it specifies a task on which we should concentrate in order to improve the quality of the target concept approximation.

In the future we plan to investigate more advanced approaches to concept composition. One interesting possibility is to use patterns defined by rough approximations of concepts, defined by different kinds of classifiers, in the synthesis of compound concepts. We would also like to develop methods for rough-fuzzy classifier synthesis (see Section 4.1). In particular, the method mentioned in Section 4.1 based on rough-fuzzy classifiers introduces more flexibility for such composition, because a richer class of patterns introduced by the different layers of rough-fuzzy classifiers can lead to improving the classifier quality [18]. On the other hand, such a process is more complex, and efficient heuristics for the synthesis of rough-fuzzy classifiers should be developed. We also plan to apply the layered learning approach to real-life problems, especially when domain knowledge is specified in natural language. This can make further links with the computing with words paradigm [27, 28, 12]. This is in particular linked with the rough mereological approach (see, e.g., [15, 17]) and with the rough set approach to approximate reasoning in distributed environments [20, 21], in particular with methods of information system composition [20, 2].

Acknowledgements
The research has been partially supported by grant 3T11C00226 from the Ministry of Scientific Research and Information Technology of the Republic of Poland.

References
1. Aha, D.W.: The omnipresence of case-based reasoning in science and application. Knowledge-Based Systems 11(5-6) (1998) 261–273


2. Barwise, J., Seligman, J., eds.: Information Flow: The Logic of Distributed Systems. Volume 44 of Tracts in Theoretical Computer Science. Cambridge University Press, Cambridge, UK (1997)
3. Bazan, J.G.: A comparison of dynamic and non-dynamic rough set methods for extracting laws from decision tables. In Polkowski, L., Skowron, A., eds.: Rough Sets in Knowledge Discovery 1: Methodology and Applications. Physica-Verlag, Heidelberg, Germany (1998) 321–365
4. Bazan, J.G., Szczuka, M.: RSES and RSESlib - a collection of tools for rough set computations. In Ziarko, W., Yao, Y., eds.: Second International Conference on Rough Sets and Current Trends in Computing RSCTC. LNAI 2005, Banff, Canada, Springer-Verlag (2000) 106–113
5. Bazan, J., Nguyen, H.S., Skowron, A., Szczuka, M.: A view on rough set concept approximation. In Wang, G., Liu, Q., Yao, Y., Skowron, A., eds.: Proceedings of the Ninth International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing (RSFDGrC'2003), Chongqing, China. LNAI 2639, Heidelberg, Germany, Springer-Verlag (2003) 181–188
6. Cover, T.M., Hart, P.E.: Nearest neighbor pattern classification. IEEE Transactions on Information Theory 13 (1967) 21–27
7. Friedman, J., Hastie, T., Tibshirani, R.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, Heidelberg, Germany (2001)
8. Grzymala-Busse, J.: A new version of the rule induction system LERS. Fundamenta Informaticae 31(1) (1997) 27–39
9. Komorowski, J., Pawlak, Z., Polkowski, L., Skowron, A.: Rough sets: a tutorial. In Pal, S.K., Skowron, A., eds.: Rough Fuzzy Hybridization: A New Trend in Decision-Making. Springer-Verlag, Singapore (1999) 3–98
10. Kloesgen, W., Żytkow, J., eds.: Handbook of Knowledge Discovery and Data Mining. Oxford University Press, Oxford (2002)
11. Mitchell, T.: Machine Learning. McGraw-Hill (1998)
12. Pal, S.K., Polkowski, L., Skowron, A., eds.: Rough-Neural Computing: Techniques for Computing with Words. Cognitive Technologies. Springer-Verlag, Heidelberg, Germany (2003)
13. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Volume 9 of System Theory, Knowledge Engineering and Problem Solving. Kluwer Academic Publishers, Dordrecht, The Netherlands (1991)
14. Poggio, T., Smale, S.: The mathematics of learning: Dealing with data. Notices of the AMS 50 (2003) 537–544
15. Polkowski, L., Skowron, A.: Rough mereology: A new paradigm for approximate reasoning. International Journal of Approximate Reasoning 15 (1996) 333–365
16. Polkowski, L., Skowron, A.: Rough mereological calculi of granules: A rough set approach to computation. Computational Intelligence 17 (2001) 472–492
17. Polkowski, L., Skowron, A.: Towards adaptive calculus of granules. In Zadeh, L.A., Kacprzyk, J., eds.: Computing with Words in Information/Intelligent Systems, Heidelberg, Germany, Physica-Verlag (1999) 201–227
18. Skowron, A., Stepaniuk, J.: Information granules and rough-neural computing. In [12], 43–84
19. Skowron, A., Stepaniuk, J.: Information granules: Towards foundations of granular computing. International Journal of Intelligent Systems 16 (2001) 57–86
20. Skowron, A., Stepaniuk, J.: Information granule decomposition. Fundamenta Informaticae 47(3-4) (2001) 337–350


21. Skowron, A.: Approximate reasoning by agents in distributed environments. In Zhong, N., Liu, J., Ohsuga, S., Bradshaw, J., eds.: Intelligent Agent Technology Research and Development: Proceedings of the 2nd Asia-Pacific Conference on Intelligent Agent Technology IAT01, Maebashi, Japan, October 23-26. World Scientific, Singapore (2001) 28–39
22. Skowron, A.: Approximation spaces in rough neurocomputing. In Inuiguchi, M., Tsumoto, S., Hirano, S., eds.: Rough Set Theory and Granular Computing. Volume 125 of Studies in Fuzziness and Soft Computing. Springer-Verlag, Heidelberg, Germany (2003) 13–22
23. Skowron, A., Rauszer, C.: The discernibility matrices and functions in information systems. In Słowiński, R., ed.: Intelligent Decision Support - Handbook of Applications and Advances of the Rough Sets Theory. Volume 11 of D: System Theory, Knowledge Engineering and Problem Solving. Kluwer Academic Publishers, Dordrecht, Netherlands (1992) 331–362
24. Skowron, A., Szczuka, M.: Approximate reasoning schemes: Classifiers for computing with words. In: Proceedings of SMPS 2002. Advances in Soft Computing, Heidelberg, Germany, Springer-Verlag (2002) 338–345
25. Stone, P.: Layered Learning in Multi-Agent Systems: A Winning Approach to Robotic Soccer. The MIT Press, Cambridge, MA (2000)
26. Wróblewski, J.: Covering with reducts - a fast algorithm for rule generation. In Polkowski, L., Skowron, A., eds.: Proceedings of the First International Conference on Rough Sets and Current Trends in Computing (RSCTC'98), Warsaw, Poland. LNAI 1424, Heidelberg, Germany, Springer-Verlag (1998) 402–407
27. Zadeh, L.A.: Fuzzy logic = computing with words. IEEE Transactions on Fuzzy Systems 4 (1996) 103–111
28. Zadeh, L.A.: A new direction in AI: Toward a computational theory of perceptions. AI Magazine 22 (2001) 73–84

Basic Algorithms and Tools for Rough Non-deterministic Information Analysis

Hiroshi Sakai and Akimichi Okuma
Department of Computer Engineering, Kyushu Institute of Technology
Tobata, Kitakyushu 804, Japan
[email protected]

Abstract. Rough non-deterministic information analysis is a framework for handling, on computers, the rough sets based concepts which are defined not only in DISs (Deterministic Information Systems) but also in NISs (Non-deterministic Information Systems). NISs were proposed for dealing with information incompleteness in DISs. In this paper, two modalities, i.e., the certainty and the possibility, are defined for each concept, like the definability of a set, the consistency of an object, data dependency, rule generation, reduction of attributes, and criteria of rules (support, accuracy and coverage). Then, an algorithm for computing the two modalities is investigated for each concept. An important problem is how to compute the two modalities depending upon all derived DISs. A simple method, such that the two modalities are sequentially computed in all derived DISs, is not suitable, because the number of all derived DISs increases in exponential order. This problem is uniformly solved by means of applying either inf and sup information or possible equivalence relations. An information analysis tool for NISs is also presented.

1 Introduction

Rough set theory offers a new mathematical approach to vagueness and uncertainty, and the rough sets based concepts have been recognized to be very useful [1,2,3,4]. This theory usually handles tables with deterministic information, which we call Deterministic Information Systems (DISs). Many applications of this theory to data mining, rule generation, machine learning and knowledge discovery have been investigated [5–11].
Non-deterministic Information Systems (NISs) and Incomplete Information Systems have been proposed for handling information incompleteness in DISs, like null values, unknown values, missing values, etc. [12–16]. For any NIS, we usually suppose that there exists a DIS with unknown real information in the set of all derived DISs. Let DIS_real denote this deterministic information system from the NIS. Of course, it is impossible to know DIS_real itself without additional information. However, if a formula α holds in every derived DIS from a NIS, α also holds in DIS_real. This formula α is not influenced by the information incompleteness in the NIS. If a formula α holds in some derived DISs


from a NIS, there exists a possibility that α holds in DIS_real. We call the former the certainty (of the formula α for DIS_real) and the latter the possibility, respectively. In NISs, these two modalities for DIS_real have been employed, and several works on logic in NISs have been studied [12,14,15,17].
Very little work deals with algorithms for handling NISs on computers. In [15,16], Lipski showed a question-answering system besides an axiomatization of logic. In [18,19], Grzymala-Busse surveyed the unknown attribute values, and studied learning from examples with unknown attribute values. In [20,21,22], Kryszkiewicz investigated rules in incomplete information systems. These are the most important works for handling information incompleteness in DISs on computers. This paper follows these two modalities for DIS_real, and focuses on the following issues:
(1) The definability of a set in NISs and an algorithm for handling it on computers.
(2) The consistency of an object in NISs and an algorithm for handling it on computers.
(3) Data dependency in NISs and an algorithm for handling it on computers.
(4) Rules in NISs and an algorithm for handling them on computers.
(5) Reduction of attributes in NISs and an algorithm for handling it on computers.
An important problem is how to compute the two modalities depending upon all derived DISs from a NIS. A simple method, such that every definition is sequentially computed in all derived DISs from a NIS, is not suitable, because the number of derived DISs from a NIS increases in exponential order. This problem is uniformly solved in the subsequent sections by means of applying either inf and sup information or possible equivalence relations.
In the Preliminary section, definitions in DISs and rough sets based concepts are surveyed. Then, the algorithms for the five issues are sequentially examined. Tool programs for these issues have also been implemented, and they are presented in the appendixes.

2 Preliminary

This section surveys some definitions in DISs, and connects these definitions with equivalence relations.

2.1 Some Definitions in DISs

A Deterministic Information System (DIS) is a quadruplet (OB, AT, {VAL_A | A ∈ AT}, f), where OB is a finite set whose elements are called objects, AT is a finite set whose elements are called attributes, VAL_A is a finite set whose elements are called attribute values, and f is a mapping f : OB × AT → ∪_{A∈AT} VAL_A which is called a classification function. For ATR = {A_1, ..., A_n} ⊆ AT, we call (f(x, A_1), ..., f(x, A_n)) a tuple (for ATR) of x ∈ OB. If f(x, A) = f(y, A) holds for every A ∈ ATR ⊆ AT, we see there is a relation between x and y for ATR. This relation is an equivalence


relation over OB. Let eq(ATR) denote this equivalence relation, and let [x]_ATR ∈ eq(ATR) denote the equivalence class {y ∈ OB | f(y, A) = f(x, A) for every A ∈ ATR}. Now, let us show some rough sets based concepts defined in DISs [1,3].
(D-i) The Definability of a Set: If a set X ⊆ OB is the union of some equivalence classes in eq(ATR), we say X is definable (for ATR) in the DIS. Otherwise, we say X is rough (for ATR) in the DIS.
(D-ii) The Consistency of an Object: Let us consider two disjoint sets CON ⊆ AT, which we call condition attributes, and DEC ⊆ AT, which we call decision attributes. An object x ∈ OB is consistent (with any other object y ∈ OB in the relation from CON to DEC), if f(x, A) = f(y, A) for every A ∈ CON implies f(x, A) = f(y, A) for every A ∈ DEC.
(D-iii) Dependencies among Attributes: We call the ratio deg(CON, DEC) = |{x ∈ OB | x is consistent in the relation from CON to DEC}| / |OB| the degree of dependency from CON to DEC. Clearly, deg(CON, DEC) = 1 holds if and only if every object x ∈ OB is consistent.
(D-iv) Rules and Criteria (Support, Accuracy and Coverage): For any object x ∈ OB, let imp(x, CON, DEC) denote a formula called an implication: ∧_{A∈CON}[A, f(x, A)] ⇒ ∧_{A∈DEC}[A, f(x, A)], where a formula [A, f(x, A)] implies that f(x, A) is the value of the attribute A. This is called a descriptor in [15,22]. In most of the work on rule generation, a rule is defined as an implication τ: imp(x, CON, DEC) satisfying some constraints. A constraint such that deg(CON, DEC) = 1 holds from CON to DEC has been proposed in [1]. Another familiar constraint is defined by the three values support(τ) = |[x]_CON ∩ [x]_DEC| / |OB|, accuracy(τ) = |[x]_CON ∩ [x]_DEC| / |[x]_CON| and coverage(τ) = |[x]_CON ∩ [x]_DEC| / |[x]_DEC| [9].
(D-v) Reduction of Condition Attributes in Rules: Let us consider an implication imp(x, CON, DEC) such that x is consistent in the relation from CON to DEC. An attribute A ∈ CON is dispensable in CON, if x is consistent in the relation from CON − {A} to DEC.
These are the definitions of the rough sets based concepts in DISs. Several tools for DISs have been realized according to these definitions [5,6,7,8,9,10,11].

2.2 Definitions from D-i to D-v and Equivalence Relations over OB

Rough set theory makes use of equivalence relations for solving problems. Each definition from D-i to D-v can be handled by means of applying equivalence relations. As for the definability of a set X ⊆ OB, X is definable (for ATR) in a DIS if ∪_{x∈K}[x]_ATR = X holds for some set K ⊆ X ⊆ OB. According to this definition, it is possible to derive the necessary and sufficient condition that a set X is definable if and only if ∪_{x∈X}[x]_ATR = X holds. Now, let us show the most important proposition, which connects the two equivalence classes [x]_CON and [x]_DEC with the consistency of x.


Proposition 1 [1]. For any DIS, (1) and (2) below are equivalent.
(1) An object x ∈ OB is consistent in the relation from CON to DEC.
(2) [x]_CON ⊆ [x]_DEC.
According to Proposition 1, the degree of dependency from CON to DEC is equal to |{x ∈ OB | [x]_CON ⊆ [x]_DEC}| / |OB|. As for the criteria support, accuracy and coverage, they are defined by the equivalence classes [x]_CON and [x]_DEC. As for the reduction of condition attributes in rules, let us consider an implication imp(x, CON, DEC) such that x is consistent in the relation from CON to DEC. Here, an attribute A ∈ CON is dispensable if [x]_{CON−{A}} ⊆ [x]_DEC holds. In this way, the definitions from D-i to D-v are uniformly computed by means of applying equivalence relations in DISs.
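To make the role of equivalence classes concrete, here is a hedged sketch (function names and the toy table are assumptions, not the authors' tool) that computes [x]_ATR and the degree of dependency deg(CON, DEC) exactly via the inclusion [x]_CON ⊆ [x]_DEC of Proposition 1.

```python
# Equivalence classes and the degree of dependency (D-iii) in a DIS.
# The table is a dict object -> {attribute: value}.
def eq_class(table, attrs, x):
    return frozenset(y for y in table if all(table[y][a] == table[x][a] for a in attrs))

def degree_of_dependency(table, CON, DEC):
    consistent = [x for x in table if eq_class(table, CON, x) <= eq_class(table, DEC, x)]
    return len(consistent) / len(table)

# A tiny DIS with objects 1..4 over attributes a, b and decision d:
dis = {1: {"a": 1, "b": 0, "d": "yes"},
       2: {"a": 1, "b": 0, "d": "yes"},
       3: {"a": 0, "b": 1, "d": "no"},
       4: {"a": 0, "b": 1, "d": "yes"}}
print(degree_of_dependency(dis, CON=["a", "b"], DEC=["d"]))   # -> 0.5
```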

3 A Framework of Rough Non-deterministic Information Analysis

This section gives definitions in NISs and the two modalities due to the information incompleteness in NISs. Then, a framework of rough non-deterministic information analysis is proposed.

3.1 A Proposal of Rough Non-deterministic Information Analysis

A Non-deterministic Information System (NIS) is also a quadruplet (OB, AT, {VAL_A | A ∈ AT}, g), where g : OB × AT → P(∪_{A∈AT} VAL_A) (the power set of ∪_{A∈AT} VAL_A). Every set g(x, A) is interpreted as follows: there is a real value in this set, but this value is not known [13,15,21]. Especially, if the real value is not known at all, g(x, A) is equal to VAL_A. This is called the null value interpretation [12].
Definition 1. Let us consider a NIS = (OB, AT, {VAL_A | A ∈ AT}, g), a set ATR ⊆ AT and a mapping h : OB × ATR → ∪_{A∈ATR} VAL_A such that h(x, A) ∈ g(x, A). We call a DIS = (OB, ATR, {VAL_A | A ∈ ATR}, h) a derived DIS (for ATR) from the NIS.
Example 1. Let us consider NIS_1 in Table 1, which was automatically produced by means of applying a random number program. There are 2176782336 (= 2^12 × 3^12) derived DISs for ATR = {A, B, C, D, E, F}. As for ATR = {A, B, C}, there are 10368 (= 2^7 × 3^4) derived DISs.
Definition 2. Let us consider a NIS. There exists a derived DIS with unknown real attribute values due to the interpretation of g(x, A). So, let DIS_real denote a derived DIS with unknown real attribute values.
Of course, it is impossible to know DIS_real without additional information. However, some information based on DIS_real may be derived. Let us consider the relation from CON = {A, B} to DEC = {C} and object 2 in NIS_1. The tuple of object 2 is either (2,2,2) or (4,2,2). In both cases, object 2 is consistent. Thus, it is possible to conclude that object 2 is consistent in DIS_real, too.


Table 1. A Table of NIS_1

OB   A          B          C        D          E          F
1    {3}        {1,3,4}    {3}      {2}        {5}        {5}
2    {2,4}      {2}        {2}      {3,4}      {1,3,4}    {4}
3    {1,2}      {2,4,5}    {2}      {3}        {4,5}      {5}
4    {1,5}      {5}        {2,4}    {2}        {1,4,5}    {5}
5    {3,4}      {4}        {3}      {1,2,3}    {1}        {2,5}
6    {3,5}      {4}        {1}      {2,3,5}    {5}        {2,3,4}
7    {1,5}      {4}        {5}      {1,4}      {3,5}      {1}
8    {4}        {2,4,5}    {2}      {1,2,3}    {2}        {1,2,5}
9    {2}        {5}        {3}      {5}        {4}        {2}
10   {2,3,5}    {1}        {2}      {3}        {1}        {1,2,3}
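The counts quoted in Example 1 can be reproduced by multiplying the cardinalities |g(x, A)|. The sketch below is illustrative only (not part of the authors' tool) and encodes only the A, B, C columns of Table 1.

```python
# Counting derived DISs of a NIS: the product of |g(x, A)| over the chosen
# attributes and all objects.
from math import prod

nis1 = {                       # object -> {attribute: set of possible values}
    1: {"A": {3}, "B": {1, 3, 4}, "C": {3}},
    2: {"A": {2, 4}, "B": {2}, "C": {2}},
    3: {"A": {1, 2}, "B": {2, 4, 5}, "C": {2}},
    4: {"A": {1, 5}, "B": {5}, "C": {2, 4}},
    5: {"A": {3, 4}, "B": {4}, "C": {3}},
    6: {"A": {3, 5}, "B": {4}, "C": {1}},
    7: {"A": {1, 5}, "B": {4}, "C": {5}},
    8: {"A": {4}, "B": {2, 4, 5}, "C": {2}},
    9: {"A": {2}, "B": {5}, "C": {3}},
    10: {"A": {2, 3, 5}, "B": {1}, "C": {2}},
}

def number_of_derived_dis(nis, attrs):
    return prod(len(nis[x][a]) for x in nis for a in attrs)

print(number_of_derived_dis(nis1, ["A", "B", "C"]))   # -> 10368 = 2**7 * 3**4
```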

In order to handle such information based on DIS_real, two modalities, certainty and possibility, are usually defined in most of the work handling information incompleteness.
(Certainty) If a formula α holds in every derived DIS from a NIS, α also holds in DIS_real. In this case, we say α certainly holds in DIS_real.
(Possibility) If a formula α holds in some derived DISs from a NIS, there exists a possibility that α holds in DIS_real. In this case, we say α possibly holds in DIS_real.
According to the two modalities for DIS_real, it is possible to extend the definitions from D-i to D-v in DISs to definitions in NISs. In the subsequent sections, we sequentially give the definitions from N-i to N-v in NISs. From now on, we name the information analysis which depends upon the definitions from N-i to N-v and other extended definitions in NISs Rough Non-deterministic Information Analysis (RNIA).

3.2 Incomplete Information Systems and NISs

Incomplete information systems in [21,22] and NISs seem to be the same, but there exist some distinct differences. Example 2 clarifies the difference between incomplete information systems and NISs.
Example 2. Let us consider an incomplete information system in Table 2.

Table 2. A Table of an Incomplete Information System

OB   A    B
1    ∗    2
2    3    3

Table 3. A Table of a NIS

OB   A        B
1    {1,2}    2
2    3        3

Here, let us suppose VAL_A = {1, 2, 3}, CON = {A} and DEC = {B}. The attribute value of object 1 is not definite, and the ∗ symbol is employed for describing it. In this case, the null value interpretation is applied to this ∗, and 3 ∈ VAL_A may occur instead of ∗. Therefore, object 2 is not consistent in this case. According to the definition in [22], the formula [A, 3] ⇒ [B, 3] is a possible rule. Now, let us consider the NIS in Table 3. The attribute value of object 1 is not definite, either. However, in this NIS object 2 is consistent in every derived DIS. So, the formula [A, 3] ⇒ [B, 3] is a certain rule according to the definition in [22]. Thus, the meaning of the formula [A, 3] ⇒ [B, 3] in Table 2 is different from that in Table 3. In incomplete information systems, each indefinite value is uniformly identified with the unknown value ∗. However, in NISs each indefinite value is identified with a subset of VAL_A (A ∈ AT). Clearly, NISs are more informative than incomplete information systems.

3.3 The Core Problem for RNIA and the Purpose of This Work

The definitions from N-i to N-v, which are sequentially given in the following sections, depend upon all derived DISs from a NIS. Therefore, it is necessary to compute the definitions from D-i to D-v in every derived DIS. The number of all derived DISs, which is the product ∏_{x∈OB, A∈ATR} |g(x, A)| for ATR ⊆ AT, increases in exponential order. Even though each definition from D-i to D-iv can be solved in polynomial time with respect to the input data size [23], each definition from N-i to N-iv depends upon all derived DISs. The complexity of finding a minimal reduct in a DIS is also proved to be NP-hard [3]. Namely, for handling NISs with a large number of derived DISs, it may take much execution time without effective algorithms. This is the core problem for RNIA.
This paper proposes the application of inf and sup information and possible equivalence relations, which are defined in the next subsection, to solving the above core problem. In Section 2.2, the connection between the definitions from D-i to D-v and equivalence relations is shown. Analogously, we consider the connection between the definitions from N-i to N-v and possible equivalence relations [24,25].

3.4 Basic Definitions for RNIA

Now, we give some basic definitions, which appear throughout this paper.
Definition 3. Let us consider a derived DIS (for ATR) from a NIS. We call an equivalence relation eq(ATR) in the DIS a possible equivalence relation (pe-relation) in the NIS. We also call every element in eq(ATR) a possible equivalence class (pe-class) in the NIS.


For ATR = {C} in NIS_1, there exist two derived DISs and two pe-relations, i.e., {{1, 5, 9}, {2, 3, 4, 8, 10}, {6}, {7}} and {{1, 5, 9}, {2, 3, 8, 10}, {4}, {6}, {7}}. Every element in the two pe-relations is a pe-class for ATR = {C} in NIS_1.
Definition 4. Let us consider a NIS and a set ATR = {A_1, ..., A_n} ⊆ AT. For any x ∈ OB, let PT(x, ATR) denote the Cartesian product g(x, A_1) × ... × g(x, A_n). We call every element a possible tuple (for ATR) from x. For a possible tuple ζ = (ζ_1, ..., ζ_n) ∈ PT(x, ATR), let [ATR, ζ] denote the formula ∧_{1≤i≤n}[A_i, ζ_i]. Furthermore, for disjoint sets CON, DEC ⊆ AT and two possible tuples ζ = (ζ_1, ..., ζ_n) ∈ PT(x, CON) and η = (η_1, ..., η_m) ∈ PT(x, DEC), let (ζ, η) denote the possible tuple (ζ_1, ..., ζ_n, η_1, ..., η_m) ∈ PT(x, CON ∪ DEC).
Definition 5. Let us consider a NIS and a set ATR ⊆ AT. For any ζ ∈ PT(x, ATR), let DD(x, ζ, ATR) denote the set {ϕ | ϕ is a derived DIS for ATR such that the tuple of x in ϕ is ζ}. Furthermore, in this DD(x, ζ, ATR) we define (1) and (2) below.
(1) inf(x, ζ, ATR) = {y ∈ OB | PT(y, ATR) = {ζ}},
(2) sup(x, ζ, ATR) = {y ∈ OB | ζ ∈ PT(y, ATR)}.
For object 1 and ATR = {A, B} in NIS_1, PT(1, {A, B}) = {(3, 1), (3, 3), (3, 4)} holds. The possible tuple (3, 1) ∈ PT(1, {A, B}) appears in 1/3 of the derived DISs for ATR = {A, B}. The number of elements in DD(1, (3, 1), {A, B}) is 1728 (= 2^6 × 3^3). In this set DD(1, (3, 1), {A, B}), inf(1, (3, 1), {A, B}) = {1} and sup(1, (3, 1), {A, B}) = {1, 10} hold.
These inf and sup in Definition 5 are key information for RNIA, and each algorithm in the following depends upon these two sets. The set sup is semantically equal to a set defined by the similarity relation SIM in [20,21]. In [20,21], some theorems are presented based on the relation SIM, and our theoretical results are closely related to those theorems. However, the set inf leads to new properties, which hold just in NISs.
Now, let us consider the relation between a pe-class [x]_ATR and the two sets inf and sup. In every DIS, PT(y, ATR) is a singleton set, so [x]_ATR = inf(x, ζ, ATR) = sup(x, ζ, ATR) holds. However, in every NIS [x]_ATR depends upon the derived DISs, and {x} ⊆ inf(x, ζ, ATR) ⊆ [x]_ATR ⊆ sup(x, ζ, ATR) holds. Proposition 2 below connects a pe-class [x]_ATR with inf(x, ζ, ATR) and sup(x, ζ, ATR).
Proposition 2 [25]. For a NIS, an object x, ATR ⊆ AT and ζ ∈ PT(x, ATR), conditions (1) and (2) below are equivalent.
(1) X is an equivalence class [x]_ATR in some ϕ ∈ DD(x, ζ, ATR).
(2) inf(x, ζ, ATR) ⊆ X ⊆ sup(x, ζ, ATR).
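A hedged sketch of Definition 5 (function names are assumptions): PT(x, ATR) is the Cartesian product of the value sets, sup collects the objects whose tuple can be ζ, and inf collects those whose tuple is necessarily ζ; x itself is added to inf because every DIS in DD(x, ζ, ATR) fixes the tuple of x to ζ.

```python
# PT, inf and sup from Definition 5, reusing the dictionary layout of the
# previous snippet (one set of possible values per object and attribute).
from itertools import product

def possible_tuples(nis, x, attrs):
    """PT(x, ATR): Cartesian product of g(x, A) over A in ATR."""
    return set(product(*(sorted(nis[x][a]) for a in attrs)))

def inf_set(nis, x, attrs, zeta):
    """inf(x, zeta, ATR): objects whose tuple is necessarily zeta (x included,
    since the derived DISs in DD(x, zeta, ATR) fix the tuple of x to zeta)."""
    return {x} | {y for y in nis if possible_tuples(nis, y, attrs) == {zeta}}

def sup_set(nis, x, attrs, zeta):
    """sup(x, zeta, ATR): objects whose tuple can be zeta."""
    return {y for y in nis if zeta in possible_tuples(nis, y, attrs)}

# With the nis1 table from the previous snippet and ATR = {A, B}:
#   possible_tuples(nis1, 1, ["A", "B"]) == {(3, 1), (3, 3), (3, 4)}
#   inf_set(nis1, 1, ["A", "B"], (3, 1)) == {1}
#   sup_set(nis1, 1, ["A", "B"], (3, 1)) == {1, 10}
```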

4 Algorithms and Tool Programs for the Definability of a Set in NISs

This section proposes algorithms and tool programs for the definability of a set. It is possible to obtain distinct pe-relations as a side effect of an algorithm. An algorithm for merging pe-relations is also proposed.

4.1 An Algorithm for Checking the Definability of a Set in NISs

The definability of a set in NISs is given, and an algorithm is proposed.
Definition 6. (N-i. The Definability of a Set) We say X ⊆ OB is certainly definable for ATR ⊆ AT in DIS_real, if X is definable (for ATR) in every derived DIS. We say X ⊆ OB is possibly definable for ATR ⊆ AT in DIS_real, if X is definable (for ATR) in some derived DISs.
In a DIS, it is enough to check the formula ∪_{x∈X}[x]_ATR = X for the definability of X ⊆ OB. However, in every NIS [x]_ATR depends upon a derived DIS, and inf(x, ζ, ATR) ⊆ [x]_ATR ⊆ sup(x, ζ, ATR) holds. Algorithm 1 below checks the formula ∪_{x∈X}[x]_ATR = X according to these inclusion relations, and finds a subset of a pe-relation which makes the set X definable.
Algorithm 1.
Input: A NIS, a set ATR ⊆ AT and a set X ⊆ OB.
Output: The definability of the set X for ATR.
(1) X* = X, eq = ∅, count = 0 and total = ∏_{x∈X, A∈ATR} |g(x, A)|.
(2) For any x ∈ X*, find [x]_ATR satisfying constraints (CL-1) and (CL-2).
(CL-1) [x]_ATR ⊆ X*,
(CL-2) eq ∪ {[x]_ATR} is a subset of a pe-relation.
(2-1) If there is such a set [x]_ATR, set eq = eq ∪ {[x]_ATR} and X* = X* − [x]_ATR. If X* ≠ ∅, go to (2). If X* = ∅, X is definable in a derived DIS. Set count = count + 1, and backtrack.
(2-2) If there is no such [x]_ATR, backtrack.
(3) After finishing the search, X is certainly definable for ATR in DIS_real if count = total. X is possibly definable for ATR in DIS_real if count ≥ 1.
Algorithm 1 tries to find a set of pe-classes which satisfy constraints (CL-1) and (CL-2). Whenever X* = ∅ holds in Algorithm 1, a subset of a pe-relation is stored in the variable eq. At the same time, a derived DIS (restricted to the set X) from the NIS is also detected [24,25]. Because X = ∪_{K∈eq} K holds for eq, X is definable in this detected DIS. In order to count the cases in which X* = ∅, the variable count is employed. At the end of the execution, if count is equal to the number of derived DISs (restricted to the set X), it is possible to conclude that X is certainly definable. The constraints (CL-1) and (CL-2) keep this search correct.
For example, in Table 1, inf(1, (3), {A}) = {1} and sup(1, (3), {A}) = {1, 5, 6, 10} hold. So, {1} ⊆ [1]_{A} ⊆ {1, 5, 6, 10} holds. Let us suppose [1]_{A} = {1, 5, 10}. Since 6 ∉ [1]_{A} holds in this case, the tuple from object 6 is not (3). In a branch with [1]_{A} = {1, 5, 10}, the tuple from object 6 is implicitly fixed to (5) ∈ PT(6, {A}) = {(3), (5)}. The details of (CL-1), (CL-2) and an illustrative example based on a previous version of Algorithm 1 are presented in [25].
Algorithm 1 is a solution for handling definition N-i. Algorithm 1 is extended to Algorithm 2 in the subsequent sections. A real execution of a tool which simulates Algorithm 1 is shown in Appendix 1.
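For very small NISs, Definition 6 can also be checked by brute force, which clarifies what Algorithm 1 computes while avoiding this exhaustive enumeration; the sketch below is illustrative only and the function names are assumptions.

```python
# Brute-force check of Definition 6: X is certainly definable if it is a
# union of equivalence classes in every derived DIS, and possibly definable
# if this happens in at least one of them.  Exponential; for tiny NISs only.
from itertools import product

def derived_diss(nis, attrs):
    objs = sorted(nis)
    choices = [sorted(product(*(sorted(nis[x][a]) for a in attrs))) for x in objs]
    for pick in product(*choices):
        yield dict(zip(objs, pick))          # one derived DIS: object -> tuple

def definable(dis, X):
    classes = {x: frozenset(y for y in dis if dis[y] == dis[x]) for x in dis}
    return set().union(*(classes[x] for x in X)) == set(X)

def definability(nis, attrs, X):
    flags = [definable(d, X) for d in derived_diss(nis, attrs)]
    return all(flags), any(flags)            # (certainly, possibly) definable
```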

4.2 The Definability of a Set and Pe-relations in NISs

In Algorithm 1, let X be OB. Since every pe-relation is an equivalence relation over OB, OB is definable in every derived DIS. Thus, OB is certainly definable in DIS_real. In Algorithm 1, every pe-relation is asserted in the variable eq whenever X* = ∅ is derived. In this way, it is possible to obtain all pe-relations. However, in this case the number of search branches with X* = ∅ is equal to the number of all derived DISs. Therefore, it is hard to apply Algorithm 1 directly to NISs with a large number of derived DISs. We solve this problem by applying Proposition 3 in the following, which shows us a way to merge equivalence relations.

Proposition 3 [1]. Let eq(A) and eq(B) be equivalence relations for A, B ⊆ AT in a DIS. The equivalence relation eq(A ∪ B) is {M ⊆ OB | M = [x]_A ∩ [x]_B for [x]_A ∈ eq(A) and [x]_B ∈ eq(B), x ∈ OB}.
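A direct transcription of Proposition 3 (a sketch in Python; representing an equivalence relation as a list of blocks is an assumption of this sketch). Because the blocks of each relation are pairwise disjoint, every non-empty pairwise intersection is exactly one of the classes [x]_A ∩ [x]_B, so no duplicates arise.

def merge_equivalence(eq_a, eq_b):
    """Proposition 3: eq(A ∪ B) consists of all non-empty intersections of a block
    of eq(A) with a block of eq(B)."""
    return [c & d for c in eq_a for d in eq_b if c & d]

# Example: merging [{1, 2, 3}, {4, 5}] with [{1, 2}, {3, 4, 5}] gives [{1, 2}, {3}, {4, 5}].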

4.3 A Property of Pe-relations in NISs

Before proposing another algorithm for producing pe-relations, we clarify a property of pe-relations. Because some pe-relations in distinct derived DISs may be the same, the number of distinct pe-relations is generally smaller than the number of derived DISs. Let us consider Table 4, which shows the relation between the numbers of derived DISs and distinct pe-relations. This result is computed by the tool programs in the subsequent sections. For ATR={A, B, C}, there are 10368 (= 2^7 × 3^4) derived DISs. However, in reality there are only 10 distinct pe-relations. For a larger attribute set ATR, every object is discerned from the other objects more finely, i.e., every [x]_ATR will become {x}. Therefore, every pe-relation will become the unique equivalence relation {{1}, {2}, ..., {10}}. For ATR={A, B, C, D, E, F}, there exists in reality only 1 distinct pe-relation, {{1}, {2}, ..., {10}}.

Table 4. The numbers of derived DISs and distinct pe-relations in NIS1

ATR             {A,B}   {A,B,C}   {A,B,C,D}   {A,B,C,D,E}   {A,B,C,D,E,F}
derived DISs    5184    10368     1119744     40310784      2176782336
pe-relations    107     10        6           2             1

We examined several NISs, and we experimentally conclude that the number of distinct possible equivalence relations is generally much smaller than the number of all derived DISs. We make use of this property for computing definitions N-i to N-v.

4.4 A Revised Algorithm for Producing Pe-relations

Algorithm 1 produces pe-relations as a side effect of the search, but this algorithm is not suitable for NISs with a large number of derived DISs. This section revises Algorithm 1 by applying Proposition 3.


Algorithm 2.
Input: A NIS and a set ATR ⊆ AT.
Output: The set of distinct pe-relations for ATR: pe_rel(ATR).
(1) Produce the set of pe-relations pe_rel({A}) for every A ∈ ATR.
(2) Set temp = {} and pe_rel(ATR) = {{{1, 2, 3, ..., |OB|}}}.
(3) Repeat (4) for pe_rel(ATR) and pe_rel({K}) (K ∈ ATR − temp) until temp = ATR.
(4) For each pair of pe_i ∈ pe_rel(ATR) and pe_j ∈ pe_rel({K}), apply Proposition 3 and produce pe_{i,j} = {M ⊆ OB | M = [x]_i ∩ [x]_j for [x]_i ∈ pe_i and [x]_j ∈ pe_j, x ∈ OB}. Let pe_rel(ATR) be {pe_{i,j} | pe_i ∈ pe_rel(ATR), pe_j ∈ pe_rel({K})}, and set temp = temp ∪ {K}.

In step (1), Algorithm 1 is applied to producing pe_rel({A}) for every A ∈ ATR. In steps (3) and (4), Proposition 3 is repeatedly applied to merging two sets of pe-relations. For NIS1, let us consider the case of ATR={A, B, C, D} in Algorithm 2. After finishing step (1), Table 5 is obtained. Table 5 shows the numbers of derived DISs and distinct pe-relations in every attribute.

Table 5. The numbers of derived DISs and distinct pe-relations in every attribute

Attribute        A     B    C   D     E    F
derived DISs     192   27   2   108   36   54
pe-relations     176   27   2   96    36   54

For ATR={A, B, C, D}, Algorithm 2 sequentially produces pe_rel({A, B}), pe_rel({A, B, C}) and pe_rel({A, B, C, D}). For producing pe_rel({A, B}), it is necessary to handle 4752 (= 176 × 27) combinations of pe-relations, and it is possible to know |pe_rel({A, B})| = 107 in Table 4. However, after this execution the number of these combinations is reduced due to the property of pe-relations. For producing pe_rel({A, B, C}), it is enough to handle 214 (= 107 × 2) combinations for 10368 derived DISs, and |pe_rel({A, B, C})| = 10 in Table 4 is obtained. For producing pe_rel({A, B, C, D}), it is enough to handle 960 (= 10 × 96) combinations for 1119744 derived DISs. Generally, Algorithm 1 depends upon the number Π_{x∈OB, A∈ATR} |g(x, A)|, which is the number of derived DISs. Algorithm 2, however, at most depends upon the number (|ATR| − 1) × (Σ_{x∈OB} |g(x, A)|)^2 for such an attribute A that Σ_{x∈OB} |g(x, B)| ≤ Σ_{x∈OB} |g(x, A)| for any B ∈ ATR. The product of |g(x, A)| over x ∈ OB in the number for Algorithm 1 roughly corresponds to the sum of |g(x, A)| over x ∈ OB in the number for Algorithm 2. Thus, in order to handle NISs with a large ATR and a large number of derived DISs, Algorithm 2 will be more efficient than Algorithm 1. In reality, the result in Table 4 was calculated by Algorithm 2. It is hard to apply Algorithm 1 to calculating pe-relations for ATR={A, B, C, D, E} or ATR={A, B, C, D, E, F}.
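A compact sketch of steps (2)-(4) of Algorithm 2 follows (again Python rather than the authors' Prolog/C tools). Pe-relations are represented as frozensets of frozenset blocks so that duplicate relations can be discarded after every merging step, which is exactly where the saving over Algorithm 1 comes from.

def intersect(pe_i, pe_j):
    """Proposition 3 applied to two pe-relations given as frozensets of blocks."""
    return frozenset(c & d for c in pe_i for d in pe_j if c & d)

def algorithm2(pe_rel_by_attr):
    """pe_rel_by_attr: attribute -> set of its distinct pe-relations (the output of step (1))."""
    some_relation = next(iter(next(iter(pe_rel_by_attr.values()))))
    universe = frozenset(x for block in some_relation for x in block)
    current = {frozenset({universe})}                # step (2): the single one-block relation
    for pe_set_k in pe_rel_by_attr.values():         # steps (3)-(4)
        current = {intersect(pe_i, pe_j)
                   for pe_i in current for pe_j in pe_set_k}
    return current                                   # the distinct pe-relations for ATR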


As for the implementation of Algorithm 2, the data structure for pe-relations and the program depending upon this structure follow the definitions in [23,25]. A real execution of a tool, which simulates Algorithm 2, is shown in Appendix 2.

4.5 Another Solution of the Definability of a Set

Algorithm 1 solves the definability of a set in NISs, but it is also possible to apply pe-relations to solving the definability of a set in NISs. After obtaining the distinct pe-relations, we only have to check ∪_{x∈X} [x] = X for every pe-relation.

Example 3. Let us consider NIS1. For ATR={A, B, C, D, E}, there are two distinct pe-relations pe_1 = {{1}, {2}, ..., {10}} and pe_2 = {{1}, {2, 3}, {4}, ..., {10}}. A set {1, 2} is definable in pe_1, but it is not definable in pe_2. Therefore, the set {1, 2} is possibly definable in DIS_real. Since the set {1, 2, 3} is definable in every pe-relation, this set is certainly definable in DIS_real. In this way, the load of the calculation, which otherwise depends upon all derived DISs, can be reduced.
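Once the distinct pe-relations are available, the check above is immediate; a sketch (pe-relations as collections of blocks, as in the earlier sketches):

def definable_in(pe, x_set):
    """X is definable in a pe-relation iff X is a union of its blocks."""
    return x_set == set().union(*(set(b) for b in pe if set(b) <= x_set))

def definability_from_pe(pe_relations, x_set):
    verdicts = [definable_in(pe, x_set) for pe in pe_relations]
    return all(verdicts), any(verdicts)   # (certainly definable, possibly definable)

# Example 3: with the two pe-relations for {A, B, C, D, E}, the set {1, 2} yields
# (False, True) and the set {1, 2, 3} yields (True, True).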

5 The Necessary and Sufficient Condition for Checking the Consistency of an Object

This section examines the necessary and suﬃcient condition for checking the consistency of an object. Definition 7. (N-ii. The Consistency of an Object) Let us consider two disjoint sets CON, DEC ⊆ AT in a N IS. We say x ∈ OB is certainly consistent (in the relation from CON to DEC) in DIS real , if x is consistent (in the relation from CON to DEC) in every derived DIS from N IS. We say x is possibly consistent in DIS real , if x is consistent in some derived DISs from N IS. According to pe-relations and Proposition 1, it is easy to check the consistency of x. Let us consider two sets of pe-relations pe rel(CON ) and pe rel(DEC). An object x is certainly consistent in DIS real , if and only if [x]CON ⊆ [x]DEC ([x]CON ∈ pei and [x]DEC ∈ pej ) for any pei ∈ pe rel(CON ) and any pej ∈ pe rel(DEC). An object x is possibly consistent in DIS real , if and only if [x]CON ⊆ [x]DEC ([x]CON ∈ pei and [x]DEC ∈ pej ) for some pei ∈ pe rel(CON ) and some pej ∈ pe rel(DEC). However, it is also possible to check the consistency of an object by means of applying inf and sup information in Deﬁnition 5. Theorem 4. For a N IS and an object x, let CON be condition attributes and let DEC be decision attributes. (1) x is certainly consistent in DIS real if and only if sup(x, ζ, CON ) ⊆ inf (x, η, DEC) holds for any ζ ∈ P T (x, CON ) and any η ∈ P T (x, DEC). (2) x is possibly consistent in DIS real if and only if inf (x, ζ, CON ) ⊆ sup(x, η, DEC) holds for a pair of ζ ∈ P T (x, CON ) and η ∈ P T (x, DEC).


Proof. Let us consider pe-classes [x]_CON and [x]_DEC in ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). Then, inf(x, ζ, CON) ⊆ [x]_CON ⊆ sup(x, ζ, CON) and inf(x, η, DEC) ⊆ [x]_DEC ⊆ sup(x, η, DEC) hold according to Proposition 2.

(1) Let us suppose sup(x, ζ, CON) ⊆ inf(x, η, DEC) holds. Then, [x]_CON ⊆ sup(x, ζ, CON) ⊆ inf(x, η, DEC) ⊆ [x]_DEC, and [x]_CON ⊆ [x]_DEC is derived. According to Proposition 1, object x is consistent in any ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). This holds for any ζ ∈ PT(x, CON) and any η ∈ PT(x, DEC). Thus, x is certainly consistent in DIS_real. Conversely, let us suppose sup(x, ζ, CON) ⊆ inf(x, η, DEC) does not hold for a pair of ζ and η. According to Proposition 2, [x]_CON = sup(x, ζ, CON) and [x]_DEC = inf(x, η, DEC) hold in some ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). Since [x]_CON ⊈ [x]_DEC holds in this ϕ, x is not certainly consistent. By contraposition, the converse is also proved.

(2) Let us suppose inf(x, ζ, CON) ⊆ sup(x, η, DEC) holds for a pair of ζ ∈ PT(x, CON) and η ∈ PT(x, DEC). According to Proposition 2, [x]_CON = inf(x, ζ, CON) and [x]_DEC = sup(x, η, DEC) hold in some ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). Namely, x is consistent in this ϕ. Conversely, let us suppose inf(x, ζ, CON) ⊆ sup(x, η, DEC) holds for no pair of ζ and η. Since inf(x, ζ, CON) ⊆ [x]_CON and [x]_DEC ⊆ sup(x, η, DEC) hold for any [x]_CON and [x]_DEC, [x]_CON ⊈ [x]_DEC is derived. Namely, x is not possibly consistent. By contraposition, the converse is also proved.

Theorem 4, which is one of the most important results in this paper, is an extension of Proposition 1 and of the results in [20,21]. In Proposition 1, [x]_CON and [x]_DEC are unique. However, in NISs these pe-classes may not be unique. In order to check the consistency of objects in NISs, it is necessary to consider possible tuples and derived DISs. Algorithms 1 and 2 produce pe-relations according to the inf and sup information in Definition 5. Theorem 4 also characterizes the consistency of an object by applying inf and sup information; therefore the inf and sup information in Definition 5 is the most essential information.
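Theorem 4 turns the consistency check into a few subset tests over precomputed inf/sup sets. A minimal sketch (the dictionaries below, mapping each possible tuple to its inf or sup set, are an assumed representation for this sketch):

def certainly_consistent(sup_con, inf_dec):
    """Theorem 4 (1): sup(x, zeta, CON) must be contained in inf(x, eta, DEC) for every pair."""
    return all(sup_con[z] <= inf_dec[e] for z in sup_con for e in inf_dec)

def possibly_consistent(inf_con, sup_dec):
    """Theorem 4 (2): inf(x, zeta, CON) contained in sup(x, eta, DEC) for some pair."""
    return any(inf_con[z] <= sup_dec[e] for z in inf_con for e in sup_dec)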

6 An Algorithm and Tool Programs for Data Dependency in NISs

The formal definition of data dependency in NISs has not been established yet. This section extends the definition D-iii to N-iii in the following, and examines an algorithm and tool programs for data dependency in NISs.

Definition 8 [26] (N-iii. Data Dependencies among Attributes). Let us consider a NIS, condition attributes CON, decision attributes DEC and all derived DIS_1, ..., DIS_m from the NIS. For two threshold values val1 and val2 (0 ≤ val1, val2 ≤ 1), if conditions (1) and (2) hold then we say DEC depends on CON in the NIS.
(1) |{DIS_i | deg(CON, DEC)=1 in DIS_i (1 ≤ i ≤ m)}| / m ≥ val1.
(2) min_i {deg(CON, DEC) in DIS_i} ≥ val2.


In Definition 8, condition (1) requires that most of the derived DISs are consistent, i.e., that every object is consistent in most of the derived DISs. Condition (2) specifies the minimal value of the degree of dependency. If both conditions are satisfied, it is expected that deg(CON, DEC) in DIS_real will also be high.

The definition N-iii is easily computed according to pe_rel(CON) and pe_rel(DEC). For each pair of pe_i ∈ pe_rel(CON) and pe_j ∈ pe_rel(DEC), the degree of dependency is |{x ∈ OB | [x]_CON ⊆ [x]_DEC for [x]_CON ∈ pe_i, [x]_DEC ∈ pe_j}| / |OB|. Namely, all possible degrees of dependency are obtained by calculating all combinations of pairs. For example, let us consider CON={A, B, C, D, E} and DEC={F} in NIS1. Since |pe_rel({A, B, C, D, E})| = 2 and |pe_rel({F})| = 54, it is possible to obtain all degrees by examining 108 (= 2 × 54) combinations. This calculation is equivalent to the calculation depending upon 2176782336 derived DISs. A real execution handling data dependency and the consistency of objects is shown in Appendix 3.
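A sketch of the per-pair computation (Python; note that checking condition (1) of Definition 8 additionally needs the number of derived DISs realizing each pair of pe-relations, which the authors' tools depend and depratio keep track of, so only the bounds of condition (2) are computed here):

def degree_of_dependency(pe_con, pe_dec):
    """Degree of dependency for one pair of pe-relations: the fraction of objects whose
    CON-class is contained in their DEC-class."""
    def block_of(pe, x):
        return next(b for b in pe if x in b)
    objects = set().union(*pe_con)
    consistent = [x for x in objects if block_of(pe_con, x) <= block_of(pe_dec, x)]
    return len(consistent) / len(objects)

def degree_bounds(pe_rel_con, pe_rel_dec):
    """Minimum and maximum degree of dependency over all pairs of pe-relations."""
    degrees = [degree_of_dependency(pi, pj) for pi in pe_rel_con for pj in pe_rel_dec]
    return min(degrees), max(degrees)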

7 An Algorithm and Tool Programs for Rules in NISs

This section investigates an algorithm and tool programs [27] for rules in NISs.

7.1 Certain Rules and Possible Rules in NISs

Possible implications in NISs are proposed, and certain rules and possible rules are defined as possible implications satisfying some constraints.

Definition 9. For a NIS, let CON be condition attributes and let DEC be decision attributes. For any x ∈ OB, let PI(x, CON, DEC) denote the set {[CON, ζ] ⇒ [DEC, η] | ζ ∈ PT(x, CON), η ∈ PT(x, DEC)}. We call an element of PI(x, CON, DEC) a possible implication (in the relation from CON to DEC) from x. We call a possible implication which satisfies some constraints a rule in the NIS.

It is necessary to remark that a possible implication τ: [CON, ζ] ⇒ [DEC, η] from x appears in every ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). This set DD(x, (ζ, η), CON ∪ DEC) is a subset of all derived DISs for ATR=CON ∪ DEC. In NIS1, PT(1, {A, B})={(3, 1), (3, 3), (3, 4)}, PT(1, {C})={(3)} and PI(1, {A, B}, {C}) consists of three possible implications [A, 3] ∧ [B, 1] ⇒ [C, 3], [A, 3] ∧ [B, 3] ⇒ [C, 3] and [A, 3] ∧ [B, 4] ⇒ [C, 3]. The first possible implication appears in every ϕ ∈ DD(1, (3, 1, 3), {A, B, C}). This set DD(1, (3, 1, 3), {A, B, C}) consists of 1/3 of the derived DISs for {A, B, C}.

Definition 10. Let us consider a NIS, condition attributes CON and decision attributes DEC. If PI(x, CON, DEC) is a singleton set {τ} (τ: [CON, ζ] ⇒ [DEC, η]), we say τ (from x) is definite. Otherwise we say τ (from x) is indefinite. If the set {ϕ ∈ DD(x, (ζ, η), CON ∪ DEC) | x is consistent in ϕ} is equal to DD(x, (ζ, η), CON ∪ DEC), we say τ is globally consistent (GC). If this set is equal to ∅, we say τ is globally inconsistent (GI). Otherwise we say τ is marginal (MA). By combining the two distinctions, i.e., 'D(efinite) or I(ndefinite)' and 'GC or MA or GI', we define six classes, D-GC, D-MA, D-GI, I-GC, I-MA, I-GI, for possible implications.


If a possible implication from x belongs to the D-GC, I-GC, D-MA or I-MA class, x is consistent in some derived DISs. If a possible implication from x belongs to D-GC, x is consistent in every derived DIS. Thus, we give Definition 11 in the following.

Definition 11 (N-iv. Certain and Possible Rules). For a NIS, let CON be condition attributes and let DEC be decision attributes. We say τ ∈ PI(x, CON, DEC) is a possible rule in DIS_real, if τ belongs to the D-GC, I-GC, D-MA or I-MA class. Especially, we say τ is a certain rule in DIS_real, if τ belongs to the D-GC class.

Theorem 5 in the following characterizes certain and possible rules according to inf and sup information. Theorem 5 is also related to the results in [20,21], but there exist some differences, which we have shown in Example 2.

Theorem 5 [27]. For a NIS, let CON be condition attributes and let DEC be decision attributes. For τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC), the following holds.
(1) τ is a possible rule if and only if inf(x, ζ, CON) ⊆ sup(x, η, DEC) holds.
(2) τ is a certain rule if and only if PI(x, CON, DEC)={τ} and sup(x, ζ, CON) ⊆ inf(x, η, DEC) hold.

Proposition 6. For a NIS, let ATR ⊆ AT be {A1, ..., An}, and let a possible tuple ζ ∈ PT(x, ATR) be (ζ1, ..., ζn). Then, the following holds.
(1) inf(x, ζ, ATR) = ∩_i inf(x, (ζi), {Ai}).
(2) sup(x, ζ, ATR) = ∩_i sup(x, (ζi), {Ai}).

Proof of (1): For any y ∈ inf(x, ζ, ATR), PT(y, ATR)={(ζ1, ..., ζn)} holds due to the definition of inf. Namely, PT(y, {Ai})={(ζi)} holds for every i, and y ∈ inf(x, (ζi), {Ai}) for every i. Namely, y ∈ ∩_i inf(x, (ζi), {Ai}). The converse of this proof clearly holds.

Proposition 6 shows us a way to manage the inf and sup information in Definition 5. Namely, we first prepare inf and sup information for every x ∈ OB, Ai ∈ AT and (ζ_{i,j}) ∈ PT(x, {Ai}). Then, we produce inf and sup information for larger attribute sets by repeating set intersection operations. For the obtained inf(x, ζ, CON), sup(x, ζ, CON), inf(x, η, DEC) and sup(x, η, DEC), Theorem 5 is applied to checking the certainty or the possibility of τ: [CON, ζ] ⇒ [DEC, η].
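Combining Proposition 6 and Theorem 5 gives a very small rule test once the per-attribute inf/sup sets are available; a sketch (the argument lists of per-attribute sets are an assumed representation):

def inf_sup_of_tuple(per_attr_inf, per_attr_sup):
    """Proposition 6: inf/sup of a composite tuple as intersections of per-attribute sets."""
    return set.intersection(*per_attr_inf), set.intersection(*per_attr_sup)

def is_possible_rule(inf_con, sup_dec):
    """Theorem 5 (1)."""
    return inf_con <= sup_dec

def is_certain_rule(is_definite, sup_con, inf_dec):
    """Theorem 5 (2): PI(x, CON, DEC) must be a singleton and sup(CON) must be a subset of inf(DEC)."""
    return is_definite and sup_con <= inf_dec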

7.2 The Minimum and Maximum of Three Criterion Values

This section proposes the minimum and maximum of three criterion values for possible implications, and investigates an algorithm to calculate them.

Definition 12. For a NIS, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC) and DD(x, (ζ, η), CON ∪ DEC). Let minsup(τ) denote min_{ϕ ∈ DD(x, (ζ, η), CON ∪ DEC)} {support(τ) in ϕ}, and let maxsup(τ) denote max_{ϕ ∈ DD(x, (ζ, η), CON ∪ DEC)} {support(τ) in ϕ}. As for accuracy and coverage, minacc(τ), maxacc(τ), mincov(τ) and maxcov(τ) are similarly defined.

Let us suppose DIS_real ∈ DD(x, (ζ, η), CON ∪ DEC).


According to Definition 12, clearly minsup(τ) ≤ support(τ) in DIS_real ≤ maxsup(τ), minacc(τ) ≤ accuracy(τ) in DIS_real ≤ maxacc(τ) and mincov(τ) ≤ coverage(τ) in DIS_real ≤ maxcov(τ) hold. For calculating each value directly from the definition, it is necessary to examine every ϕ ∈ DD(x, (ζ, η), CON ∪ DEC), and this calculation depends upon |DD(x, (ζ, η), CON ∪ DEC)|. However, these minimum and maximum values can also be calculated by applying inf and sup information, again.

Theorem 7. For a NIS, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC). The following holds.
(1) minsup(τ) = |inf(x, ζ, CON) ∩ inf(x, η, DEC)| / |OB|.
(2) maxsup(τ) = |sup(x, ζ, CON) ∩ sup(x, η, DEC)| / |OB|.

Theorem 8. For a NIS, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC). Let INACC denote the set [sup(x, ζ, CON) − inf(x, ζ, CON)] ∩ sup(x, η, DEC), and let OUTACC denote the set [sup(x, ζ, CON) − inf(x, ζ, CON)] − inf(x, η, DEC). Then, the following holds.
(1) minacc(τ) = |inf(x, ζ, CON) ∩ inf(x, η, DEC)| / (|inf(x, ζ, CON)| + |OUTACC|).
(2) maxacc(τ) = (|inf(x, ζ, CON) ∩ sup(x, η, DEC)| + |INACC|) / (|inf(x, ζ, CON)| + |INACC|).

Proof of (1). According to Proposition 2, inf(x, ζ, CON) ⊆ [x]_CON ⊆ sup(x, ζ, CON) holds. Therefore, the denominator is of the form |inf(x, ζ, CON)| + |K1| (K1 ⊆ [sup(x, ζ, CON) − inf(x, ζ, CON)]). Since PI(y, CON, DEC)={τ} for any y ∈ inf(x, ζ, CON) ∩ inf(x, η, DEC), the numerator is of the form |inf(x, ζ, CON) ∩ inf(x, η, DEC)| + |K2| + |K3| (K2 ⊆ inf(x, ζ, CON) ∩ [sup(x, η, DEC) − inf(x, η, DEC)] and K3 ⊆ K1). Thus, accuracy(τ) is of the form (|inf(x, ζ, CON) ∩ inf(x, η, DEC)| + |K2| + |K3|) / (|inf(x, ζ, CON)| + |K1|). In order to produce minacc(τ), we exhibit a ϕ1 ∈ DD(x, (ζ, η), CON ∪ DEC) such that K2 = K3 = ∅ and |K1| is maximum. This is justified by the inequality b/(a + (k1 − k3)) ≤ (b + k2 + k3)/(a + k1) for any 0 ≤ b ≤ a (a ≠ 0), any 0 ≤ k3 ≤ k1 and any 0 ≤ k2. Since sup(x, ζ, CON) − inf(x, ζ, CON) is equal to the union of the disjoint sets ([sup(x, ζ, CON) − inf(x, ζ, CON)] − inf(x, η, DEC)) and ([sup(x, ζ, CON) − inf(x, ζ, CON)] ∩ inf(x, η, DEC)), let us consider these two disjoint sets. The first set is OUTACC. For any y ∈ OUTACC, there exists a possible implication τ*: [CON, ζ] ⇒ [DEC, η*] ∈ PI(y, CON, DEC) (η* ≠ η) by the definition of inf and sup. For any y ∈ [sup(x, ζ, CON) − inf(x, ζ, CON)] ∩ inf(x, η, DEC), PT(y, DEC)={η} holds, and there exists a possible implication τ**: [CON, ζ*] ⇒ [DEC, η] ∈ PI(y, CON, DEC) (ζ* ≠ ζ). In a ϕ1 ∈ DD(x, (ζ, η), CON ∪ DEC) with these τ* and τ**, the denominator is |inf(x, ζ, CON)| + |OUTACC| and the numerator is |inf(x, ζ, CON) ∩ inf(x, η, DEC)|.

Theorem 9. For a NIS, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC). Let INCOV denote the set [sup(x, η, DEC) − inf(x, η, DEC)] ∩ sup(x, ζ, CON), and let OUTCOV denote the set [sup(x, η, DEC) − inf(x, η, DEC)] − inf(x, ζ, CON). Then, the following holds.
(1) mincov(τ) = |inf(x, ζ, CON) ∩ inf(x, η, DEC)| / (|inf(x, η, DEC)| + |OUTCOV|).
(2) maxcov(τ) = (|sup(x, ζ, CON) ∩ inf(x, η, DEC)| + |INCOV|) / (|inf(x, η, DEC)| + |INCOV|).
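The bounds of Theorems 7-9 need only the four sets inf/sup for (x, ζ, CON) and (x, η, DEC); a direct transcription as a sketch:

def criterion_bounds(inf_con, sup_con, inf_dec, sup_dec, n_objects):
    """Min/max of support, accuracy and coverage per Theorems 7, 8 and 9."""
    minsup = len(inf_con & inf_dec) / n_objects
    maxsup = len(sup_con & sup_dec) / n_objects
    inacc  = (sup_con - inf_con) & sup_dec
    outacc = (sup_con - inf_con) - inf_dec
    minacc = len(inf_con & inf_dec) / (len(inf_con) + len(outacc))
    maxacc = (len(inf_con & sup_dec) + len(inacc)) / (len(inf_con) + len(inacc))
    incov  = (sup_dec - inf_dec) & sup_con
    outcov = (sup_dec - inf_dec) - inf_con
    mincov = len(inf_con & inf_dec) / (len(inf_dec) + len(outcov))
    maxcov = (len(sup_con & inf_dec) + len(incov)) / (len(inf_dec) + len(incov))
    return {'support': (minsup, maxsup),
            'accuracy': (minacc, maxacc),
            'coverage': (mincov, maxcov)}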

8 An Algorithm for Reduction of Attributes in NISs

This section gives an algorithm for reducing the condition attributes in rules.

Definition 13 (N-v. Reduction of Condition Attributes in Rules). Let us consider a certain rule τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC). We say K ∈ CON is certainly dispensable from τ in DIS_real, if τ′: [CON − {K}, ζ′] ⇒ [DEC, η] is a certain rule. We say K ∈ CON is possibly dispensable from τ in DIS_real, if τ′: [CON − {K}, ζ′] ⇒ [DEC, η] is a possible rule.

Let us consider a possible implication τ: [A, 3] ∧ [C, 3] ∧ [D, 2] ∧ [E, 5] ⇒ [F, 5] in NIS1. This τ is definite, and τ belongs to the D-GC class, i.e., τ is a certain rule. For ATR={A, C, E}, inf(1, (3, 3, 5), {A, C, E}) = inf(1, (3), {A}) ∩ inf(1, (3), {C}) ∩ inf(1, (5), {E}) = {1} ∩ {1, 5, 9} ∩ {1, 6} = {1} and sup(1, (3, 3, 5), {A, C, E}) = sup(1, (3), {A}) ∩ sup(1, (3), {C}) ∩ sup(1, (5), {E}) = {1, 5, 6, 10} ∩ {1, 5, 9} ∩ {1, 3, 4, 6, 7} = {1} hold according to Proposition 6. For ATR={F}, inf(1, (5), {F})={1, 3, 4} and sup(1, (5), {F})={1, 3, 4, 5, 8}. Because sup(1, (3, 3, 5), {A, C, E}) = {1} ⊆ {1, 3, 4} = inf(1, (5), {F}) holds, τ′: [A, 3] ∧ [C, 3] ∧ [E, 5] ⇒ [F, 5] is also a certain rule by Theorem 5. Thus, the attribute D is certainly dispensable from τ. In this way, it is possible to examine the reduction of attributes. In this case, too, the inf and sup information in Definition 5 is essential.

An important problem on reduction in DISs is to find minimal sets of condition attributes. Several works deal with reduction, i.e., with finding minimal reducts. In [3], this problem is proved to be NP-hard, which means that computing reducts is a non-trivial task. For solving this problem, a discernibility function is also proposed in [3], and this function has been extended to a discernibility function for incomplete information systems [21,22]. In [19], an algorithm for finding a minimal complex is presented. In NISs, it is also important to deal with this problem of minimal reducts.

Definition 14. For a NIS and disjoint CON, DEC ⊆ AT, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] which belongs to the D-GC, I-GC, D-MA or I-MA class. Furthermore, let Φ be the set {ϕ ∈ DD(x, (ζ, η), CON ∪ DEC) | x is consistent in ϕ}. If there is no proper subset CON* ⊂ CON such that {ϕ ∈ DD(x, (ζ, η), CON ∪ DEC) | x is consistent (in the relation from CON* to DEC) in ϕ} is equal to the set Φ, we say τ is minimal (in this class).

Problem 1. For a NIS, let DEC be decision attributes and let η be a tuple of decision attribute values for DEC. Then, find all minimal certain or minimal possible rules in the form [CON, ζ] ⇒ [DEC, η]. As additional information, calculate the minimum and maximum values of support, accuracy and coverage for every rule, too.

For solving Problem 1, we introduced a total order, defined by the significance of attributes, over (AT − DEC), and we consider rules based on this order. Under this assumption, we have realized a tool for solving Problem 1, which is shown in Appendix 4.


For example, let us suppose that {A, B, C, D, E} is an ordered set, and let [A, ζ_A] ∧ [B, ζ_B] ∧ [C, ζ_C] ∧ [D, ζ_D] ⇒ [F, η_F] and [B, ζ_B] ∧ [E, ζ_E] ⇒ [F, η_F] be certain rules. The latter seems simpler, but we choose the former rule according to the order of significance. In this case, each attribute Ai ∈ (AT − DEC) is sequentially picked up based on this order, and the necessity of the descriptor [Ai, ζ_{i,j}] is checked. Then, Proposition 6 and Theorem 5 are applied. Of course, the introduction of a total order over attributes is too strong a simplification of the problem. Therefore, in the next step, it is necessary to solve the problem of reduction in NISs without using any total order.
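A sketch of the per-descriptor dispensability test sketched above: drop one condition attribute, recompute inf/sup with Proposition 6, and re-apply Theorem 5. The dictionaries mapping each remaining condition attribute to the inf/sup set of its descriptor are an assumed representation for this sketch; the attribute order itself is supplied by the user, as in Appendix 4.

def certainly_dispensable(k, attr_inf, attr_sup, inf_dec, is_definite):
    """True if dropping attribute k from a certain rule still yields a certain rule
    (Definition 13 with Theorem 5); if the original rule is definite, the reduced
    implication is definite as well, so the original flag can be reused."""
    rest = [a for a in attr_sup if a != k]
    if not rest:
        return False
    sup_con = set.intersection(*(attr_sup[a] for a in rest))
    return is_definite and sup_con <= inf_dec

# In the example above, dropping D leaves sup(1, (3, 3, 5), {A, C, E}) = {1}, which is
# contained in inf(1, (5), {F}) = {1, 3, 4}, so D is certainly dispensable.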

9 Concluding Remarks

A framework of RNIA (rough non-deterministic information analysis) has been proposed, and an overview of algorithms has been presented. Throughout this paper, rough-set-based concepts in NISs and the application of either inf and sup information or equivalence relations have been studied. Especially, inf and sup in Definition 5 are key information for RNIA. This paper also presented some tool programs for RNIA. The authors are grateful to Professor J.W. Grzymala-Busse and the anonymous referees.

References

1. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht (1991)
2. Pawlak, Z.: New Look on Bayes' Theorem - The Rough Set Outlook. Bulletin of Int'l. Rough Set Society 5 (2001) 1–8
3. Komorowski, J., Pawlak, Z., Polkowski, L., Skowron, A.: Rough Sets: A Tutorial. Rough Fuzzy Hybridization. Springer (1999) 3–98
4. Nakamura, A., Tsumoto, S., Tanaka, H., Kobayashi, S.: Rough Set Theory and Its Applications. Journal of Japanese Society for AI 11 (1996) 209–215
5. Polkowski, L., Skowron, A. (eds.): Rough Sets in Knowledge Discovery 1. Studies in Fuzziness and Soft Computing, Vol. 18. Physica-Verlag (1998)
6. Polkowski, L., Skowron, A. (eds.): Rough Sets in Knowledge Discovery 2. Studies in Fuzziness and Soft Computing, Vol. 19. Physica-Verlag (1998)
7. Grzymala-Busse, J.: A New Version of the Rule Induction System LERS. Fundamenta Informaticae 31 (1997) 27–39
8. Ziarko, W.: Variable Precision Rough Set Model. Journal of Computer and System Sciences 46 (1993) 39–59
9. Tsumoto, S.: Knowledge Discovery in Clinical Databases and Evaluation of Discovered Knowledge in Outpatient Clinic. Information Sciences 124 (2000) 125–137
10. Zhong, N., Dong, J., Fujitsu, S., Ohsuga, S.: Soft Techniques to Rule Discovery in Data. Transactions of Information Processing Society of Japan 39 (1998) 2581–2592
11. Rough Set Software. Bulletin of Int'l. Rough Set Society 2 (1998) 15–46
12. Codd, E.: A Relational Model of Data for Large Shared Data Banks. Communications of the ACM 13 (1970) 377–387


13. Orlowska, E., Pawlak, Z.: Representation of Nondeterministic Information. Theoretical Computer Science 29 (1984) 27–39
14. Orlowska, E.: What You Always Wanted to Know about Rough Sets. Incomplete Information: Rough Set Analysis. Studies in Fuzziness and Soft Computing, Vol. 13. Physica-Verlag (1998) 1–20
15. Lipski, W.: On Semantic Issues Connected with Incomplete Information Databases. ACM Transactions on Database Systems 4 (1979) 262–296
16. Lipski, W.: On Databases with Incomplete Information. Journal of the ACM 28 (1981) 41–70
17. Nakamura, A.: A Rough Logic based on Incomplete Information and Its Application. Int'l. Journal of Approximate Reasoning 15 (1996) 367–378
18. Grzymala-Busse, J.: On the Unknown Attribute Values in Learning from Examples. Lecture Notes in AI, Vol. 542. Springer-Verlag (1991) 368–377
19. Grzymala-Busse, J., Werbrouck, P.: On the Best Search Method in the LEM1 and LEM2 Algorithms. Incomplete Information: Rough Set Analysis. Studies in Fuzziness and Soft Computing, Vol. 13. Physica-Verlag (1998) 75–91
20. Kryszkiewicz, M.: Properties of Incomplete Information Systems in the Framework of Rough Sets. Rough Sets in Knowledge Discovery 1. Studies in Fuzziness and Soft Computing, Vol. 18. Physica-Verlag (1998) 442–450
21. Kryszkiewicz, M.: Rough Set Approach to Incomplete Information Systems. Information Sciences 112 (1998) 39–49
22. Kryszkiewicz, M.: Rules in Incomplete Information Systems. Information Sciences 113 (1999) 271–292
23. Sakai, H.: Effective Procedures for Data Dependencies in Information Systems. Rough Set Theory and Granular Computing. Studies in Fuzziness and Soft Computing, Vol. 125. Springer (2003) 167–176
24. Sakai, H., Okuma, A.: An Algorithm for Finding Equivalence Relations from Tables with Non-deterministic Information. Lecture Notes in AI, Vol. 1711. Springer-Verlag (1999) 64–72
25. Sakai, H.: Effective Procedures for Handling Possible Equivalence Relations in Non-deterministic Information Systems. Fundamenta Informaticae 48 (2001) 343–362
26. Sakai, H., Okuma, A.: An Algorithm for Checking Dependencies of Attributes in a Table with Non-deterministic Information: A Rough Sets based Approach. Lecture Notes in AI, Vol. 1886. Springer-Verlag (2000) 219–229
27. Sakai, H.: A Framework of Rough Sets based Rule Generation in Non-deterministic Information Systems. Lecture Notes in AI, Vol. 2871. Springer-Verlag (2003) 143–151

Appendixes

Throughout the appendixes, every input to the Unix system and to the programs is underlined. Furthermore, every attribute is identified with its ordinal number. For example, attributes A and C are identified with 1 and 3, respectively. These tool programs are implemented on a workstation with a 450 MHz UltraSPARC CPU.


Appendix 1.

% more nis1.pl ... (A1-1)
object(10,6).
data(1,[3,[1,3,4],3,2,5,5]).
data(2,[[2,4],2,2,[3,4],[1,3,4],4]).
data(3,[[1,2],[2,4,5],2,3,[4,5],5]).
data(4,[[1,5],5,[2,4],2,[1,4,5],5]).
data(5,[[3,4],4,3,[1,2,3],1,[2,5]]).
data(6,[[3,5],4,1,[2,3,5],5,[2,3,4]]).
data(7,[[1,5],4,5,[1,4],[3,5],1]).
data(8,[4,[2,4,5],2,[1,2,3],2,[1,2,5]]).
data(9,[2,5,3,5,4,2]).
data(10,[[2,3,5],1,2,3,1,[1,2,3]]).
% more attrib1.pl ... (A1-2)
condition([1,2,3]).
decision([6]).
% prolog ... (A1-3)
K-Prolog Compiler version 4.11 (C).
?-consult(define.pl).
yes
?-translate1. ... (A1-4)
Data File Name: 'nis1.pl'.
Attribute File Name: 'attrib1.pl'.
EXEC TIME=0.073(sec)
yes
?-class(con,[4,5,6]). ... (A1-5)
[1] Pe-classes: [4],[5],[6]
Positive Selection
Tuple from 4: [1,5,2] *
Tuple from 5: [3,4,3] *
Tuple from 6: [3,4,1] *
Negative Selection
Tuple from 1: [3,4,3] *
Tuple from 3: [1,5,2] *
[2] Pe-classes: [4],[5],[6]
 : : :
[16] Pe-classes: [4],[5],[6]
Positive Selection
Tuple from 4: [5,5,4] *
Tuple from 5: [4,4,3] *
Tuple from 6: [5,4,1] *
Negative Selection
Certainly Definable
EXEC TIME=0.058(sec)
yes



In (A1-1), the data of NIS1 are displayed. In (A1-2), the condition attributes {A, B, C} and the decision attributes {F} are displayed. In (A1-3), the Prolog interpreter is invoked. In (A1-4), inf and sup information is produced according to the attribute file. In (A1-5), the definability of the set {4, 5, 6} for ATR={A, B, C} is examined. In the first response, the tuples from 4, 5 and 6 are fixed to (1,5,2), (3,4,3) and (3,4,1). At the same time, the tuples (3,4,3) from object 1 and (1,5,2) from object 3 are implicitly rejected. There are 16 responses, and the set {4, 5, 6} is proved to be certainly definable.

Appendix 2.

?-translate2. ... (A2-1)
Data File Name: 'nis1.pl'.
EXEC TIME=0.189(sec)
yes
?-pe. ... (A2-2)
[1] Derived DISs: 192 Distinct Pe-relations: 176
[2] Derived DISs: 27 Distinct Pe-relations: 27
 : : :
[6] Derived DISs: 54 Distinct Pe-relations: 54
EXEC TIME=1.413(sec)
yes
% more 3.rs ... (A2-3)
object(10).
attrib(3).
cond(1,3,1,3). pos(1,3,1).
cond(2,3,1,2). pos(2,3,1).
 : : :
inf([7,3,1],[7,3,1],[[7],[1]]).
sup([7,3,1],[7,3,1],[[7],[1]]).
% more 3.pe ... (A2-4)
10 1 3 2 2 1 2 2 2 1 6 7 2 1 2 5 3 4 8 9 0 0 10 0 0 1 1 2 2 4 1 6 7 2 1 2 5 3 8 0 9 0 0 10 0 0 1
% merge ... (A2-5)
EXEC TIME=0.580(sec)
% more 12345.pe ... (A2-6)
10 5 1 2 3 4 5 40310784 2 1 2 3 4 5 6 7 8 9 10 0 0 0 0 0 0 0 0 0 0 40030848 1 2 2 4 5 6 7 8 9 10 0 3 0 0 0 0 0 0 0 0 279936

In (A2-1), inf and sup information is produced for each attribute. In (A2-2), the definability of the set OB is examined for each attribute. As a side effect, every pe-relation is obtained.


Table 6. Definitions of NISs

NIS    |OB|   |AT|   Derived DISs
NIS2   30     5      7558272 (= 2^7 × 3^10)
NIS3   50     5      120932352 (= 2^11 × 3^10)
NIS4   100    5      1451188224 (= 2^13 × 3^11)

In (A2-3), the inf and sup information for the attribute C is displayed, and the contents of pe_rel({C}) are displayed in (A2-4). In (A2-5), the program merge is invoked for merging the five pe-relation sets pe_rel({A}), pe_rel({B}), ..., pe_rel({E}). The produced pe_rel({A, B, C, D, E}) is displayed in (A2-6). Here, we show the execution times for the following NISs, which were automatically produced by means of a random number program.

Appendix 3.

% depend ... (A3-1)
File Name for Condition: 1.pe
File Name for Decision: 6.pe
CRITERION 1
Derived DISs: 10368
Derived Consistent DISs: 0
Degree of Consistent DISs: 0.000
CRITERION 2
Minimum Degree of Dependency: 0.000
Maximum Degree of Dependency: 0.600
EXEC TIME=0.030(sec)
% depratio ... (A3-2)
File Name for Condition: 12345.pe
File Name for Decision: 6.pe
CRITERION 1
Derived DISs: 2176782336
Derived Consistent DISs: 2161665792
Degree of Consistent DISs: 0.993
CRITERION 2
Minimum Degree of Dependency: 0.800
Maximum Degree of Dependency: 1.000
Consistency Ratio
Object 1: 1.000(=2176782336/2176782336)
Object 2: 0.993(=2161665792/2176782336)
Object 3: 0.993(=2161665792/2176782336)
 : : :
Object 10: 1.000(=2176782336/2176782336)
EXEC TIME=0.020(sec)

In (A3-1), the dependency from {A} to {F} is examined. Here, the two sets of pe-relations pe_rel({A}) and pe_rel({F}) are applied. There is no consistent derived DIS. Furthermore, the maximum value of the dependency is 0.6. Therefore, it will be difficult to recognize a dependency from {A} to {F}.


Table 7. Each execution time (sec) of translate2, pe and merge for {A, B, C}. N1 denotes the number of derived DISs, and N2 denotes the number of distinct pe-relations

NIS    translate2   pe       merge   N1      N2
NIS2   0.308        1.415    0.690   5832    120
NIS3   0.548        8.157    0.110   5184    2
NIS4   1.032        16.950   2.270   20736   8

Table 8. Each execution time (sec) of depend and depratio from {A, B, C} to {E}. N3 denotes the number of derived DISs for {A, B, C, E}, and N4 denotes the number of combined pairs pe_i ∈ pe_rel({A, B, C}) and pe_j ∈ pe_rel({E})

NIS    depend   depratio   N3        N4
NIS2   0.020    0.080      104976    2160
NIS3   0.010    0.060      279936    108
NIS4   0.070    0.130      4478976   1728

In (A3-2), the dependency from {A, B, C, D, E} to {F} is examined. This time, it will be possible to recognize the dependency from {A, B, C, D, E} to {F}.

Appendix 4.

% more attrib2.pl ... (A4-1)
decision([6]).
decval([5]).
order([1,2,3,4,5]).
?-translate3. ... (A4-2)
Data File Name: 'nis1.pl'.
Attribute File Name: 'attrib2.pl'.
EXEC TIME=0.066(sec)
yes
?-certain. ... (A4-3)
DECLIST:
[A Certain Rule from Object 1]
[1,3]&[3,3]&[5,5]=>[6,5] [746496/746496,DGC] [(0.1,0.1),(1.0,1.0),(0.2,0.333)]
[A Certain Rule from Object 3]
[A Certain Rule from Object 4]
EXEC TIME=0.026(sec)
yes
?-possible. ... (A4-4)
DECLIST:
[Possible Rules from Object 1]
=== One Attribute ===
[1,3]=>[6,5] [10368/10368,DMA] [(0.1,0.2),(0.25,1.0),(0.2,0.5)]


[2,3]=>[6,5] [486/1458,IGC] [(0.1,0.1),(1.0,1.0),(0.2,0.333)]
[4,2]=>[6,5] [5832/5832,DMA] [(0.2,0.4),(0.4,1.0),(0.4,0.8)]
[Possible Rules from Object 3]
=== One Attribute ===
[1,1]=>[6,5] [5184/10368,IMA]
 : : :
[Possible Rules from Object 8]
=== One Attribute ===
[1,4]=>[6,5] [3456/10368,IMA]
 : : :
[(0.3,0.4),(0.6,1.0),(0.6,0.8)]
[5,2]=>[6,5] [648/1944,IGC] [(0.1,0.1),(1.0,1.0),(0.2,0.25)]
EXEC TIME=0.118(sec)
yes

In order to handle rules, it is necessary to prepare a file like the one in (A4-1). In (A4-2), inf and sup information is produced according to attrib2.pl. The program certain extracts possible implications belonging to the D-GC class in (A4-3). As additional information, minsup, maxsup, minacc, maxacc, mincov and maxcov are sequentially displayed. The program possible extracts possible implications belonging to the I-GC or MA classes in (A4-4). Table 9 shows the execution times for three NISs. Here, the order is sequentially A, B, C and D for the decision attribute {E}, and the decision attribute value is 1. The execution time depends upon the number of objects x such that 1 ∈ PT(x, {E}).

Table 9. Each execution time (sec) of translate3, possible and certain. N5 denotes the number of objects x such that 1 ∈ PT(x, {E})

NIS    translate3   possible   certain   N5
NIS2   0.178        0.115      0.054     7
NIS3   0.173        0.086      0.039     4
NIS4   0.612        0.599      0.391     9

A Partition Model of Granular Computing

Yiyu Yao

Department of Computer Science, University of Regina
Regina, Saskatchewan, Canada S4S 0A2
[email protected]
http://www.cs.uregina.ca/~yyao

Abstract. There are two objectives of this chapter. One objective is to examine the basic principles and issues of granular computing. We focus on the tasks of granulation and computing with granules. From semantic and algorithmic perspectives, we study the construction, interpretation, and representation of granules, as well as principles and operations of computing and reasoning with granules. The other objective is to study a partition model of granular computing in a set-theoretic setting. The model is based on the assumption that a finite universe is granulated through a family of pairwise disjoint subsets. A hierarchy of granulations is modeled by the notion of the partition lattice. The model is developed by combining, reformulating, and reinterpreting notions and results from several related fields, including theories of granularity, abstraction and generalization (artificial intelligence), partition models of databases, coarsening and refining operations (evidential theory), set approximations (rough set theory), and the quotient space theory for problem solving.

1 Introduction

The basic ideas of granular computing, i.e., problem solving with different granularities, have been explored in many fields, such as artificial intelligence, interval analysis, quantization, rough set theory, Dempster-Shafer theory of belief functions, divide and conquer, cluster analysis, machine learning, databases, and many others [73]. There is a renewed and fast growing interest in granular computing [21, 30, 32, 33, 41, 43, 48, 50, 51, 58, 60, 70, 77]. The term "granular computing (GrC)" was first suggested by T.Y. Lin [74]. Although it may be difficult to have a precise and uncontroversial definition, we can describe granular computing from several perspectives.

We may define granular computing by examining its major components and topics. Granular computing is a label for theories, methodologies, techniques, and tools that make use of granules, i.e., groups, classes, or clusters of a universe, in the process of problem solving [60]. That is, granular computing is used as an umbrella term to cover those topics that have been studied in various fields in isolation. By examining existing studies in a unified framework of granular computing and extracting their commonalities, one may be able to develop a general theory for problem solving.


Alternatively, we may define granular computing by identifying its unique way of problem solving. Granular computing is a way of thinking that relies on our ability to perceive the real world under various grain sizes, to abstract and consider only those things that serve our present interest, and to switch among different granularities. By focusing on different levels of granularity, one can obtain various levels of knowledge, as well as the inherent knowledge structure. Granular computing is essential to human problem solving, and hence has a very significant impact on the design and implementation of intelligent systems.

The ideas of granular computing have been investigated in artificial intelligence through the notions of granularity and abstraction. Hobbs proposed a theory of granularity based on the observation that "[w]e look at the world under various grain sizes and abstract from it only those things that serve our present interests" [18]. Furthermore, "[o]ur ability to conceptualize the world at different granularities and to switch among these granularities is fundamental to our intelligence and flexibility. It enables us to map the complexities of the world around us into simpler theories that are computationally tractable to reason in" [18]. Giunchiglia and Walsh proposed a theory of abstraction [14]. Abstraction can be thought of as "the process which allows people to consider what is relevant and to forget a lot of irrelevant details which would get in the way of what they are trying to do". They showed that the theory of abstraction captures and generalizes most previous work in the area. The notions of granularity and abstraction are used in many subfields of artificial intelligence. The granulation of time and space leads naturally to temporal and spatial granularities. They play an important role in temporal and spatial reasoning [3, 4, 12, 19, 54]. Based on granularity and abstraction, many authors studied fundamental topics of artificial intelligence, such as, for example, knowledge representation [14, 75], theorem proving [14], search [75, 76], planning [24], natural language understanding [35], intelligent tutoring systems [36], machine learning [44], and data mining [16].

Granular computing has recently received much attention from the computational intelligence community. The topic of fuzzy information granulation was first proposed and discussed by Zadeh in 1979 and further developed in the paper published in 1997 [71, 73]. In particular, Zadeh proposed a general framework of granular computing based on fuzzy set theory [73]. Granules are constructed and defined based on the concept of generalized constraints. Relationships between granules are represented in terms of fuzzy graphs or fuzzy if-then rules. The associated computation method is known as computing with words (CW) [72]. Although the formulation is different from the studies in artificial intelligence, the motivations and basic ideas are the same. Zadeh identified three basic concepts that underlie human cognition, namely, granulation, organization, and causation [73]. "Granulation involves decomposition of whole into parts, organization involves integration of parts into whole, and causation involves association of causes and effects." [73] Yager and Filev argued that "human beings have developed a granular view of the world" and ". . . objects with which mankind perceives, measures, conceptualizes and reasons are granular" [58].


Therefore, as pointed out by Zadeh, "[t]he theory of fuzzy information granulation (TFIG) is inspired by the ways in which humans granulate information and reason with it." [73]

The necessity of information granulation and the simplicity derived from information granulation in problem solving are perhaps some of the practical reasons for the popularity of granular computing. In many situations, when a problem involves incomplete, uncertain, or vague information, it may be difficult to differentiate distinct elements, and one is forced to consider granules [38-40]. In some situations, although detailed information may be available, it may be sufficient to use granules in order to have an efficient and practical solution. In fact, very precise solutions may not be required at all for many practical problems. It may also happen that the acquisition of precise information is too costly, and coarse-grained information reduces cost [73]. These observations suggest a basic guiding principle of fuzzy logic: "Exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality" [73]. This principle offers a more practical philosophy for real-world problem solving. Instead of searching for the optimal solution, one may search for good approximate solutions. One only needs to examine the problem at a finer granulation level, with more detailed information, when there is a need or benefit for doing so [60]. The popularity of granular computing is also due to the theory of rough sets [38, 39]. As a concrete theory of granular computing, the rough set model enables us to precisely define and analyze many notions of granular computing. The results provide an in-depth understanding of granular computing.

The objectives of this chapter are two-fold, based on investigations at two levels. Sections 2 and 3 focus on a high-level, abstract development of granular computing, and Section 4 deals with a low-level, concrete development by concentrating on a partition model of granular computing. The main results are summarized as follows. In Section 2, we discuss in general terms the basic principles and issues of granular computing based on related studies, such as the theory of granularity, the theory of abstraction, and their applications. The tasks of granulation and computing with granules are examined and related to existing studies. We study the construction, interpretation, and representation of granules, as well as principles and operations of computing and reasoning with granules. In Section 3, we argue that granular computing is a way of thinking. This way of thinking is demonstrated in three problem solving domains, i.e., concept formation, top-down programming, and top-down theorem proving. In Section 4, we study a partition model of granular computing in a set-theoretic setting. The model is based on the assumption that a finite universe is granulated through a family of pairwise disjoint subsets. A hierarchy of granulations is modeled by the notion of the partition lattice. Results from rough sets [38], quotient space theory [75, 76], belief functions [46], databases [27], data mining [31, 34], and power algebra [6] are reformulated, re-interpreted, refined, extended and combined for granular computing.


We introduce two basic operations, called the zooming-in and zooming-out operators. Zooming-in allows us to expand an element of the quotient universe into a subset of the universe, and hence reveals more detailed information. Zooming-out allows us to move to the quotient universe by ignoring some details. Computations in the two universes can be connected through the zooming operations.
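As a minimal set-theoretic sketch of the intuition stated here (the precise definitions of the two operators are developed in Section 4; the functions below only illustrate the idea for a partition represented as a collection of blocks):

def zoom_in(block):
    """Expand an element of the quotient universe (a block) into the subset of the universe it denotes."""
    return set(block)

def zoom_out(x, partition):
    """Map an element of the universe to its block, i.e., to an element of the quotient universe."""
    return next(b for b in partition if x in b)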

2 Basic Issues of Granular Computing

Granular computing may be studied based on two related issues, namely granulation and computation [60]. The former deals with the construction, interpretation, and representation of granules, and the latter deals with computing and reasoning with granules. They can be further divided with respect to algorithmic and semantic aspects [60]. The algorithmic study concerns the procedures for constructing granules and the related computation, and the semantic study concerns the interpretation and physical meaningfulness of various algorithms. Studies of both aspects are necessary and important. The results from the semantic study may provide not only interpretations and justifications for a particular granular computing model, but also guidelines that prevent possible misuses of the model. The results from the algorithmic study may lead to efficient and effective granular computing methods and tools.

2.1 Granulations

Granulation of a universe involves dividing the universe into subsets or grouping individual objects into clusters. A granule may be viewed as a subset of the universe, which may be either fuzzy or crisp. A family of granules containing every object in the universe is called a granulation of the universe, which provides a coarse-grained view of the universe. A granulation may consist of a family of either disjoint or overlapping granules. There are many granulations of the same universe. Diﬀerent views of the universe can be linked together, and a hierarchy of granulations can be established. The notion of granulation can be studied in many diﬀerent contexts. The granulation of the universe, particularly the semantics of granulation, is domain and application dependent. Nevertheless, one can still identify some domain independent issues [75]. Some of such issues are described in more detail below. Granulation Criteria. A granulation criterion deals with the semantic interpretation of granules and addresses the question of why two objects are put into the same granule. It is domain speciﬁc and relies on the available knowledge. In many situations, objects are usually grouped together based on their relationships, such as indistinguishability, similarity, proximity, or functionality [73]. One needs to build models to provide both semantical and operational interpretations of these notions. A model enables us to deﬁne formally and precisely various notions involved, and to study systematically the meanings and rationalities of granulation criteria.


Granulation Structures. It is necessary to study the granulation structures derivable from various granulations of the universe. Two structures can be observed: the structure of individual granules and the structure of a granulation. Consider the case of crisp granulation. One can immediately define an order relation between granules based on set inclusion. In general, a large granule may contain small granules, and small granules may be combined to form a large granule. The order relation can be extended to different granulations. This leads to multi-level granulations in a natural hierarchical structure. Various hierarchical granulation structures have been studied by many authors [22, 36, 54, 75, 76].

Granulation Methods. From the algorithmic aspect, a granulation method addresses the problem of how to put two objects into the same granule. It is necessary to develop algorithms for constructing granules and granulations efficiently based on a granulation criterion. The construction process can be modeled as either top-down or bottom-up. In a top-down process, the universe is decomposed into a family of subsets, and each subset can be further decomposed into smaller subsets. In a bottom-up process, a subset of objects can be grouped into a granule, and smaller granules can be further grouped into larger granules. Both processes lead naturally to a hierarchical organization of granules and granulations [22, 61].

Representation/Description of Granules. Another semantics-related issue is the interpretation of the results of a granulation method. Once constructed, it is necessary to describe, name and label granules using certain languages. This can be done in several ways. One may assign a name to a granule such that an element in the granule is an instance of the named category, as is done in classification [22]. One may also construct a certain type of center as the representative of a granule, as is done in information retrieval [45, 56]. Alternatively, one may select some typical objects from a granule as its representatives. For example, in many search engines, the search results are clustered into granules, and a few titles and some terms can be used as the description of a granule [8, 17].

Quantitative Characteristics of Granules and Granulations. One can associate quantitative measures with granules and granulations to capture their features. Consider again the case of crisp granulation. The cardinality of a granule, or the Hartley information measure, can be used as a measure of the size or uncertainty of a granule [64]. The Shannon entropy measure can be used as a measure of the granularity of a partition [64].

These issues can be understood by examining a concrete example of granulation known as cluster analysis [2]. This can be done by simply changing granulation into clustering and granules into clusters. Clustering structures may be hierarchical or non-hierarchical, exclusive or overlapping. Typically, a similarity or distance function is used to define the relationships between objects. Clustering criteria may be defined based on the similarity or distance function and the required cluster structures. For example, one would expect strong similarities between objects in the same cluster, and weak similarities between objects in different clusters. Many clustering methods have been proposed and studied, including the families of hierarchical agglomerative, hierarchical divisive, iterative partitioning, density search, factor analytic, clumping, and graph theoretic methods [1]. Cluster analysis can be used as an exploratory tool to interpret data and find regularities in data [2]. This requires the active participation of experts to interpret the results of clustering methods and judge their significance. A good representation of clusters and their quantitative characterizations may make the task of exploration much easier.
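For the two quantitative measures mentioned above, the usual textbook definitions can be written down directly; a small sketch (normalizations may differ from those used in [64]):

import math

def hartley_measure(granule):
    """Hartley measure of a granule: log2 of its cardinality."""
    return math.log2(len(granule))

def partition_entropy(partition):
    """Shannon entropy of a partition of a finite universe, given as a list of blocks."""
    n = sum(len(b) for b in partition)
    return -sum((len(b) / n) * math.log2(len(b) / n) for b in partition)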


in diﬀerent clusters. Many clustering methods have been proposed and studied, including the families of hierarchical agglomerative, hierarchical divisive, iterative partitioning, density search, factor analytic, clumping, and graph theoretic methods [1]. Cluster analysis can be used as an exploratory tool to interpret data and ﬁnd regularities from data [2]. This requires the active participation of experts to interpret the results of clustering methods and judge their signiﬁcance. A good representation of clusters and their quantitative characterizations may make the task of exploration much easier. 2.2

Computing and Reasoning with Granules

Computing and reasoning with granules depend on the previously discussed notion of granulations. They can similarly be studied from both the semantic and algorithmic perspectives. One needs to design and interpret various methods based on the interpretation of granules and the relationships between granules, as well as to define and interpret the operations of granular computing.

The two-level structure, the granule level and the granulation level, provides inherent relationships that can be explored in problem solving. The granulated view summarizes the available information and knowledge about the universe. As a basic task of granular computing, one can examine and explore further relationships between granules at a lower level, and relationships between granulations at a higher level. The relationships include closeness, dependency, and association of granules and granulations [43]. Such relationships may not hold fully, and certain measures can be employed to quantify the degree to which the relationships hold [64]. This makes it possible to extract, analyze and organize information and knowledge through relationships between granules and between granulations [62, 63]. The problem of computing and reasoning with granules is domain and application dependent. Some general domain-independent principles and issues are listed below.

Mappings between Different Levels of Granulations. In the granulation hierarchy, the connections between different levels of granulations can be described by mappings. Giunchiglia and Walsh view an abstraction as a mapping between a pair of formal systems in the development of a theory of abstraction [14]. One system is referred to as the ground space, and the other system is referred to as the abstract space. At each level of granulation, a problem is represented with respect to the granularity of the level. The mapping links different representations of the same problem at different levels of detail. In general, one can classify and study different types of granulations by focusing on the properties of the mappings [14].

Granularity Conversion. A basic task of granular computing is to change views with respect to different levels of granularity. As we move from one level of detail to another, we need to convert the representation of a problem accordingly [12, 14].


A move to a more detailed view may reveal information that otherwise cannot be seen, and a move to a simpler view can improve the high-level understanding by omitting irrelevant details of the problem [12, 14, 18, 19, 73, 75, 76]. The change between grain-sized views may be metaphorically stated as the change between the forest and the trees.

Property Preservation. Granulation allows different representations of the same problem at different levels of detail. It is naturally expected that the same problem must be consistently represented [12]. A granulation and its related computing methods are meaningful only if they preserve certain desired properties [14, 30, 75]. For example, Zhang and Zhang studied the "false-preserving" property, which states that if a coarse-grained space has no solution for a problem then the original fine-grained space has no solution [75, 76]. Such a property can be exploited to improve the efficiency of problem solving: a negative answer in the coarse-grained space eliminates the need for a more detailed study in the fine-grained space. One may require that the structure of a solution in a coarse-grained space is similar to the solution in a fine-grained space. Such a property is used in top-down problem solving techniques. More specifically, one starts with a sketched solution and successively refines it into a full solution. In the context of hierarchical planning, one may impose similar properties, such as the upward solution property, the downward solution property, the monotonicity property, etc. [24].

Operators. The relationships between granules at different levels and the conversion of granularity can be precisely defined by operators [12, 36]. They serve as the basic building blocks of granular computing. There are at least two types of operators that can be defined. One type deals with the shift from a fine granularity to a coarse granularity. A characteristic of such an operator is that it discards certain details, which makes distinct objects no longer differentiable. Depending on the context, many interpretations and definitions are available, such as abstraction, simplification, generalization, coarsening, zooming-out, etc. [14, 18, 19, 36, 46, 66, 75]. The other type deals with the change from a coarse granularity to a fine granularity. A characteristic of such an operator is that it provides more details, so that a group of objects can be further classified. They can be defined and interpreted differently, such as articulation, specification, expanding, refining, zooming-in, etc. [14, 18, 19, 36, 46, 66, 75]. Other types of operators may also be defined. For example, under a granulation, one may not be able to exactly characterize an arbitrary subset of a fine-grained universe in a coarse-grained universe. This leads to the introduction of approximation operators in rough set theory [39, 59].

The notion of granulation describes our ability to perceive the real world under various grain sizes, and to abstract and consider only those things that serve our present interest. Granular computing methods describe our ability to switch among different granularities in problem solving. Detailed and domain-specific methods can be developed by elaborating these issues with explicit reference to an application. For example, concrete domain-specific conversion methods and operators can be defined. In spite of the differences between various methods, they are all governed by the same underlying principles of granular computing.

3 Granular Computing as a Way of Thinking

The underlying ideas of granular computing have been used either explicitly or implicitly for solving a wide diversity of problems. Their effectiveness and merits may be difficult to study and analyze on the basis of formal proofs. They may instead be judged using the powerful and yet imprecise and subjective tools of our experience, intuition, reflections and observations [28]. As pointed out by Leron [28], a good way of activating these tools is to carry out some case studies. For this purpose, the general ideas, principles, and methodologies of granular computing are further examined with respect to several different fields in the rest of this section. It should be noted that analytical and experimental results on the effectiveness of granular computing in specific domains, though they will not be discussed in this chapter, are available [20, 24, 75].

3.1 Concept Formation

From a philosophical point of view, granular computing can be understood as a way of thinking in terms of the notion of concepts that underlies human knowledge. Every concept is understood as a unit of thought consisting of two parts, the intension and the extension of the concept [9, 52, 53, 55, 57]. The intension (comprehension) of a concept consists of all properties or attributes that are valid for all those objects to which the concept applies. The extension of a concept is the set of objects or entities which are instances of the concept. All objects in the extension have the same properties that characterize the concept. In other words, the intension of a concept is an abstract description of common features or properties shared by elements in the extension, and the extension consists of concrete examples of the concept. A concept is thus described jointly by its intension and extension. This formulation enables us to study concepts in a logic setting in terms of intensions and also in a set-theoretic setting in terms of extensions. The descriptions of granules characterize concepts from the intension point of view, while granules themselves characterize concepts from the extension point of view. Through the connections between extensions of concepts, one may establish relationships between concepts [62, 63]. In characterizing human knowledge, one needs to consider two topics, namely, context and hierarchy [42, 47]. Knowledge is contextual and hierarchical. A context in which concepts are formed provides a meaningful interpretation of the concepts. Knowledge is organized in a tower or a partial ordering. The base-level, or first-level, concepts are the most fundamental concepts, and higher-level concepts depend on lower-level concepts. To some extent, granulation and the inherent hierarchical granulation structures naturally reflect the way in which human knowledge is organized. The construction, interpretation, and description of granules and granulations are of fundamental importance in the understanding, representation, organization and synthesis of data, information, and knowledge.

3.2 Top-Down Programming

Top-down programming is an effective technique for dealing with the complexity of programming, based on the notions of structured programming and stepwise refinement [26]. The principles and characteristics of top-down design and stepwise refinement, as discussed by Ledgard, Gueras and Nagin [26], provide a convincing demonstration that granular computing is a way of thinking. According to Ledgard, Gueras and Nagin [26], the top-down programming approach has the following characteristics:

Design in Levels. A level consists of a set of modules. At higher levels, only a brief description of a module is provided. The details of the module are to be refined, divided into smaller modules, and developed in lower levels.

Initial Language Independence. The high-level representations at initial levels focus on expressions that are relevant to the problem solution, without explicit reference to machine- and language-dependent features.

Postponement of Details to Lower Levels. The initial levels concern critical broad issues and the structure of the problem solution. Details such as the choice of specific algorithms and data structures are postponed to lower, implementation levels.

Formalization of Each Level. Before proceeding to a lower level, one needs to obtain a formal and precise description of the current level. This ensures a full understanding of the structure of the current sketched solution.

Verification of Each Level. The sketched solution at each level must be verified, so that errors pertinent to the current level are detected.

Successive Refinements. Top-down programming is a successive refinement process. Starting from the top level, each level is refined, formalized, and verified until one obtains a full program.

In terms of granular computing, program modules correspond to granules, and the levels of top-down programming correspond to different granularities. One can immediately see that these characteristics also hold for granular computing in general.

3.3 Top-Down Theorem Proving

Another demonstration of granular computing as a way of thinking is the approach of top-down theorem proving, which is used by computer systems and human experts alike. The PROLOG interpreter basically employs a top-down, depth-first search strategy to solve problems through theorem proving [5]. It has also been suggested that the top-down approach is effective for developing, communicating and writing mathematical proofs [13, 14, 25, 28]. PROLOG is a logic programming language widely used in artificial intelligence. It is based on first-order predicate logic and models problem solving as theorem proving [5].
A PROLOG program consists of a set of facts and rules in the form of Horn clauses that describe the objects and relations in a problem domain. The PROLOG interpreter answers a query by finding out whether the query is a logical consequence of the facts and rules of the program. The inference is performed in a top-down, left-to-right, depth-first manner. A query is a sequence of one or more goals. At the top level, the leftmost goal is reduced to a sequence of subgoals by using a clause whose head unifies with that goal. The PROLOG interpreter then continues by trying to reduce the leftmost goal of the new sequence of goals. Eventually the leftmost goal is satisfied by a fact, and the second leftmost goal is tried in the same manner. Backtracking is used when the interpreter fails to find a unification that solves a goal, so that other clauses can be tried. A proof found by the PROLOG interpreter can be expressed naturally in a hierarchical structure, with the proofs of subgoals as the children of a goal. In the process of reducing a goal to a sequence of subgoals, one obtains more details of the proof. The strategy can be applied to general theorem proving. This may be carried out by abstracting the goal, proving its abstracted version, and then using the structure of the resulting proof to help construct the proof of the original goal [14].

By observing the systematic way of top-down programming, some authors suggest that a similar approach can be used in developing, teaching and communicating mathematical proofs [13, 28]. Leron proposed a structured method for presenting mathematical proofs [28]. The main objective is to increase the comprehensibility of mathematical presentations and, at the same time, retain their rigor. The traditional linear fashion presents a proof step by step from hypotheses to conclusion. In contrast, the structured method arranges the proof in levels and proceeds in a top-down manner. Like the top-down, stepwise refinement programming approach, a level consists of short autonomous modules, each embodying one major idea of the proof to be further developed in subsequent levels. The top level is a very general description of the main line of the proof. The second level elaborates on the generalities of the top level by supplying proofs of unsubstantiated statements, details of general descriptions, and so on. For more complicated tasks, the second level only gives brief descriptions and the details are postponed to lower levels. The process continues by supplying more details of the higher levels until a complete proof is reached. Such a proof-development procedure is similar to the strategy used by the PROLOG interpreter. A complicated proof task is successively divided into smaller and easier subtasks. The inherent structure of those tasks not only improves the comprehensibility of the proof, but also increases our understanding of the problem.

Lamport proposed a proof style, a refinement of natural deduction, for developing and writing structured proofs [25]. It is also based on hierarchical structuring, and divides proofs into levels. By using a numbering scheme to label the various parts of a proof, one can explicitly show the structure of the proof. Furthermore, such a structure can be conveniently expressed using a computer-based hypertext system.
One can concentrate on a particular level in the structure and suppress lower-level details. In principle, the top-down design and stepwise refinement strategy of programming can be applied in developing proofs to eliminate possible errors.
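The goal-reduction strategy discussed in this subsection can be made concrete with a small sketch. The following Python fragment is only an illustration of top-down, left-to-right, depth-first goal reduction over ground Horn clauses; it deliberately omits unification, variables, and backtracking over alternative clauses, and all predicate names and clauses are invented for the example.

```python
# Facts and rules as ground Horn clauses: head -> list of body goals (empty body = fact).
rules = {
    "ancestor(a,c)": ["parent(a,b)", "parent(b,c)"],
    "parent(a,b)": [],
    "parent(b,c)": [],
}

def solve(goals, depth=0):
    """Top-down, left-to-right, depth-first goal reduction, PROLOG style."""
    if not goals:
        return True
    first, rest = goals[0], goals[1:]
    print("  " * depth + "trying " + first)     # indentation shows the proof hierarchy
    body = rules.get(first)
    if body is None:                            # no clause whose head matches the goal
        return False
    # Reduce the leftmost goal to the subgoals of the matching clause.
    return solve(body + rest, depth + 1)

print(solve(["ancestor(a,c)"]))                 # True
```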

3.4 Granular Computing Approach to Problem Solving

In their book on research methods, Graziano and Raulin make a clear separation between research process and content [11]. They state, "... the basic processes and the systematic way of studying problems are common elements of science, regardless of each discipline's particular subject matter. It is the process and not the content that distinguishes science from other ways of knowing, and it is the content – the particular phenomena and fact of interest – that distinguishes one scientific discipline from another." [11] From the discussion of the previous examples, we can make a similar separation between the granular computing process and its content (i.e., the domains of application). The systematic way of granular computing is generally applicable to different domains, and can be studied on the basis of the issues and principles discussed in the last section. In general, the granular computing approach can be divided into top-down and bottom-up modes. They represent two directions of switching between levels of granularity. Concept formation can be viewed as a combination of the two: one can combine specific concepts to produce a general concept in a bottom-up manner, and divide a concept into more specific subconcepts in a top-down manner. Top-down programming and top-down theorem proving are typical examples of top-down approaches. Independent of the mode, stepwise (successive) refinement plays an important role. One needs to fully understand all notions of a particular level before moving up or down to another level. From the case studies, we can abstract some common features by ignoring irrelevant formulation details. It is easy to arrive at the conclusion that granular computing is a way of thinking and a philosophy of problem solving. At an abstract level, it captures and reflects our ability to solve a problem by focusing on different levels of detail, and to move easily between levels at various stages. The principles of granular computing are the same across, and applicable to, many domains.

4 A Partition Model

A partition model is developed by focusing on the basic issues of granular computing. The partition model has been studied extensively in rough set theory [39].

4.1 Granulation by Partition and Partition Lattice

A simple granulation of the universe can be defined based on an equivalence relation or a partition. Let U denote a finite and non-empty set called the universe. Suppose E ⊆ U × U denotes an equivalence relation on U, where × denotes the Cartesian product of sets. That is, E is reflexive, symmetric, and transitive.
The equivalence relation E divides the set U into a family of disjoint subsets called the partition of the universe induced by E and denoted by πE = U/E. The subsets in a partition are also called blocks. Conversely, given a partition π of the universe, one can uniquely define an equivalence relation Eπ:

x Eπ y ⇐⇒ x and y are in the same block of the partition π.    (1)

Due to the one-to-one relationship between equivalence relations and partitions, one may use them interchangeably. One can define an order relation on the set of all partitions of U, or equivalently on the set of all equivalence relations on U. A partition π1 is a refinement of another partition π2, or equivalently, π2 is a coarsening of π1, denoted by π1 ⪯ π2, if every block of π1 is contained in some block of π2. In terms of equivalence relations, we have Eπ1 ⊆ Eπ2. The refinement relation is a partial order, namely, it is reflexive, antisymmetric, and transitive. It defines a partition lattice Π(U). Given two partitions π1 and π2, their meet, π1 ∧ π2, is the largest partition that is a refinement of both π1 and π2; their join, π1 ∨ π2, is the smallest partition that is a coarsening of both π1 and π2. The blocks of the meet are all non-empty intersections of a block from π1 and a block from π2. The blocks of the join are the smallest subsets which are exactly a union of blocks from π1 and π2. In terms of equivalence relations, for two equivalence relations R1 and R2, their meet is defined by R1 ∩ R2, and their join is defined by (R1 ∪ R2)*, the transitive closure of the relation R1 ∪ R2. The lattice Π(U) contains all possible partition-based granulations of the universe. The refinement partial order on partitions provides a natural hierarchy of granulations. The partition model of granular computing is based on the partition lattice or on subsystems of the partition lattice.
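To make the lattice operations concrete, the following Python sketch represents a partition as a list of blocks and computes the meet as the family of non-empty pairwise block intersections, together with a refinement test. It is an illustration only; the data and function names are not taken from the chapter.

```python
from itertools import product

def is_refinement(pi1, pi2):
    """True if every block of pi1 is contained in some block of pi2 (pi1 refines pi2)."""
    return all(any(b1 <= b2 for b2 in pi2) for b1 in pi1)

def meet(pi1, pi2):
    """Largest partition refining both pi1 and pi2:
    all non-empty intersections of a block from pi1 with a block from pi2."""
    return [b1 & b2 for b1, b2 in product(pi1, pi2) if b1 & b2]

# A small universe U = {1,...,6} and two partitions of it.
pi1 = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
pi2 = [frozenset({1, 2, 3}), frozenset({4, 5, 6})]

m = meet(pi1, pi2)
print(sorted(map(sorted, m)))                        # [[1, 2], [3], [4], [5, 6]]
print(is_refinement(m, pi1), is_refinement(m, pi2))  # True True
```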

4.2 Partition Lattice in an Information Table

Information tables provide a simple and convenient way to represent a set of objects by a finite set of attributes [39, 70]. Formally, an information table is defined as the following tuple:

(U, At, {Va | a ∈ At}, {Ia | a ∈ At}),    (2)

where U is a finite set of objects called the universe, At is a finite set of attributes or features, Va is a set of values for each attribute a ∈ At, and Ia : U −→ Va is an information function for each attribute a ∈ At. A database is an example of an information table. Information tables give a specific and concrete interpretation of the equivalence relations used in granulation. With respect to an attribute a ∈ At, an object x ∈ U takes only one value from the domain Va of a. Let a(x) = Ia(x) denote the value of x on a. Extending this to a subset of attributes A ⊆ At, A(x) denotes the value of x on the attributes in A, which can be viewed as a vector with each a(x), a ∈ A, as one of its components. For an attribute a ∈ At, an equivalence relation Ea is given by: for x, y ∈ U,

x Ea y ⇐⇒ a(x) = a(y).    (3)


Two objects are considered to be indiscernible, in view of a single attribute a, if and only if they have exactly the same value on a. For a subset of attributes A ⊆ At, an equivalence relation EA is defined by:

x E_A y ⇐⇒ A(x) = A(y) ⇐⇒ (∀a ∈ A) a(x) = a(y), that is, E_A = ⋂_{a∈A} E_a.    (4)

With respect to all attributes in A, x and y are indiscernible if and only if they have the same value for every attribute in A. The empty set ∅ produces the coarsest relation, i.e., E∅ = U × U. If the entire attribute set is used, one obtains the finest relation EAt. Moreover, if no two objects have the same description, EAt becomes the identity relation. The algebra ({EA}A⊆At, ∩) is a lower semilattice with the zero element EAt [37]. The family of partitions Π(At(U)) = {πEA | A ⊆ At} has been studied in databases [27]. In fact, Π(At(U)) is a lattice in its own right. While the meet of Π(At(U)) is the same as the meet of Π(U), their joins are different [27]. The lattice Π(At(U)) can be used to develop a partition model of databases.

A useful result of the constructive definition of the equivalence relation is that one can associate a precise description with each equivalence class. This is done through the introduction of a decision logic language DL in an information table [39, 43, 65]. In the language DL, an atomic formula is given by a = v, where a ∈ At and v ∈ Va. If φ and ψ are formulas, then so are ¬φ, φ ∧ ψ, φ ∨ ψ, φ → ψ, and φ ≡ ψ. The semantics of the language DL can be defined in Tarski's style through the notions of a model and satisfiability. The model is an information table, which provides an interpretation for the symbols and formulas of DL. The satisfiability of a formula φ by an object x, written x |= φ, is given by the following conditions:

(1) x |= a = v iff a(x) = v,
(2) x |= ¬φ iff not x |= φ,
(3) x |= φ ∧ ψ iff x |= φ and x |= ψ,
(4) x |= φ ∨ ψ iff x |= φ or x |= ψ,
(5) x |= φ → ψ iff x |= ¬φ ∨ ψ,
(6) x |= φ ≡ ψ iff x |= φ → ψ and x |= ψ → φ.

If φ is a formula, the set m(φ) defined by:

m(φ) = {x ∈ U | x |= φ},    (5)

is called the meaning of the formula φ. An equivalence class of EA can be described by a formula of the form ⋀_{a∈A} a = v_a, where v_a ∈ V_a. Furthermore, [x]_{E_A} = m(⋀_{a∈A} a = a(x)), where a(x) is the value of x on attribute a.
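The following Python sketch illustrates this construction on a toy information table: it computes the equivalence classes of E_A for a set of attributes A and prints the conjunctive formula describing each class. The table, attribute names and helper functions are invented for the example.

```python
from collections import defaultdict

# A toy information table: objects described by attributes (illustrative data).
table = {
    "o1": {"color": "red",  "size": "small"},
    "o2": {"color": "red",  "size": "small"},
    "o3": {"color": "blue", "size": "small"},
    "o4": {"color": "blue", "size": "large"},
}

def partition(table, attrs):
    """Equivalence classes of E_A: objects with equal values on all attributes in A."""
    classes = defaultdict(set)
    for obj, row in table.items():
        classes[tuple(row[a] for a in attrs)].add(obj)
    return dict(classes)

def describe(attrs, key):
    """Decision-logic formula (conjunction of a = v atoms) describing a class."""
    return " AND ".join(f"{a} = {v}" for a, v in zip(attrs, key))

A = ("color", "size")
for key, block in partition(table, A).items():
    print(describe(A, key), "->", sorted(block))
# color = red AND size = small -> ['o1', 'o2']
# color = blue AND size = small -> ['o3']
# color = blue AND size = large -> ['o4']
```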

4.3 Mappings between Two Universes

Given an equivalence relation E on U, we obtain a coarse-grained universe U/E called the quotient set of U. The relation E can be conveniently represented by a mapping from U to 2^U, where 2^U is the power set of U. The mapping [·]_E : U −→ 2^U is given by:

[x]_E = {y ∈ U | x E y}.    (6)

The equivalence class [x]_E containing x plays dual roles. It is a subset of U and an element of U/E. That is, in U, [x]_E is a subset of objects, and in U/E, [x]_E is considered to be a whole instead of many individuals [61]. In cluster analysis, one typically associates a name with a cluster such that elements of the cluster are instances of the named category or concept [22]. Lin [29], following Dubois and Prade [10], explicitly used [x]_E for representing a subset of U and Name([x]_E) for representing an element of U/E. In the subsequent discussion, we use this convention. With a partition or an equivalence relation, we have two views of the same universe, a coarse-grained view U/E and a detailed view U. Their relationship can be defined by a pair of mappings, r : U/E −→ U and c : U −→ U/E. More specifically, we have:

r(Name([x]_E)) = [x]_E,   c(x) = Name([x]_E).    (7)

A concept, represented as a subset of a universe, is described differently under different views. As we move from one view to the other, we change our perceptions and representations of the same concept. In order to achieve this, we define zooming-in and zooming-out operators based on the pair of mappings [66].

4.4 Zooming-in Operator for Refinement

Formally, zooming-in can be defined by an operator ω : 2^{U/E} −→ 2^U. Shafer referred to the zooming-in operation as refining [46]. For a singleton subset {X_i} ∈ 2^{U/E}, we define [10]:

ω({X_i}) = [x]_E,   where X_i = Name([x]_E).    (8)

For an arbitrary subset X ⊆ U/E, we have:

ω(X) = ⋃_{X_i ∈ X} ω({X_i}).    (9)

By zooming-in on a subset X ⊆ U/E, we obtain a unique subset ω(X) ⊆ U . The set ω(X) ⊆ U is called the reﬁnement of X.


The zooming-in operation has the following properties [46]:

(zi1) ω(∅) = ∅,
(zi2) ω(U/E) = U,
(zi3) ω(X^c) = (ω(X))^c,
(zi4) ω(X ∩ Y) = ω(X) ∩ ω(Y),
(zi5) ω(X ∪ Y) = ω(X) ∪ ω(Y),
(zi6) X ⊆ Y ⇐⇒ ω(X) ⊆ ω(Y),

where ^c denotes the set complement operator, the set-theoretic operators on the left-hand side apply to elements of 2^{U/E}, and the same operators on the right-hand side apply to elements of 2^U. From these properties, it can be seen that any relationship between subsets observed under the coarse-grained view would hold under the detailed view, and vice versa. For example, in addition to (zi6), we have X ∩ Y = ∅ if and only if ω(X) ∩ ω(Y) = ∅, and X ∪ Y = U/E if and only if ω(X) ∪ ω(Y) = U. Therefore, conclusions drawn based on the coarse-grained elements in U/E can be carried over to the universe U.

4.5 Zooming-out Operators for Approximation

The change of views from U to U/E is called a zooming-out operation. By zooming-out, a subset of the universe is considered as a whole rather than as many individuals. This leads to a loss of information. Zooming-out on a subset A ⊆ U may induce an inexact representation in the coarse-grained universe U/E. The theory of rough sets focuses on the zooming-out operation. For a subset A ⊆ U, we have a pair of lower and upper approximations in the coarse-grained universe [7, 10, 59]:

\underline{apr}(A) = {Name([x]_E) | x ∈ U, [x]_E ⊆ A},
\overline{apr}(A) = {Name([x]_E) | x ∈ U, [x]_E ∩ A ≠ ∅}.    (10)

The expression of lower and upper approximations as subsets of U/E, rather than subsets of U, has only been considered by a few researchers in the rough set community [7, 10, 30, 59, 69]. On the other hand, such notions have been considered in other contexts. Shafer [46] introduced those notions in the study of belief functions and called them the inner and outer reductions of A ⊆ U in U/E. The connections between the notions introduced by Pawlak in rough set theory and those introduced by Shafer in belief function theory have been pointed out by Dubois and Prade [10]. The expression of approximations in terms of elements of U/E clearly shows the representation of A in the coarse-grained universe U/E. By zooming-out, we only obtain an approximate representation. The lower and upper approximations satisfy the following properties [46, 69]:

(zo1) \underline{apr}(∅) = \overline{apr}(∅) = ∅,
(zo2) \underline{apr}(U) = \overline{apr}(U) = U/E,
(zo3) \underline{apr}(A) = (\overline{apr}(A^c))^c,  \overline{apr}(A) = (\underline{apr}(A^c))^c,
(zo4) \underline{apr}(A ∩ B) = \underline{apr}(A) ∩ \underline{apr}(B),  \overline{apr}(A ∩ B) ⊆ \overline{apr}(A) ∩ \overline{apr}(B),
(zo5) \underline{apr}(A) ∪ \underline{apr}(B) ⊆ \underline{apr}(A ∪ B),  \overline{apr}(A ∪ B) = \overline{apr}(A) ∪ \overline{apr}(B),
(zo6) A ⊆ B =⇒ [\underline{apr}(A) ⊆ \underline{apr}(B), \overline{apr}(A) ⊆ \overline{apr}(B)],
(zo7) \underline{apr}(A) ⊆ \overline{apr}(A).

According to properties (zo4)-(zo6), relationships between subsets of U may not be carried over to U/E through the zooming-out operation. It may happen that A ∩ B ≠ ∅, but \underline{apr}(A ∩ B) = ∅, or A ∪ B ≠ U, but \overline{apr}(A ∪ B) = U/E. Similarly, we may have A ≠ B, but \underline{apr}(A) = \underline{apr}(B) and \overline{apr}(A) = \overline{apr}(B). Nevertheless, we can draw the following inferences:

(i1) \underline{apr}(A) ∩ \underline{apr}(B) ≠ ∅ =⇒ A ∩ B ≠ ∅,
(i2) \overline{apr}(A) ∩ \overline{apr}(B) = ∅ =⇒ A ∩ B = ∅,
(i3) \underline{apr}(A) ∪ \underline{apr}(B) = U/E =⇒ A ∪ B = U,
(i4) \overline{apr}(A) ∪ \overline{apr}(B) ≠ U/E =⇒ A ∪ B ≠ U.

If \underline{apr}(A) ∩ \underline{apr}(B) ≠ ∅, by property (zo4) we know that \underline{apr}(A ∩ B) ≠ ∅. We say that A and B have a non-empty overlap, and hence are related, in U/E. By (i1), A and B must have a non-empty overlap, and hence are related, in U. Similar explanations can be associated with the other inference rules.

The approximation of a set can easily be extended to the approximation of a partition, also called a classification [39]. Let π = {X_1, ..., X_n} be a partition of the universe U. Its approximations are a pair of families of sets, the family of lower approximations \underline{apr}(π) = {\underline{apr}(X_1), ..., \underline{apr}(X_n)} and the family of upper approximations \overline{apr}(π) = {\overline{apr}(X_1), ..., \overline{apr}(X_n)}.
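As an illustration of the zooming-out and zooming-in operators on a toy universe, the following Python sketch computes lower and upper approximations (block indices stand in for the names Name([x]_E)) and checks property (zio1) stated in the next subsection. All names and data are invented for the example.

```python
def zoom_in(partition, names):
    """omega: map a set of block names (indices here) to the union of those blocks."""
    return set().union(*(partition[i] for i in names)) if names else set()

def lower(partition, A):
    """Lower approximation: names of blocks entirely contained in A."""
    return {i for i, b in enumerate(partition) if b <= A}

def upper(partition, A):
    """Upper approximation: names of blocks that intersect A."""
    return {i for i, b in enumerate(partition) if b & A}

# Universe U = {1,...,6} granulated into three blocks; an arbitrary subset A.
pi = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
A = {1, 2, 3}

lo, up = lower(pi, A), upper(pi, A)
print(sorted(lo), sorted(up))                            # [0] [0, 1]
print(sorted(zoom_in(pi, lo)), sorted(zoom_in(pi, up)))  # [1, 2] [1, 2, 3, 4]
# Property (zio1): omega(lower(A)) <= A <= omega(upper(A))
assert zoom_in(pi, lo) <= A <= zoom_in(pi, up)
```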

4.6 Classical Rough Set Approximations by a Combination of Zooming-out and Zooming-in

Traditionally, lower and upper approximations of a set are also subsets of the same universe. One can easily obtain the classical definition by performing a combination of zooming-out and zooming-in operators as follows [66]:

ω(\underline{apr}(A)) = ⋃_{X_i ∈ \underline{apr}(A)} ω({X_i}) = ⋃ {[x]_E | x ∈ U, [x]_E ⊆ A},
ω(\overline{apr}(A)) = ⋃_{X_i ∈ \overline{apr}(A)} ω({X_i}) = ⋃ {[x]_E | x ∈ U, [x]_E ∩ A ≠ ∅}.    (11)


For a subset X ⊆ U/E we can zoom-in and obtain a subset ω(X) ⊆ U, and then zoom-out to obtain a pair of subsets \underline{apr}(ω(X)) and \overline{apr}(ω(X)). The compositions of zooming-in and zooming-out operations have the following properties [46]: for X ⊆ U/E and A ⊆ U,

(zio1) ω(\underline{apr}(A)) ⊆ A ⊆ ω(\overline{apr}(A)),
(zio2) \underline{apr}(ω(X)) = \overline{apr}(ω(X)) = X.

The composition of zooming-out and zooming-in cannot recover the original set A ⊆ U. The composition of zooming-in and zooming-out produces the original set X ⊆ U/E. A connection between the zooming-in and zooming-out operations can be established. For a pair of subsets X ⊆ U/E and A ⊆ U, we have [46]:

(1) ω(X) ⊆ A ⇐⇒ X ⊆ \underline{apr}(A),
(2) A ⊆ ω(X) ⇐⇒ \overline{apr}(A) ⊆ X.

Property (1) can be understood as follows. Any subset X ⊆ U/E whose refinement is a subset of A is a subset of the lower approximation of A. Only a subset of the lower approximation of A has a refinement that is a subset of A. It follows that \underline{apr}(A) is the largest subset of U/E whose refinement is contained in A, and \overline{apr}(A) is the smallest subset of U/E whose refinement contains A.

4.7 Consistent Computations in the Two Universes

Computation in the original universe is normally based on elements of U. When zooming-out to the coarse-grained universe U/E, we need to find consistent computational methods. The zooming-in operator can be used for this purpose. Suppose f : U −→ ℝ is a real-valued function on U. One can lift the function f to U/E by performing set-based computations [67]. The lifted function f^+ is a set-valued function that maps an element of U/E to a subset of real numbers. More specifically, for an element X_i ∈ U/E, the value of the function is given by:

f^+(X_i) = {f(x) | x ∈ ω({X_i})}.    (12)

The function f^+ can be changed into a single-valued function f_0^+ in a number of ways. For example, Zhang and Zhang [75] suggested the following methods:

f_0^+(X_i) = min f^+(X_i) = min{f(x) | x ∈ ω({X_i})},
f_0^+(X_i) = max f^+(X_i) = max{f(x) | x ∈ ω({X_i})},
f_0^+(X_i) = average f^+(X_i) = average{f(x) | x ∈ ω({X_i})}.    (13)

The minimum, maximum, and average definitions may be regarded as the most pessimistic, the most optimistic, and the balanced views in moving functions from U to U/E. More methods can be found in the book by Zhang and Zhang [75]. For a binary operation ◦ on U, a binary operation ◦^+ on U/E is defined by [6, 67]:

X_i ◦^+ X_j = {x_i ◦ x_j | x_i ∈ ω({X_i}), x_j ∈ ω({X_j})}.    (14)


In general, one may lift any operation p on U to an operation p^+ on U/E, called the power operation of p. Suppose p : U^n −→ U (n ≥ 1) is an n-ary operation on U. Its power operation p^+ : (U/E)^n −→ 2^U is defined by [6]:

p^+(X_0, ..., X_{n−1}) = {p(x_0, ..., x_{n−1}) | x_i ∈ ω({X_i}) for i = 0, ..., n − 1},    (15)

for any X_0, ..., X_{n−1} ∈ U/E. This provides a universal-algebraic construction approach. For any algebra (U, p_1, ..., p_k) with base set U and operations p_1, ..., p_k, its quotient algebra is given by (U/E, p_1^+, ..., p_k^+).

The power operation p^+ may carry some properties of p. For example, for a binary operation p : U^2 −→ U, if p is commutative and associative, p^+ is commutative and associative, respectively. If e is an identity for some operation p, the set {e} is an identity for p^+. Many properties of p are not carried over by p^+. For instance, if a binary operation p is idempotent, i.e., p(x, x) = x, p^+ may not be idempotent. If a binary operation g is distributive over p, g^+ may not be distributive over p^+.

In some situations, we need to carry information from the quotient set U/E to U. This can be done through the zooming-out operators. A simple example is used to illustrate the basic idea. Suppose µ : 2^{U/E} −→ [0, 1] is a set function on U/E. If µ satisfies the conditions:

(i) µ(∅) = 0,
(ii) µ(U/E) = 1,
(iii) X ⊆ Y =⇒ µ(X) ≤ µ(Y),

µ is called a fuzzy measure [23]. Examples of fuzzy measures are probability functions, possibility and necessity functions, and belief and plausibility functions. Information about subsets of U can be obtained from µ on U/E and the zooming-out operation. For a subset A ⊆ U, we can define a pair of inner and outer fuzzy measures [68]:

\underline{µ}(A) = µ(\underline{apr}(A)),   \overline{µ}(A) = µ(\overline{apr}(A)).    (16)

They are fuzzy measures. If µ is a probability function, \underline{µ} and \overline{µ} are a pair of belief and plausibility functions [15, 46, 49, 68]. If µ is a belief function, \underline{µ} is a belief function, and if µ is a plausibility function, \overline{µ} is a plausibility function [68].
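A small Python sketch of the lifting constructions in this subsection is given below: the set-valued lifting of a point function, the pessimistic/optimistic/balanced single-valued variants of Eq. (13), and the power operation of a binary operation. The granulation, the function f and the operation are all invented for the example.

```python
from statistics import mean

# Blocks of U/E, indexed by "names" 0..k-1 (illustrative granulation of U = {1,...,6}).
pi = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]

def lift(f, block):
    """Set-valued lifting f+ of a point function f to a block of U/E (Eq. 12)."""
    return {f(x) for x in block}

def power_op(op, b1, b2):
    """Power operation of a binary operation: apply op to all pairs (Eqs. 14-15)."""
    return {op(x, y) for x in b1 for y in b2}

f = lambda x: x * x                      # an arbitrary real-valued function on U

for i, block in enumerate(pi):
    values = lift(f, block)
    # Single-valued variants: pessimistic (min), optimistic (max), balanced (average).
    print(i, sorted(values), min(values), max(values), mean(values))

print(sorted(power_op(lambda x, y: x + y, pi[0], pi[1])))   # [4, 5, 6]
```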

5 Conclusion

Granular computing, as a way of thinking, has been explored in many fields. It captures and reflects our ability to perceive the world at different granularities and to change granularities in problem solving. In this chapter, the same approach is used to study granular computing itself at two levels.
In the first part of the chapter, we consider the fundamental issues of granular computing in general terms. The objective is to present a domain-independent way of thinking without the details of any specific formulation. The second part of the chapter concretizes the high-level investigation by considering a partition model of granular computing. To a large extent, the model is based on the theory of rough sets. However, results from other theories, such as quotient space theory, belief functions, databases, and power algebras, are incorporated. Different research fields may develop their theories and methodologies in isolation. However, one may find that these theories and methodologies share the same or similar underlying principles and differ only in their formulation. It is evident that granular computing may be a basic principle that guides many problem-solving methods. The results of rough set theory have drawn our attention to granular computing. On the other hand, the study of rough set theory in the wider context of granular computing may result in an in-depth understanding of rough set theory.

References

1. Aldenderfer, M.S., Blashfield, R.K.: Cluster Analysis. Sage Publications, The International Professional Publishers, London (1984)
2. Anderberg, M.R.: Cluster Analysis for Applications. Academic Press, New York (1973)
3. Bettini, C., Montanari, A. (Eds.): Spatial and Temporal Granularity: Papers from the AAAI Workshop. Technical Report WS-00-08. The AAAI Press, Menlo Park, CA (2000)
4. Bettini, C., Montanari, A.: Research issues and trends in spatial and temporal granularities. Annals of Mathematics and Artificial Intelligence 36 (2002) 1-4
5. Bratko, I.: PROLOG: Programming for Artificial Intelligence, Second edition. Addison-Wesley, New York (1990)
6. Brink, C.: Power structures. Algebra Universalis 30 (1993) 177-216
7. Bryniarski, E.: A calculus of rough sets of the first order. Bulletin of the Polish Academy of Sciences, Mathematics 37 (1989) 71-77
8. de Loupy, C., Bellot, P., El-Bèze, M., Marteau, P.F.: Query expansion and classification of retrieved documents. Proceedings of the Seventh Text REtrieval Conference (TREC-7) (1998) 382-389
9. Demri, S., Orlowska, E.: Logical analysis of indiscernibility. In: Incomplete Information: Rough Set Analysis, Orlowska, E. (Ed.). Physica-Verlag, Heidelberg (1998) 347-380
10. Dubois, D., Prade, H.: Fuzzy rough sets and rough fuzzy sets. International Journal of General Systems 17 (1990) 191-209
11. Graziano, A.M., Raulin, M.L.: Research Methods: A Process of Inquiry, 4th edition. Allyn and Bacon, Boston (2000)
12. Euzenat, J.: Granularity in relational formalisms - with application to time and space representation. Computational Intelligence 17 (2001) 703-737
13. Friske, M.: Teaching proofs: a lesson from software engineering. American Mathematical Monthly 92 (1995) 142-144
14. Giunchiglia, F., Walsh, T.: A theory of abstraction. Artificial Intelligence 56 (1992) 323-390
15. Grzymala-Busse, J.W.: Rough-set and Dempster-Shafer approaches to knowledge acquisition under uncertainty – a comparison. Manuscript. Department of Computer Science, University of Kansas (1987)
16. Han, J., Cai, Y., Cercone, N.: Data-driven discovery of quantitative rules in data bases. IEEE Transactions on Knowledge and Data Engineering 5 (1993) 29-40
17. Hearst, M.A., Pedersen, J.O.: Reexamining the cluster hypothesis: Scatter/Gather on retrieval results. Proceedings of SIGIR'96 (1996) 76-84
18. Hobbs, J.R.: Granularity. Proceedings of the Ninth International Joint Conference on Artificial Intelligence (1985) 432-435
19. Hornsby, K.: Temporal zooming. Transactions in GIS 5 (2001) 255-272
20. Imielinski, T.: Domain abstraction and limited reasoning. Proceedings of the 10th International Joint Conference on Artificial Intelligence (1987) 997-1003
21. Inuiguchi, M., Hirano, S., Tsumoto, S. (Eds.): Rough Set Theory and Granular Computing. Springer, Berlin (2003)
22. Jardine, N., Sibson, R.: Mathematical Taxonomy. Wiley, New York (1971)
23. Klir, G.J., Folger, T.A.: Fuzzy Sets, Uncertainty, and Information. Prentice Hall, Englewood Cliffs (1988)
24. Knoblock, C.A.: Generating Abstraction Hierarchies: an Automated Approach to Reducing Search in Planning. Kluwer Academic Publishers, Boston (1993)
25. Lamport, L.: How to write a proof. American Mathematical Monthly 102 (1995) 600-608
26. Ledgard, H.F., Gueras, J.F., Nagin, P.A.: PASCAL with Style: Programming Proverbs. Hayden Book Company, Inc., Rochelle Park, New Jersey (1979)
27. Lee, T.T.: An information-theoretic analysis of relational databases – part I: data dependencies and information metric. IEEE Transactions on Software Engineering SE-13 (1987) 1049-1061
28. Leron, U.: Structuring mathematical proofs. American Mathematical Monthly 90 (1983) 174-185
29. Lin, T.Y.: Topological and fuzzy rough sets. In: Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory, Slowinski, R. (Ed.). Kluwer Academic Publishers, Boston (1992) 287-304
30. Lin, T.Y.: Granular computing on binary relations I: data mining and neighborhood systems, II: rough set representations and belief functions. In: Rough Sets in Knowledge Discovery 1, Polkowski, L., Skowron, A. (Eds.). Physica-Verlag, Heidelberg (1998) 107-140
31. Lin, T.Y.: Generating concept hierarchies/networks: mining additional semantics in relational data. Advances in Knowledge Discovery and Data Mining, Proceedings of the 5th Pacific-Asia Conference, Lecture Notes in Artificial Intelligence 2035 (2001) 174-185
32. Lin, T.Y.: Granular computing. Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Proceedings of the 9th International Conference, Lecture Notes in Artificial Intelligence 2639 (2003) 16-24
33. Lin, T.Y., Yao, Y.Y., Zadeh, L.A. (Eds.): Rough Sets, Granular Computing and Data Mining. Physica-Verlag, Heidelberg (2002)
34. Lin, T.Y., Zhong, N., Dong, J., Ohsuga, S.: Frameworks for mining binary relations in data. Rough Sets and Current Trends in Computing, Proceedings of the 1st International Conference, Lecture Notes in Artificial Intelligence 1424 (1998) 387-393
35. Mani, I.: A theory of granularity and its application to problems of polysemy and underspecification of meaning. Principles of Knowledge Representation and Reasoning, Proceedings of the Sixth International Conference (1998) 245-255
36. McCalla, G., Greer, J., Barrie, J., Pospisil, P.: Granularity hierarchies. Computers and Mathematics with Applications 23 (1992) 363-375
37. Orlowska, E.: Logic of indiscernibility relations. Bulletin of the Polish Academy of Sciences, Mathematics 33 (1985) 475-485
38. Pawlak, Z.: Rough sets. International Journal of Computer and Information Sciences 11 (1982) 341-356
39. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Boston (1991)
40. Pawlak, Z.: Granularity of knowledge, indiscernibility and rough sets. Proceedings of 1998 IEEE International Conference on Fuzzy Systems (1998) 106-110
41. Pedrycz, W.: Granular Computing: An Emerging Paradigm. Springer-Verlag, Berlin (2001)
42. Peikoff, L.: Objectivism: the Philosophy of Ayn Rand. Dutton, New York (1991)
43. Polkowski, L., Skowron, A.: Towards adaptive calculus of granules. Proceedings of 1998 IEEE International Conference on Fuzzy Systems (1998) 111-116
44. Saitta, L., Zucker, J.-D.: Semantic abstraction for concept representation and learning. Proceedings of the Symposium on Abstraction, Reformulation and Approximation (1998) 103-120. http://www.cs.vassar.edu/~ellman/sara98/papers/, retrieved on December 14, 2003
45. Salton, G., McGill, M.: Introduction to Modern Information Retrieval. McGraw Hill, New York (1983)
46. Shafer, G.: A Mathematical Theory of Evidence. Princeton University Press, Princeton (1976)
47. Simpson, S.G.: What is foundations of mathematics? (1996). http://www.math.psu.edu/simpson/hierarchy.html, retrieved November 21, 2003
48. Skowron, A.: Toward intelligent systems: calculi of information granules. Bulletin of International Rough Set Society 5 (2001) 9-30
49. Skowron, A., Grzymala-Busse, J.: From rough set theory to evidence theory. In: Advances in the Dempster-Shafer Theory of Evidence, Yager, R.R., Fedrizzi, M., Kacprzyk, J. (Eds.). Wiley, New York (1994) 193-236
50. Skowron, A., Stepaniuk, J.: Information granules and approximation spaces. Proceedings of the 7th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (1998) 354-361
51. Skowron, A., Stepaniuk, J.: Information granules: towards foundations of granular computing. International Journal of Intelligent Systems 16 (2001) 57-85
52. Smith, E.E.: Concepts and induction. In: Foundations of Cognitive Science, Posner, M.I. (Ed.). The MIT Press, Cambridge (1989) 501-526
53. Sowa, J.F.: Conceptual Structures, Information Processing in Mind and Machine. Addison-Wesley, Reading (1984)
54. Stell, J.G., Worboys, M.F.: Stratified map spaces: a formal basis for multiresolution spatial databases. Proceedings of the 8th International Symposium on Spatial Data Handling (1998) 180-189
55. van Mechelen, I., Hampton, J., Michalski, R.S., Theuns, P. (Eds.): Categories and Concepts: Theoretical Views and Inductive Data Analysis. Academic Press, New York (1993)
56. van Rijsbergen, C.J.: Information Retrieval. Butterworths, London (1979)
57. Wille, R.: Concept lattices and conceptual knowledge systems. Computers and Mathematics with Applications 23 (1992) 493-515
58. Yager, R.R., Filev, D.: Operations for granular computing: mixing words with numbers. Proceedings of 1998 IEEE International Conference on Fuzzy Systems (1998) 123-128
59. Yao, Y.Y.: Two views of the theory of rough sets in finite universes. International Journal of Approximate Reasoning 15 (1996) 291-317
60. Yao, Y.Y.: Granular computing: basic issues and possible solutions. Proceedings of the 5th Joint Conference on Information Sciences (2000) 186-189
61. Yao, Y.Y.: Information granulation and rough set approximation. International Journal of Intelligent Systems 16 (2001) 87-104
62. Yao, Y.Y.: Modeling data mining with granular computing. Proceedings of the 25th Annual International Computer Software and Applications Conference (COMPSAC 2001) (2001) 638-643
63. Yao, Y.Y.: A step towards the foundations of data mining. In: Data Mining and Knowledge Discovery: Theory, Tools, and Technology V, Dasarathy, B.V. (Ed.). The International Society for Optical Engineering (2003) 254-263
64. Yao, Y.Y.: Probabilistic approaches to rough sets. Expert Systems 20 (2003) 287-297
65. Yao, Y.Y., Liau, C.-J.: A generalized decision logic language for granular computing. Proceedings of FUZZ-IEEE'02 in the 2002 IEEE World Congress on Computational Intelligence (2002) 1092-1097
66. Yao, Y.Y., Liau, C.-J., Zhong, N.: Granular computing based on rough sets, quotient space theory, and belief functions. Proceedings of ISMIS'03 (2003) 152-159
67. Yao, Y.Y., Noroozi, N.: A unified framework for set-based computations. Proceedings of the 3rd International Workshop on Rough Sets and Soft Computing. The Society for Computer Simulation (1995) 252-255
68. Yao, Y.Y., Wong, S.K.M.: Representation, propagation and combination of uncertain information. International Journal of General Systems 23 (1994) 59-83
69. Yao, Y.Y., Wong, S.K.M., Lin, T.Y.: A review of rough set models. In: Rough Sets and Data Mining: Analysis for Imprecise Data, Lin, T.Y., Cercone, N. (Eds.). Kluwer Academic Publishers, Boston (1997) 47-75
70. Yao, Y.Y., Zhong, N.: Granular computing using information tables. In: Data Mining, Rough Sets and Granular Computing, Lin, T.Y., Yao, Y.Y., Zadeh, L.A. (Eds.). Physica-Verlag, Heidelberg (2002) 102-124
71. Zadeh, L.A.: Fuzzy sets and information granularity. In: Advances in Fuzzy Set Theory and Applications, Gupta, N., Ragade, R., Yager, R. (Eds.). North-Holland, Amsterdam (1979) 3-18
72. Zadeh, L.A.: Fuzzy logic = computing with words. IEEE Transactions on Fuzzy Systems 4 (1996) 103-111
73. Zadeh, L.A.: Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems 19 (1997) 111-127
74. Zadeh, L.A.: Some reflections on soft computing, granular computing and their roles in the conception, design and utilization of information/intelligent systems. Soft Computing 2 (1998) 23-25
75. Zhang, B., Zhang, L.: Theory and Applications of Problem Solving. North-Holland, Amsterdam (1992)
76. Zhang, L., Zhang, B.: The quotient space theory of problem solving. Proceedings of the International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing, Lecture Notes in Artificial Intelligence 2639 (2003) 11-15
77. Zhong, N., Skowron, A., Ohsuga, S. (Eds.): New Directions in Rough Sets, Data Mining, and Granular-Soft Computing. Springer-Verlag, Berlin (1999)

Musical Phrase Representation and Recognition by Means of Neural Networks and Rough Sets

Andrzej Czyzewski, Marek Szczerba, and Bozena Kostek

Multimedia Systems Department, Gdansk University of Technology
Narutowicza 11/12, 80-952 Gdansk, Poland
{andcz,marek,bozenka}@sound.eti.pg.gda.pl
http://sound.eti.pg.gda.pl

Abstract. This paper discusses various musical phrase representations that can be used to classify musical phrases with considerable accuracy. Musical phrase analysis plays an important role in the music information retrieval domain. In the paper, various representations of a musical phrase are described and analyzed. Experiments were also designed to facilitate pitch prediction within a musical phrase by means of entropy coding of music, using the concept of predictive data coding introduced by Shannon. Encoded music representations, stored in the database, are then used for automatic recognition of musical phrases by means of Neural Networks (NN) and rough sets (RS). A discussion of the obtained results is carried out and conclusions are included.

1 Introduction

The ability to analyze musical phrases in the context of automatic retrieval is still not a fully achieved objective [11]. It should be stated, however, that such an objective depends both on the quality of the musical phrase representation and on the inference engine utilized. A thorough analysis of musical phrase features would make it possible to search for a particular melody in musical databases, and might also reveal features that, for example, characterize music of the same epoch. Recognizing similarities between music of a particular epoch or a particular genre would enable searching the Internet according to a music taxonomy. For the purpose of this study a collection of MIDI-encoded musical phrases was gathered, containing Bach's fugues. A musical phrase can be stored in various formats, such as a mono- or polyphonic signal, MIDI code, or a musical score. Any of these formats may be accompanied by textual information. In the presented study, Bach's fugues from the "Well-Tempered Clavier" were played on a MIDI keyboard and then transferred to the computer hard disk through the MIDI card and the Cubase VST 3.5 program. The automatic recognition of musical phrase patterns required some preliminary stages, such as MIDI data conversion, parametrization of musical phrases, and discretization of parameter values in the case of rule-based decision systems [4], [23]. These tasks resulted in the creation of a musical phrase database containing feature vectors.


The experiments performed consisted in preparing various representations on the basis of the gathered musical phrases and then analyzing them in the context of automatic music information retrieval. Both Neural Networks (NNs) and the Rough Set (RS) method were used to this end. NNs were also used for feature quality evaluation; this issue will be explained later on. The decision systems were used both as a classifier and as a comparator.

2 Musical Phrase Description

In the experiments it was assumed that the musical phrases considered are single-voice only. This means that at a given moment t only one musical event is occurring in the phrase. A musical event is defined as a single sound of defined pitch, amplitude and duration [26]. A musical pause – absence of sound – is a musical event as well. For practical reasons, a musical pause was assumed to be a musical event of pitch equal to the pitch of the preceding sound, but of amplitude equal to zero. A single-voice musical phrase fr can be expressed as a sequence of musical events:

fr = {e_1, e_2, ..., e_n}    (1)

Musical event e_i can be described as a pair of values denoting sound pitch h_i (in the case of a pause, pitch of the previous sound), and sound duration t_i:

e_i = {h_i, t_i}    (2)

One can therefore express a musical phrase by a sequence of pitches being a function of time fr(t). Sample illustration of the function fr(t) is presented in Fig. 1. Sound pitch is defined according to the MIDI standard, i.e. as a difference from the C0 sound measured in semitones [2].

Fig. 1. Sequence of pitches as a function of time. Sound pitch is expressed according to the MIDI standard.
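A minimal Python sketch of this event-based representation (Eqs. (1)-(2)) is given below; the class names, the sampling step and the example phrase are invented for illustration, and a pause is marked by zero amplitude while keeping the preceding pitch, as described above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    pitch: int           # MIDI note number (semitones above C0); for a pause, previous pitch
    duration: float      # duration in beats
    amplitude: int = 64  # 0 encodes a pause

Phrase = List[Event]

def to_pitch_function(phrase: Phrase, step: float = 0.25):
    """Sample fr(t): return the pitch sequence on a regular time grid."""
    samples = []
    for e in phrase:
        samples += [e.pitch] * round(e.duration / step)
    return samples

# Opening of a hypothetical single-voice phrase.
phrase = [Event(60, 0.5), Event(62, 0.5), Event(64, 1.0), Event(64, 0.5, amplitude=0)]
print(to_pitch_function(phrase))   # [60, 60, 62, 62, 64, 64, 64, 64, 64, 64]
```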


One of the basic tools of composers and performers is transforming musical phrases according to rules specific to music perception, and to aesthetic and cultural conventions and constraints [1]. Generally, listeners perceive a modified musical phrase as identical to the unmodified original phrase. Modifications of musical phrases involve sound pitch shifting (transposition), time changes (e.g. augmentation), changes of ornament and/or transposition, shifting pitches of individual sounds, etc. [27]. A formal definition of such modifications may be presented on the example of a transposed musical phrase, expressed as follows:

fr_mod(t) = fr_ref(t) + c    (3)

where fr_ref(t) denotes the unmodified, original musical phrase, fr_mod(t) the modified musical phrase, and c is a component expressing, in semitones, the shift of the individual sounds of the phrase (for |c| = 12n there is an octave shift). A musical phrase with changed tempo can be expressed as follows:

fr_mod(t) = fr_ref(kt)    (4)

where k is the tempo change factor. The phrase tempo is slowed down for values of the factor k < 1. A tempo increase is obtained for values of the factor k > 1. A transposed musical phrase with changed tempo can be expressed as follows:

fr_mod(t) = fr_ref(kt) + c    (5)

Tempo variations with respect to the score can result mostly from inexactness in performance, which is often related to the performer's expressiveness [7], [8], [22]. Tempo changes can be expressed as a function ∆k(t). A musical phrase with varying tempo can be expressed as follows:

fr_mod(t) = fr_ref[t · ∆k(t)]    (6)

Modifications of the melodic content of a musical phrase are often used. One can discern such modifications as: ornament, transposition, inversion, retrograde, scale change (major – minor), change of pitch of individual sounds (e.g. harmonic adjustment), etc. In general, they can be described by a melodic modification function ψ(t). Therefore, a musical phrase with melodic content modifications can be expressed as follows:

fr_mod(t) = fr_ref(t) + ψ(t)    (7)

In consequence, a musical phrase modified by transposition, tempo change, tempo fluctuation and melodic content modification can be expressed as follows:

fr_mod(t) = fr_ref[kt + t · ∆k(t)] + ψ[kt + t · ∆k(t)] + c    (8)

The above formalism allows for defining the research problem of automatic classification of musical phrases. Let fr_mod be a modified musical phrase being classified and let FR be a set of unmodified reference phrases:

FR = {fr_1^ref, fr_2^ref, ..., fr_N^ref}    (9)


The task of recognizing a musical phrase fr_mod can therefore be described as finding in the set FR such a phrase fr_n^ref for which the musical phrase modification formula is fulfilled. If the applied modifications are limited to transposition and a uniform tempo change, the modification can be described using two constants: the transposition constant c and the tempo change constant k. In this case the task of classifying a musical phrase is limited to determining such values of the constants c and k that the formula is fulfilled. If the function ∆k(t) ≠ 0, then the classification algorithm should minimize the influence of the function ∆k(t) on the expression. Small values of the function ∆k(t) indicate slight changes resulting from articulation inexactness and moderate performer's expressiveness [6]. Such changes can easily be corrected by using time quantization. Larger values of the function ∆k(t) indicate major temporal fluctuations resulting chiefly from the performer's expressiveness. Such changes can be corrected using advanced methods of time quantization [8]. The function ψ(t) describes a wide range of musical phrase modifications characteristic of the composer, the performer's style, and the performer's technique. Values of the function ψ(t), which describe the character of such modifications qualitatively, are difficult or impossible to determine in a hard-defined manner. This last problem is the main issue associated with the task of automatic classification of musical phrases.
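For the restricted case of transposition plus uniform tempo change (Eqs. (3)-(5)), the classification task reduces to recovering the constants c and k. The following Python sketch illustrates that reduction with an exact check on (pitch, duration) pairs; it is not the NN/RS approach used later in the paper, and all names and data are invented for the example.

```python
def transpose(phrase, c):
    """Shift every pitch by c semitones (Eq. 3); phrase is a list of (pitch, duration)."""
    return [(p + c, d) for p, d in phrase]

def change_tempo(phrase, k):
    """Uniform tempo change by factor k (Eq. 4): durations scale by 1/k."""
    return [(p, d / k) for p, d in phrase]

def matches(mod, ref, tol=1e-6):
    """Check whether mod equals ref transposed by some c and tempo-scaled by some k (Eq. 5)."""
    if len(mod) != len(ref):
        return None
    c = mod[0][0] - ref[0][0]                 # candidate transposition
    k = ref[0][1] / mod[0][1]                 # candidate tempo factor
    ok = all(abs(pm - pr - c) < tol and abs(dm - dr / k) < tol
             for (pm, dm), (pr, dr) in zip(mod, ref))
    return (c, k) if ok else None

ref = [(60, 1.0), (62, 0.5), (64, 0.5), (65, 1.0)]
mod = change_tempo(transpose(ref, 5), 2.0)    # transposed up a fourth, twice as fast
print(matches(mod, ref))                      # (5, 2.0)
```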

3 Parametrization of Musical Phrases

A fundamental quality of decision systems is the ability to classify data that is not precisely defined or cannot be modeled mathematically. This quality allows for using intelligent decision algorithms for automatically classifying musical phrases in conditions where the character of the ψ(t) and ∆k(t) functions is rather qualitative. Parametrization can be considered a part of feature selection, the latter process meaning finding a subset of features, from the original set of pattern features, that is optimal according to a defined criterion [25]. The data to be classified can be represented by a vector P of the form:

P = [p_1, p_2, ..., p_N]    (10)

The constant number N of elements of vector P requires the musical phrase fr to be represented by N parameters, independently of the number of notes in phrase fr. Converting a musical phrase fr of the form {e_1, e_2, ..., e_n} into an N-element vector of parameters allows for representing only the distinctive features of the musical phrase fr. As shown above, transposition of a musical phrase and a uniform proportional tempo change can be represented as alterations of the values c and k. It would therefore be advantageous to design such a method of musical phrase parametrization for which:

P(fr_mod) = P(fr_ref)    (11)

where:

fr_mod(t) = fr_ref(kt) + c    (12)


Creating a numerical representation of musical structures to be used in automatic classification and prediction systems requires, among others, defining the following characteristics of musical phrases: the sequence length, the method of representing sound pitch, the methods of representing time-scale and frequency properties, and the methods of representing other musical properties by feature vectors. In addition, having defined various subsets of features, feature selection should be performed. Typically, this process consists in finding an optimal feature subset from the whole original feature set, which guarantees the accomplishment of a processing goal while minimizing a defined feature selection criterion [25]. Feature relevance may be evaluated on the basis of open-loop or closed-loop methods. In the first approach separability criteria are used; to this end the Fisher criterion is often employed. The closed-loop methods base feature selection on predictor performance, which means that feedback from the predictor quality is used for the feature selection process [25]. On the other hand, here we deal with a situation in which the feature set contains several disjoint feature subsets. The feature selection defined for the purpose of this study consists in first eliminating the less effective methods of parametrization according to the processing goal, and then reducing the number of parameters to the optimal one. Both open- and closed-loop methods were used in the study performed.

Individual musical structures may show significant differences in the number of elements, i.e. sounds or other musical units. In an extreme case one can imagine that the classifier is fed with a whole melody or a whole musical piece. It is therefore necessary to limit the number of elements in the numerical representation vector. Sound pitch can be expressed as an absolute or a relative value. An absolute representation is characterized by the exact definition of a reference sound (e.g. the C1 sound). In the case of an absolute representation, the number of possible values defining a given note in a sequence is equal to the number of possible sound pitch values restricted to the musical scale. A disadvantage of this representation is that transposition shifts the values of the sequence elements by a constant. In the case of a relative representation the reference point is being updated all the time. The reference point may be, e.g., the previous sound, the sound at the previously accented time part, or the sound at the beginning of the onset. The number of possible values defining a given note in a sequence is equal to the number of possible intervals. An advantage of the relative representation is the absence of changes to musical structures caused by transposition, as well as the ability to limit the scope of available intervals without limiting the available musical scales. Its disadvantage is sensitivity to small structure modifications resulting in a shift of the reference sound.

3.1 Parametric Representation

Research performed so far resulted in designing a number of parametric representations of musical phrases. Some of these methods were described in detail in the authors' earlier publications [13], [14], [15], [16], [17], [24]; therefore only their brief characteristics are given below. At the earlier stage of this study, both the Fisher criterion and the correlation coefficient were used for the evaluation of parameter quality [17].


Statistical Parametrization. The designed statistical parametrization approach is aimed at describing structural features of a musical phrase based on music theory [27]. The statistical parametrization introduced by the authors involves representing a musical phrase with five parameters [13], [15]:

• P1 – the difference between the weighted average sound pitch and the pitch of the lowest sound of the phrase, where T is the phrase duration, h_n denotes the pitch of the n-th sound, t_n is the duration of the n-th sound, and N is the number of sounds in the phrase:

P_1 = \frac{1}{T}\sum_{n=1}^{N} h_n t_n - \min_n(h_n)    (13)

• P2 – ambitus – the difference between the pitches of the highest and the lowest sounds of the phrase. Typically, the term ambitus denotes the range of pitches for a given voice in a part of music. It may also denote the pitch range that a musical instrument is capable of playing; however, in our experiments, the first meaning is closer to the definition given below:

P_2 = \max_n(h_n) - \min_n(h_n)    (14)

• P3 – the average absolute difference of pitch of subsequent sounds:

P_3 = \frac{1}{N-1}\sum_{n=1}^{N-1} |h_n - h_{n+1}|    (15)

• P4 – the duration of the longest sound of the phrase:

P_4 = \max_n(t_n)    (16)

• P5 – the average sound duration:

P_5 = \frac{1}{N}\sum_{n=1}^{N} t_n    (17)
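A direct Python sketch of these five statistics, for a phrase given as (pitch, duration) pairs, is shown below; the example phrase is invented and the pitch/duration units follow the MIDI-number and beat conventions used above.

```python
def statistical_parameters(phrase):
    """Compute P1..P5 of Eqs. (13)-(17) for a phrase given as (pitch, duration) pairs."""
    pitches = [p for p, _ in phrase]
    durations = [d for _, d in phrase]
    N = len(phrase)
    T = sum(durations)                                          # total phrase duration
    p1 = sum(p * d for p, d in phrase) / T - min(pitches)       # weighted mean minus lowest pitch
    p2 = max(pitches) - min(pitches)                            # ambitus
    p3 = sum(abs(pitches[n] - pitches[n + 1]) for n in range(N - 1)) / (N - 1)
    p4 = max(durations)                                         # longest sound
    p5 = T / N                                                  # average duration
    return p1, p2, p3, p4, p5

phrase = [(60, 1.0), (64, 0.5), (67, 0.5), (72, 2.0)]
print(statistical_parameters(phrase))
```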

Statistical parameters representing a musical phrase can be divided into two groups: parameters describing melodic features of the musical phrase (P1, P2, P3) and ones describing rhythmical features of the musical phrase (P4, P5).

Trigonometric Parametrization. Trigonometric parametrization involves representing the shape of a musical phrase with a vector of parameters P = [p_1, p_2, ..., p_M] in the form of a series of cosines [15]:

fr^*(t) = p_1 \cos\left[\left(t - \tfrac{1}{2}\right)\tfrac{\pi}{T}\right] + p_2 \cos\left[2\left(t - \tfrac{1}{2}\right)\tfrac{\pi}{T}\right] + ... + p_M \cos\left[M\left(t - \tfrac{1}{2}\right)\tfrac{\pi}{T}\right]    (18)

where M is the number of trigonometric parameters representing the musical phrase.


For the discrete time domain it is assumed that the sampling period is a common denominator of the durations of all rhythmic units of a musical phrase. Elements p_m of the trigonometric parameter vector P are calculated according to the following formula:

  p_m = \sum_{k=1}^{l} h_k \cos\left[m\left(k - \tfrac{1}{2}\right)\tfrac{\pi}{l}\right]   (19)

where p_m is the m-th element of the feature vector, l = T/s_nt denotes the phrase length expressed as a multiple of the sampling period, s_nt is the shortest note duration, and h_k denotes the pitch of the sound in the k-th sample. According to the above assumption each rhythmic value being a multiple of the sampling period is transformed into a series of rhythmic values equal to the sampling period. This leads to a loss of information on the rhythmic structure of the phrase. Absolute changes of values concerning sound pitch and proportional time changes do not affect the values of trigonometric parameters. Trigonometric parameters allow for reconstructing the shape of the musical phrase they represent. The phrase shape is reconstructed using the vector K = [k_1, k_2, ..., k_N]. Elements of vector K are calculated according to the following formula:

  k_n = \frac{1}{N} \sum_{m=1}^{M} 2 p_m \cos\left(\frac{m n \pi}{N}\right)   (20)

where M is the number of trigonometric parameters representing the musical phrase, and p_m denotes the m-th element of the parameter vector. Values of the elements k_n express in semitones the difference between the current and the average sound pitch in the musical phrase being reconstructed.
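A short sketch of formulas (19) and (20): computing the trigonometric parameters of a resampled pitch contour and reconstructing the phrase shape. The function names and the toy contour are assumed for illustration only.

```python
import numpy as np

def trig_parameters(h, M):
    """Trigonometric (cosine-series) parameters of a pitch contour h sampled
    at l equal steps, following formula (19)."""
    l = len(h)
    k = np.arange(1, l + 1)
    return np.array([np.sum(h * np.cos(m * (k - 0.5) * np.pi / l))
                     for m in range(1, M + 1)])

def reconstruct_shape(p, N):
    """Reconstruct the phrase shape at N points from the parameter vector p,
    following formula (20); values are relative pitches in semitones."""
    m = np.arange(1, len(p) + 1)
    return np.array([np.sum(2 * p * np.cos(m * n * np.pi / N)) / N
                     for n in range(1, N + 1)])

contour = np.array([0, 2, 4, 5, 4, 2, 0, -1], dtype=float)  # pitch contour in semitones
p = trig_parameters(contour, M=4)
print(reconstruct_shape(p, N=8))
```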

Polynomial Parametrization. A single-voice musical phrase fr can be represented by a function fr(t), whose time domain is either discrete or continuous. In the discrete time domain the musical phrase fr can be represented as a set of points in the two-dimensional space of time and sound pitch, i.e. by points denoting the sound pitch at time t and by points denoting note onsets. If the tempo varies in time (function ∆k(t) ≠ 0) or the musical phrase includes additional sounds of duration inconsistent with the general rhythmic pattern (e.g. an ornament or augmentation), the sampling period can be determined by minimizing the quantization error defined by the formula:

  \varepsilon(b) = \frac{1}{N-1} \sum_{i=1}^{N-1} \left| \frac{t_i - t_{i-1}}{b} - \mathrm{Round}\left( \frac{t_i - t_{i-1}}{b} \right) \right|   (21)

where b denotes the sampling period and Round is the rounding function. On the basis of the representation of a musical phrase in the discrete time domain one can approximate the musical phrase by a polynomial of order M:

  fr^*(t) = a_0 + a_1 t + a_2 t^2 + \ldots + a_M t^M   (22)

Coefficients a_0, a_1, …, a_M are found numerically by means of mean-square approximation, i.e. by minimizing the error ε of the form:

  \varepsilon^2 = \int_{0}^{T} \left( fr^*(t) - fr(t) \right)^2 dt   – for the continuous case
  \varepsilon^2 = \sum_{i=0}^{N} \left( fr^*_i - fr_i \right)^2   – for the discrete case   (23)

One can also express the error in semitones per sample, which facilitates the evaluation of the approximation, according to the formula:

  \chi = \frac{1}{N} \sum_{i=1}^{N} \left| fr^*_i - fr_i \right|   (24)
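A compact sketch of the polynomial parametrization: a least-squares fit of formula (22) (the discrete case of (23)) and the per-sample error of formula (24). The function name and the toy contour are assumptions for illustration.

```python
import numpy as np

def polynomial_parameters(pitches, order):
    """Fit a polynomial of the given order to a sampled pitch contour by
    least squares and report the mean absolute error in semitones per sample."""
    fr = np.asarray(pitches, dtype=float)
    t = np.arange(len(fr), dtype=float)
    coeffs = np.polyfit(t, fr, order)          # a_M ... a_0 (highest power first)
    fr_star = np.polyval(coeffs, t)            # approximated phrase fr*(t)
    chi = np.mean(np.abs(fr_star - fr))        # error in semitones per sample, (24)
    return coeffs, chi

coeffs, chi = polynomial_parameters([60, 62, 64, 65, 64, 62, 60, 59], order=3)
print(coeffs, chi)
```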

3.2 Binary Representation

Binary representation is based on dividing the time window W into n equal time sections T, where n is consistent with the metric division and T corresponds to the smallest, basic rhythmic unit in the music material being represented. Each time section T is assigned a bit of information b_T in the vector of rhythmic units. Bit b_T takes the value of 1 if a sound begins in the given time section T. If time section T covers a sound started in a previous section, or a pause, the rhythmic information bit b_T assumes the value of 0. An advantage of the binary representation of rhythmic structures is the fixed length of the sequence representation vector. On the other hand, its disadvantages are the large vector length in comparison with other representation methods and the possibility of errors resulting from time quantization.

On the basis of the methods of representing values of individual musical parameters one can distinguish three types of representations: local, distributed and global ones. In the case of a local representation every musical unit e_n is represented by a vector of n bits, where n is the number of all possible values of the musical unit e_n. The current value of musical unit e_n is represented by ascribing the value of 1 to the bit of the representation vector corresponding to this value. The other bits of the representation vector take the value of 0 (unipolar activation) or –1 (bipolar activation). This type of representation was used e.g. by Hörnel [10] and Todd [28]. The system of representing musical sounds proposed by Hörnel and co-workers is an example of a parametric representation [9]. In this system each subsequent note p is represented by the following parameters: consonance of note p with respect to its harmony, relation of note p towards its successor and predecessor in the case of dissonance against the harmonic content, direction of p (up, down to the next pitch), distance of note p to the base note (if p is consonant), octave, and tenuto – if p is an extension of the previous note of the same pitch.
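Returning to the binary rhythmic representation described at the beginning of this subsection, the following is a minimal sketch of building the rhythm bit vector; the function name, the beat-based units and the toy onsets are our own assumptions.

```python
def rhythm_bits(onsets, window, section):
    """Binary rhythmic representation: split the time window into equal
    sections and set a bit to 1 for every section in which a sound begins."""
    n = int(round(window / section))
    bits = [0] * n
    for t in onsets:
        idx = int(t // section)
        if 0 <= idx < n:
            bits[idx] = 1
    return bits

# Sounds starting at beats 0 and 1.5 in a 4-beat window, with the eighth
# note (0.5 beat) as the basic rhythmic unit:
print(rhythm_bits([0.0, 1.5], window=4.0, section=0.5))  # [1, 0, 0, 1, 0, 0, 0, 0]
```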

Table 1. Distributed representation of sound pitches according to Mozer.
Sound pitch: C C# D D# E F F# G G# A A# B

–1 –1 –1 –1 –1 –1 +1 +1 +1 +1 +1 +1

Mozer’s distributed representation –1 –1 –1 –1 –1 –1 –1 –1 –1 –1 –1 +1 –1 –1 +1 +1 –1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 +1 –1 +1 +1 –1 –1 +1 –1 –1 –1 –1 –1 –1 –1

–1 +1 +1 +1 +1 +1 +1 –1 –1 –1 –1 –1

The presented method of coding does not employ a direct representation of sound pitch; it is distributed with respect to pitch. Sound pitch is coded as a function of harmony. Such a distributed representation was used among others by Mozer [19]. In the case of a distributed representation the value of a musical unit e_n is encoded with m bits according to the formula:

  m = \log_2 N   (24)

where N is the number of possible values of the musical unit e_n. An example of representing the sounds of the chromatic scale using a distributed representation is presented in Table 1. In the case of a global representation the value of a musical unit is represented by a real value. The above methods of representing the values of individual musical units imply their suitability for processing certain types of music material, for certain tasks and analysis tools, classifiers and predictors.

3.3 Prediction of Musical Events

Our experiments were aimed at designing a method of predicting and entropy-coding music. We used the concept of predictive data coding presented by Shannon and later employed for investigating entropy coding of English text by Moradi, Grzymala-Busse and Roberts [18]. The engineered method was used as a musical event predictor in order to enhance a system of pitch detection of a musical sound. The block scheme of the prediction coding system for music is presented in Fig. 2. The idea of entropy coding involves using two identical predictors in the data coding and decoding modules. The coding process consists in determining the number of prediction attempts k required for a correct prediction of event e_{n+1}. Prediction is based on parameters of musical events collected in a data buffer. The number of prediction attempts k is sent to the decoder. The decoder module determines the value of event e_{n+1} by repeating k prediction attempts.

Fig. 2. Block diagram of the prediction coder and decoder.
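A minimal sketch of the coder/decoder loop depicted in Fig. 2, under the assumption that the predictor enumerates candidate events deterministically. The toy predictor below merely stands in for the neural predictor used in the experiments, and all names are ours.

```python
def encode_event(predictor, history, actual):
    """Coder side: count how many prediction attempts k the predictor needs
    before it proposes the actual event; only k has to be transmitted."""
    k = 1
    while predictor(history, attempt=k) != actual:
        k += 1
    return k

def decode_event(predictor, history, k):
    """Decoder side: an identical predictor repeats k attempts and takes
    the k-th proposal as the decoded event."""
    return predictor(history, attempt=k)

# A toy predictor that simply enumerates candidate events in a fixed order.
def toy_predictor(history, attempt):
    candidates = sorted(set(history)) + [max(history) + 1]
    return candidates[(attempt - 1) % len(candidates)]

history, actual = [60, 62, 64], 64
k = encode_event(toy_predictor, history, actual)
assert decode_event(toy_predictor, history, k) == actual
print(k)
```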

Subsequent values of samples – musical events – are then collected in a buffer. Two types of data buffers were implemented: a fixed-size buffer and a fading memory model. In the first case the buffer stores data on z musical events; each event is represented by a separate vector. That means that z vectors representing z individual musical events are supplied to the predictor input. In the experiments carried out the value of z was limited to 5, 10 and 20 samples (musical events). On the other hand, the fading memory model involves storing preceding values of the vector elements and summing them with the current ones according to the formula:

  b_n = \sum_{k=1}^{n} e_{n-k} r^k   (25)

where r is the fading factor from the range (0, 1). When the fading memory model is used, a single vector of parameters of musical events is supplied to the predictor input. This means a z-fold reduction of the number of input parameters compared with the buffer of size z.

For the needs of investigating the music predictor, a set of musical data consisting of fugues from The Well-Tempered Clavier by J. S. Bach was used as the musical material. In the experiments performed a neural network-based predictor was employed. A series of experiments aimed at optimizing the predictor structure, the data buffer parameters and the prediction algorithm parameters was performed. In the training process we utilized all voices of the individual fugues except the uppermost ones. The highest voices were used for testing the predictor. Three methods of parametric representation of sound pitch were utilized: the binary method, a so-called modified Hörnel representation and a modified Mozer representation. In all cases a relative representation was used, i.e. the differences between the pitches of subsequent sounds were coded. In the case of the binary representation individual musical intervals (differences between the pitches of subsequent sounds) are represented as 27-bit vectors. The utilized representation of sound pitch is presented in Table 2.
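A small sketch of the fading memory buffer update of formula (25); the function name and the scalar toy events are assumptions made only for illustration.

```python
import numpy as np

def fading_memory(events, r):
    """Fading-memory buffer, formula (25): b_n = sum_{k=1..n} e_{n-k} * r**k,
    where `events` is the sequence of event parameter vectors e_0 ... e_n."""
    e = np.asarray(events, dtype=float)
    n = len(e) - 1
    return sum(e[n - k] * r**k for k in range(1, n + 1))

# A toy run with scalar "events" and fading factor r = 0.5
print(fading_memory([1.0, 0.0, 1.0, 1.0], r=0.5))
```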

Table 2. Illustration of the binary representation of a musical interval (example – 2 semitones up).

Interval [in semitones]: –octave –12 –11 –10 –9 –8 –7 –6 –5 –4 –3 –2 –1 0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +11 +12 +octave
Bit vector:              0       0   0   0   0  0  0  0  0  0  0  0  0  0 0  1  0  0  0  0  0  0  0  0   0   0   0
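A sketch of this 27-bit relative representation; the index layout below, and in particular reading the ±octave slots as intervals larger than an octave, is our interpretation of Table 2 rather than a definition taken from the paper.

```python
import numpy as np

def encode_interval(semitones):
    """Encode a melodic interval as a 27-bit vector: one slot assumed to mean
    'more than an octave down', slots for -12..+12 semitones, and one slot
    assumed to mean 'more than an octave up'."""
    v = np.zeros(27, dtype=int)
    if semitones < -12:
        v[0] = 1                      # -octave slot
    elif semitones > 12:
        v[26] = 1                     # +octave slot
    else:
        v[semitones + 13] = 1         # -12 maps to index 1, 0 to 13, +12 to 25
    return v

print(encode_interval(2))  # 2 semitones up: the bit at index 15 is set, as in Table 2
```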

The representation of intervals designed by Hörnel is a diatonic one (corresponding to the seven-step musical scale). For the needs of our research we modified Hörnel's representation to allow for a chromatic (twelve-step) representation; individual intervals are represented by means of 11 parameters. The method of representing sound pitch designed by Mozer characterizes pitch as an absolute value. Within the scope of our research we modified Mozer's representation to allow a relative representation of the interval size; the representation was complemented by adding direction and octave bits, so that an individual musical event is coded by means of 8 parameters. A relative binary representation was designed for coding rhythmic values. Rhythmic values are coded by a parameter vector:

  p^r = \{ p_1^r, p_2^r, p_3^r, p_4^r, p_5^r \}   (26)

where the individual parameters p_1^r, p_2^r and p_3^r are defined in formulas (27) and (28) as piecewise functions of the ratio e^r_{n-1}/e^r_n of the rhythmic values of two consecutive events, with separate expressions for different ranges of this ratio (e.g. e^r_{n-1}/e^r_n ≥ 2).


Preface

We would like to present, with great pleasure, the first volume of a new journal, Transactions on Rough Sets. This journal, part of the new journal subline in the Springer-Verlag series Lecture Notes in Computer Science, is devoted to the entire spectrum of rough set related issues, starting from logical and mathematical foundations of rough sets, through all aspects of rough set theory and its applications, data mining, knowledge discovery and intelligent information processing, to relations between rough sets and other approaches to uncertainty, vagueness, and incompleteness, such as fuzzy sets, theory of evidence, etc.

The first, pioneering papers on rough sets, written by the originator of the idea, Professor Zdzislaw Pawlak, were published in the early 1980s. We are proud to dedicate this volume to our mentor, Professor Zdzislaw Pawlak, who kindly enriched this volume with his contribution on philosophical, logical, and mathematical foundations of rough set theory. In his paper Professor Pawlak shows all over again the underlying ideas of rough set theory as well as its relations with Bayes' theorem, conflict analysis, flow graphs, decision networks, and decision rules.

After an overview and introductory article written by Professor Pawlak, the ten following papers represent and focus on rough set theory-related areas. Some papers provide an extension of rough set theory towards analysis of very large data sets, real data tables, data sets with missing values and rough non-deterministic information. Other theory-based papers deal with variable precision fuzzy-rough sets, consistency measures for conflict profiles, and layered learning for concept synthesis. In addition, a paper on generalization of rough sets and rule extraction provides two different interpretations of rough sets. The last paper of this group addresses a partition model of granular computing.

Other topics with a more application-orientated view are covered by the following eight articles of this first volume of Transactions on Rough Sets. They can be categorized into the following groups:
– music processing,
– rough set theory applied to software design models and inductive learning programming,
– environmental engineering models,
– medical data processing,
– pattern recognition and classification.

These papers exemplify analysis and exploration of complex data sets from various domains. They provide useful insight into analyzed problems, showing for example how to compute decision rules from incomplete data. We believe that readers of this volume will better appreciate rough set theory-related trends after reading the case studies.


Many scientists and institutions have contributed to the creation and the success of the rough set community. We are very thankful to everybody within the International Rough Set Society who supported the idea of creating a new LNCS journal subline – the Transactions on Rough Sets. It would not have been possible without Professors Peters' and Skowron's invaluable initiative, thus we are especially grateful to them. We believe that this very first issue will be followed by many others, reporting new developments in the rough set domain. This issue would not have been possible without the great efforts of many anonymously acting reviewers. Here, we would like to express our sincere thanks to all of them. Finally, we would like to express our gratitude to the LNCS editorial staff of Springer-Verlag, in particular Alfred Hofmann, Ursula Barth and Christine Günther, who supported us in a very professional way.

Throughout the preparation of this volume the Editors have been supported by various research programs and funds: Jerzy Grzymala-Busse has been supported by NSF award 9972843; Bożena Kostek has been supported by grant 4T11D01422 from the Polish Ministry for Scientific Research and Information Technology; Roman Świniarski has received support from the "Adaptive Data Mining and Knowledge Discovery Methods for Distributed Data" grant, awarded by Lockheed-Martin; and Marcin Szczuka and Roman Świniarski have been supported by grant 3T11C00226 from the Polish Ministry for Scientific Research and Information Technology.

April 2004

Jerzy W. Grzymala-Busse, Bożena Kostek, Roman Świniarski, Marcin Szczuka

LNCS Transactions on Rough Sets

This journal subline has as its principal aim the fostering of professional exchanges between scientists and practitioners who are interested in the foundations and applications of rough sets. Topics include foundations and applications of rough sets as well as foundations and applications of hybrid methods combining rough sets with other approaches important for the development of intelligent systems. The journal includes high-quality research articles accepted for publication on the basis of thorough peer reviews. Dissertations and monographs up to 250 pages that include new research results can also be considered as regular papers. Extended and revised versions of selected papers from conferences can also be included in regular or special issues of the journal.

Honorary Editor: Zdzislaw Pawlak
Editors-in-Chief: James F. Peters, Andrzej Skowron

Editorial Board M. Beynon G. Cattaneo A. Czy˙zewski J.S. Deogun D. Dubois I. Duentsch S. Greco J.W. Grzymala-Busse M. Inuiguchi J. Jrvinen D. Kim J. Komorowski C.J. Liau T.Y. Lin E. Menasalvas M. Moshkov T. Murai

M. do C. Nicoletti H.S. Nguyen S.K. Pal L. Polkowski H. Prade S. Ramanna R. Slowi´ nski J. Stepaniuk ´ R. Swiniarski Z. Suraj M. Szczuka S. Tsumoto G. Wang Y. Yao N. Zhong W. Ziarko

Table of Contents

Rough Sets – Introduction
Some Issues on Rough Sets (Zdzislaw Pawlak) 1

Rough Sets – Theory
Learning Rules from Very Large Databases Using Rough Multisets (Chien-Chung Chan) 59
Data with Missing Attribute Values: Generalization of Indiscernibility Relation and Rule Induction (Jerzy W. Grzymala-Busse) 78
Generalizations of Rough Sets and Rule Extraction (Masahiro Inuiguchi) 96
Towards Scalable Algorithms for Discovering Rough Set Reducts (Marzena Kryszkiewicz and Katarzyna Cichoń) 120
Variable Precision Fuzzy Rough Sets (Alicja Mieszkowicz-Rolka and Leszek Rolka) 144
Greedy Algorithm of Decision Tree Construction for Real Data Tables (Mikhail Ju. Moshkov) 161
Consistency Measures for Conflict Profiles (Ngoc Thanh Nguyen and Michal Malowiecki) 169
Layered Learning for Concept Synthesis (Sinh Hoa Nguyen, Jan Bazan, Andrzej Skowron, and Hung Son Nguyen) 187
Basic Algorithms and Tools for Rough Non-deterministic Information Analysis (Hiroshi Sakai and Akimichi Okuma) 209
A Partition Model of Granular Computing (Yiyu Yao) 232

Rough Sets – Applications
Musical Phrase Representation and Recognition by Means of Neural Networks and Rough Sets (Andrzej Czyzewski, Marek Szczerba, and Bozena Kostek) 254
Processing of Musical Metadata Employing Pawlak's Flow Graphs (Bozena Kostek and Andrzej Czyzewski) 279
Data Decomposition and Decision Rule Joining for Classification of Data with Missing Values (Rafal Latkowski and Michal Mikolajczyk) 299
Rough Sets and Relational Learning (R.S. Milton, V. Uma Maheswari, and Arul Siromoney) 321
Approximation Space for Software Models (James F. Peters and Sheela Ramanna) 338
Application of Rough Sets to Environmental Engineering Models (Robert H. Warren, Julia A. Johnson, and Gordon H. Huang) 356
Rough Set Theory and Decision Rules in Data Analysis of Breast Cancer Patients (Jerzy Zaluski, Renata Szoszkiewicz, Jerzy Krysiński, and Jerzy Stefanowski) 375
Independent Component Analysis, Principal Component Analysis and Rough Sets in Face Recognition (Roman W. Świniarski and Andrzej Skowron) 392

Author Index 405

Some Issues on Rough Sets

Zdzislaw Pawlak

Institute for Theoretical and Applied Informatics, Polish Academy of Sciences, ul. Baltycka 5, 44-100 Gliwice, Poland
Warsaw School of Information Technology, ul. Newelska 6, 01-447 Warsaw, Poland
(former University of Information Technology and Management)
[email protected]

1 Introduction

The aim of this paper is to give the rudiments of rough set theory and to present some recent research directions proposed by the author.

Rough set theory is a new mathematical approach to imperfect knowledge. The problem of imperfect knowledge has been tackled for a long time by philosophers, logicians and mathematicians. Recently it became also a crucial issue for computer scientists, particularly in the area of artificial intelligence. There are many approaches to the problem of how to understand and manipulate imperfect knowledge. The most successful one is, no doubt, the fuzzy set theory proposed by Lotfi Zadeh [1]. Rough set theory, proposed by the author in [2], presents still another attempt at this problem. This theory has attracted the attention of many researchers and practitioners all over the world, who have contributed essentially to its development and applications. Rough set theory overlaps with many other theories; however, we will refrain from discussing these connections here. Despite this, rough set theory may be considered as an independent discipline in its own right.

Rough set theory has found many interesting applications. The rough set approach seems to be of fundamental importance to AI and cognitive sciences, especially in the areas of machine learning, knowledge acquisition, decision analysis, knowledge discovery from databases, expert systems, inductive reasoning and pattern recognition. The main advantage of rough set theory in data analysis is that it does not need any preliminary or additional information about data – like probability in statistics, or basic probability assignment in Dempster-Shafer theory, grade of membership or the value of possibility in fuzzy set theory. One can observe the following about the rough set approach:
– introduction of efficient algorithms for finding hidden patterns in data,
– determination of minimal sets of data (data reduction),
– evaluation of the significance of data,
– generation of sets of decision rules from data,
– easy-to-understand formulation,
– straightforward interpretation of obtained results,
– suitability of many of its algorithms for parallel processing.

Rough set theory has been extended in many ways (see, e.g., [3–17]) but we will not discuss these issues in this paper. Basic ideas of rough set theory and its extensions, as well as many interesting applications, can be found in books (see, e.g., [18–27, 12, 28–30]), special issues of journals (see, e.g., [31–34, 34–38]), proceedings of international conferences (see, e.g., [39–49]), tutorials (e.g., [50–53]), and on the internet (see, e.g., www.roughsets.org, logic.mimuw.edu.pl, rsds.wsiz.rzeszow.pl).

The paper is organized as follows: Section 2 (Basic Concepts) contains a general formulation of the basic ideas of rough set theory together with a brief discussion of its place in classical set theory. Section 3 (Rough Sets and Reasoning from Data) presents the application of the rough set concept to reasoning from data (data mining). Section 4 (Rough Sets and Bayes' Theorem) gives a new look at Bayes' theorem and shows that Bayes' rule can be used differently to that offered by classical Bayesian reasoning methodology. Section 5 (Rough Sets and Conflict Analysis) discusses the application of the rough set concept to the study of conflict. In Section 6 (Data Analysis and Flow Graphs) we show that many problems in data analysis can be boiled down to flow analysis in a flow network. This paper is a modified version of lectures delivered at the Tarragona University seminar on Formal Languages and Rough Sets in August 2003.

2 Rough Sets – Basic Concepts

2.1 Introduction

In this section we give some general remarks on the concept of a set and the place of rough sets in set theory. The concept of a set is fundamental for the whole of mathematics. Modern set theory was formulated by Georg Cantor [54]. Bertrand Russell discovered that the intuitive notion of a set proposed by Cantor leads to antinomies [55]. Two kinds of remedy for this discontent have been proposed: axiomatization of Cantorian set theory, and alternative set theories.

Another issue discussed in connection with the notion of a set or a concept is vagueness (see, e.g., [56–61]). Mathematics requires that all mathematical notions (including set) must be exact (Gottlob Frege [62]). However, philosophers, and recently computer scientists, have become interested in vague concepts. In fuzzy set theory vagueness is defined by graduated membership. Rough set theory expresses vagueness not by means of membership, but by employing a boundary region of a set. If the boundary region of a set is empty it means that the set is crisp, otherwise the set is rough (inexact). A nonempty boundary region of a set means that our knowledge about the set is not sufficient to define the set precisely.


A detailed analysis of sorites paradoxes for vague concepts using rough sets and fuzzy sets is presented in [63]. In this section the relationship between sets, fuzzy sets and rough sets will be outlined and briefly discussed.

2.2 Sets

The notion of a set is not only basic for mathematics but it also plays an important role in natural language. We often speak about sets (collections) of various objects of interest, e.g., collections of books, paintings, people etc. The intuitive meaning of a set according to some dictionaries is the following:

"A number of things of the same kind that belong or are used together." (Webster's Dictionary)

"Number of things of the same kind, that belong together because they are similar or complementary to each other." (The Oxford English Dictionary)

Thus a set is a collection of things which are somehow related to each other, but the nature of this relationship is not specified in these definitions. In fact these definitions are due to Cantor [54], whose definition reads as follows:

"Unter einer Mannigfaltigkeit oder Menge verstehe ich nämlich allgemein jedes Viele, welches sich als Eines denken lässt, d.h. jeden Inbegriff bestimmter Elemente, welcher durch ein Gesetz zu einem Ganzen verbunden werden kann."

Thus according to Cantor a set is a collection of any objects which, according to some law, can be considered as a whole. All mathematical objects, e.g., relations, functions, numbers, etc., are some kind of sets. In fact set theory is needed in mathematics to provide rigor.

Russell discovered that the Cantorian notion of a set leads to antinomies (contradictions). One of the best known antinomies, called the powerset antinomy, goes as follows: consider the (infinite) set X of all sets. Thus X is the greatest set. Let Y denote the set of all subsets of X. Obviously Y is greater than X, because the number of subsets of a set is always greater than the number of its elements. Hence X is not the greatest set, as assumed, and we arrive at a contradiction. Thus the basic concept of mathematics, the concept of a set, is contradictory. This means that a set cannot be a collection of arbitrary elements, as was stipulated by Cantor.

As a remedy for this defect several improvements of set theory have been proposed. For example:
– Axiomatic set theory (Zermelo and Fraenkel, 1904).
– Theory of types (Whitehead and Russell, 1910).
– Theory of classes (v. Neumann, 1920).
All these improvements consist in restrictions put on the objects which can form a set. The restrictions are expressed by properly chosen axioms, which say how


the set can be built. They are called, in contrast to Cantor's intuitive set theory, axiomatic set theories. Instead of improving Cantor's set theory by its axiomatization, some mathematicians proposed an escape from classical set theory by creating a completely new idea of a set, which would free the theory from antinomies. Some of them are listed below.
– Mereology (Leśniewski, 1915).
– Alternative set theory (Vopenka, 1970).
– "Penumbral" set theory (Apostoli and Kanada, 1999).
No doubt the most interesting proposal was given by Stanisław Leśniewski [64], who proposed, instead of the membership relation between elements and sets employed in classical set theory, the relation of "being a part". In his set theory, called mereology, this relation is a fundamental one. None of the three "new" set theories mentioned above were accepted by mathematicians; however, Leśniewski's mereology attracted some attention of philosophers and recently also of computer scientists (e.g., Lech Polkowski and Andrzej Skowron [6]).

In classical set theory a set is uniquely determined by its elements. In other words, this means that every element must be uniquely classified as belonging to the set or not. That is to say, the notion of a set is a crisp (precise) one. For example, the set of odd numbers is crisp because every number is either odd or even. In contrast, the notion of a beautiful painting is vague, because we are unable to classify uniquely all paintings into two classes: beautiful and not beautiful. Thus beauty is not a precise but a vague concept. In mathematics we have to use crisp notions, otherwise precise reasoning would be impossible. However, philosophers have for many years also been interested in vague (imprecise) notions. Almost all concepts we use in natural language are vague. Therefore common sense reasoning based on natural language must be based on vague concepts and not on classical logic. This is why vagueness is important for philosophers and recently also for computer scientists.

Vagueness is usually associated with the boundary region approach (i.e., the existence of objects which cannot be uniquely classified to the set or its complement), which was first formulated in 1893 by the father of modern logic, Gottlob Frege [62], who wrote:

"Der Begriff muss scharf begrenzt sein. Einem unscharf begrenzten Begriffe würde ein Bezirk entsprechen, der nicht überall eine scharfe Grenzlinie hätte, sondern stellenweise ganz verschwimmend in die Umgebung überginge. Das wäre eigentlich gar kein Bezirk; und so wird ein unscharf definirter Begriff mit Unrecht Begriff genannt. Solche begriffsartige Bildungen kann die Logik nicht als Begriffe anerkennen; es ist unmöglich, von ihnen genaue Gesetze aufzustellen. Das Gesetz des ausgeschlossenen Dritten ist ja eigentlich nur in anderer Form die Forderung, dass der Begriff scharf begrenzt sei. Ein beliebiger Gegenstand x fällt entweder unter den Begriff y, oder er fällt nicht unter ihn: tertium non datur."


Thus according to Frege "The concept must have a sharp boundary. To the concept without a sharp boundary there would correspond an area that had not a sharp boundary-line all around." That is, mathematics must use crisp, not vague concepts, otherwise it would be impossible to reason precisely.

Summing up, vagueness is
– not allowed in mathematics,
– interesting for philosophy,
– necessary for computer science.

2.3 Fuzzy Sets

Zadeh proposed a completely new, elegant approach to vagueness called fuzzy set theory [1]. In his approach an element can belong to a set to a degree k (0 ≤ k ≤ 1), in contrast to classical set theory, where an element must definitely belong or not belong to a set. For example, in the language of classical set theory we can state that one is definitely ill or healthy, whereas in fuzzy set theory we can say that someone is ill (or healthy) in 60 percent (i.e., to the degree 0.6). Of course, at once the question arises where we get the value of the degree from. This issue raised a lot of discussion, but we will refrain from considering this problem here.

Thus the fuzzy membership function can be presented as \mu_X(x) \in \langle 0, 1 \rangle, where X is a set and x is an element. Let us observe that the definition of a fuzzy set involves more advanced mathematical concepts, real numbers and functions, whereas in classical set theory the notion of a set is used as a fundamental notion of the whole of mathematics and is used to derive any other mathematical concepts, e.g., numbers and functions. Consequently fuzzy set theory cannot replace classical set theory, because, in fact, the latter is needed to define fuzzy sets. The fuzzy membership function has the following properties:

  \mu_{U-X}(x) = 1 - \mu_X(x) for any x \in U,
  \mu_{X \cup Y}(x) = \max(\mu_X(x), \mu_Y(x)) for any x \in U,   (1)
  \mu_{X \cap Y}(x) = \min(\mu_X(x), \mu_Y(x)) for any x \in U.

This means that the membership of an element in the union and intersection of sets is uniquely determined by its membership in the constituent sets. This is a very nice property and allows very simple operations on fuzzy sets, which is a very important feature both theoretically and practically. Fuzzy set theory and its applications have developed very extensively over recent years and have attracted the attention of practitioners, logicians and philosophers worldwide.


2.4 Rough Sets

Rough set theory [2, 18] is still another approach to vagueness. Similarly to fuzzy set theory it is not an alternative to classical set theory but it is embedded in it. Rough set theory can be viewed as a specific implementation of Frege's idea of vagueness, i.e., imprecision in this approach is expressed by a boundary region of a set, and not by a partial membership, as in fuzzy set theory.

The rough set concept can be defined quite generally by means of topological operations, interior and closure, called approximations. Let us describe this problem more precisely. Suppose we are given a set of objects U called the universe and an indiscernibility relation R ⊆ U × U, representing our lack of knowledge about elements of U. For the sake of simplicity we assume that R is an equivalence relation. Let X be a subset of U. We want to characterize the set X with respect to R. To this end we will need the basic concepts of rough set theory given below.
– The lower approximation of a set X (with respect to R) is the set of all objects which can be for certain classified as X with respect to R (are certainly X with respect to R).
– The upper approximation of a set X (with respect to R) is the set of all objects which can possibly be classified as X with respect to R (are possibly X with respect to R).
– The boundary region of a set X (with respect to R) is the set of all objects which can be classified neither as X nor as not-X with respect to R.
Now we are ready to give the definition of rough sets.
– Set X is crisp (exact with respect to R), if the boundary region of X is empty.
– Set X is rough (inexact with respect to R), if the boundary region of X is nonempty.
Thus a set is rough (imprecise) if it has a nonempty boundary region; otherwise the set is crisp (precise). This is exactly the idea of vagueness proposed by Frege.

The approximations and the boundary region can be defined more precisely. To this end we need some additional notation. The equivalence class of R determined by element x will be denoted by R(x). The indiscernibility relation in a certain sense describes our lack of knowledge about the universe. Equivalence classes of the indiscernibility relation, called granules generated by R, represent elementary portions of knowledge we are able to perceive due to R. Thus in view of the indiscernibility relation, in general, we are unable to observe individual objects but we are forced to reason only about the accessible granules of knowledge. Formal definitions of approximations and the boundary region are as follows:

R-lower approximation of X:
  R_*(X) = \bigcup_{x \in U} \{ R(x) : R(x) \subseteq X \},   (2)

R-upper approximation of X:
  R^*(X) = \bigcup_{x \in U} \{ R(x) : R(x) \cap X \neq \emptyset \},   (3)

R-boundary region of X:
  BN_R(X) = R^*(X) - R_*(X).   (4)

As we can see from the definitions, approximations are expressed in terms of granules of knowledge. The lower approximation of a set is the union of all granules which are entirely included in the set; the upper approximation is the union of all granules which have a non-empty intersection with the set; the boundary region of a set is the difference between the upper and the lower approximation. In other words, due to the granularity of knowledge, rough sets cannot be characterized by using available knowledge. Therefore with every rough set we associate two crisp sets, called its lower and upper approximation. Intuitively, the lower approximation of a set consists of all elements that surely belong to the set, the upper approximation of the set consists of all elements that possibly belong to the set, and the boundary region of the set consists of all elements that cannot be classified uniquely to the set or its complement by employing available knowledge. Thus any rough set, in contrast to a crisp set, has a non-empty boundary region. The approximation definition is clearly depicted in Figure 1.

Fig. 1. A rough set
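As an illustration of definitions (2)-(4), the sketch below computes the lower and upper approximations and the boundary region for a toy universe; the function name and the example granulation are our own illustrative choices.

```python
def approximations(blocks, X):
    """Lower and upper approximation and boundary region of a set X,
    given the partition of the universe into indiscernibility granules."""
    X = set(X)
    lower, upper = set(), set()
    for block in blocks:
        b = set(block)
        if b <= X:          # granule entirely included in X
            lower |= b
        if b & X:           # granule intersecting X
            upper |= b
    return lower, upper, upper - lower

# Granules of an equivalence relation over U = {1,...,6} and a target set X
blocks = [{1, 4, 6}, {2, 5}, {3}]
X = {1, 2, 3, 6}
lower, upper, boundary = approximations(blocks, X)
print(lower, upper, boundary)   # only {3} is certainly in X; the rest is boundary
```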


Approximations have the following properties:

  R_*(X) \subseteq X \subseteq R^*(X),
  R_*(\emptyset) = R^*(\emptyset) = \emptyset; R_*(U) = R^*(U) = U,
  R^*(X \cup Y) = R^*(X) \cup R^*(Y),
  R_*(X \cap Y) = R_*(X) \cap R_*(Y),
  R_*(X \cup Y) \supseteq R_*(X) \cup R_*(Y),   (5)
  R^*(X \cap Y) \subseteq R^*(X) \cap R^*(Y),
  X \subseteq Y \rightarrow R_*(X) \subseteq R_*(Y) \; \& \; R^*(X) \subseteq R^*(Y),
  R_*(-X) = -R^*(X), R^*(-X) = -R_*(X),
  R_* R_*(X) = R^* R_*(X) = R_*(X),
  R^* R^*(X) = R_* R^*(X) = R^*(X).

It is easily seen that approximations are in fact interior and closure operations in a topology generated by the indiscernibility relation. Thus fuzzy set theory and rough set theory require completely different mathematical settings.

Rough sets can also be defined by employing, instead of approximations, the rough membership function [65]

  \mu^R_X : U \rightarrow \langle 0, 1 \rangle,   (6)

where

  \mu^R_X(x) = \frac{card(X \cap R(x))}{card(R(x))},   (7)

and card(X) denotes the cardinality of X. The rough membership function expresses the conditional probability that x belongs to X given R, and can be interpreted as the degree to which x belongs to X in view of the information about x expressed by R. The meaning of the rough membership function is depicted in Figure 2. The rough membership function can be used to define the approximations and the boundary region of a set, as shown below:

  R_*(X) = \{ x \in U : \mu^R_X(x) = 1 \},
  R^*(X) = \{ x \in U : \mu^R_X(x) > 0 \},   (8)
  BN_R(X) = \{ x \in U : 0 < \mu^R_X(x) < 1 \}.

It can be shown that the membership function has the following properties [65]:

  \mu^R_X(x) = 1 iff x \in R_*(X),
  \mu^R_X(x) = 0 iff x \in U - R^*(X),   (9)
  0 < \mu^R_X(x) < 1 iff x \in BN_R(X),


  \mu^R_{U-X}(x) = 1 - \mu^R_X(x) for any x \in U,
  \mu^R_{X \cup Y}(x) \geq \max(\mu^R_X(x), \mu^R_Y(x)) for any x \in U,
  \mu^R_{X \cap Y}(x) \leq \min(\mu^R_X(x), \mu^R_Y(x)) for any x \in U.

Fig. 2. Rough membership function

From these properties it follows that the rough membership differs essentially from the fuzzy membership, because the membership for the union and intersection of sets, in general, cannot be computed – as in the case of fuzzy sets – from the memberships of their constituents. Thus formally the rough membership is a generalization of fuzzy membership. Besides, the rough membership function, in contrast to the fuzzy membership function, has a probabilistic flavour.

Now we can give two definitions of rough sets. Set X is rough with respect to R if R_*(X) \neq R^*(X). Set X is rough with respect to R if for some x, 0 < \mu^R_X(x) < 1. It is interesting to observe that the above definitions are not equivalent [65], but we will not discuss this issue here. One can define the following four basic classes of rough sets, i.e., four categories of vagueness:

  R_*(X) \neq \emptyset and R^*(X) \neq U, iff X is roughly R-definable,
  R_*(X) = \emptyset and R^*(X) \neq U, iff X is internally R-indefinable,   (10)
  R_*(X) \neq \emptyset and R^*(X) = U, iff X is externally R-indefinable,
  R_*(X) = \emptyset and R^*(X) = U, iff X is totally R-indefinable.

The intuitive meaning of this classification is the following. If X is roughly R-definable, this means that we are able to decide for some elements of U whether they belong to X or −X, using R.


If X is internally R-indefinable, this means that we are able to decide whether some elements of U belong to −X, but we are unable to decide for any element of U whether it belongs to X or not, using R. If X is externally R-indefinable, this means that we are able to decide for some elements of U whether they belong to X, but we are unable to decide for any element of U whether it belongs to −X or not, using R. If X is totally R-indefinable, we are unable to decide for any element of U whether it belongs to X or −X, using R.

A rough set can also be characterized numerically by the following coefficient:

  \alpha_R(X) = \frac{card(R_*(X))}{card(R^*(X))},   (11)

called the accuracy of approximation. Obviously 0 ≤ α_R(X) ≤ 1. If α_R(X) = 1, X is crisp with respect to R (X is precise with respect to R); otherwise, if α_R(X) < 1, X is rough with respect to R (X is vague with respect to R).

It is interesting to compare the definitions of classical sets, fuzzy sets and rough sets. A classical set is a primitive notion and is defined intuitively or axiomatically. Fuzzy sets are defined by employing the fuzzy membership function, which involves advanced mathematical structures, numbers and functions. Rough sets are defined by approximations. Thus this definition also requires advanced mathematical concepts. Let us also mention that rough set theory clearly distinguishes two very important concepts, vagueness and uncertainty, very often confused in the AI literature. Vagueness is the property of sets and can be described by approximations, whereas uncertainty is the property of elements of a set and can be expressed by the rough membership function.
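Continuing the same toy granulation, the following self-contained sketch computes the rough membership function of formula (7) and the accuracy of approximation of formula (11); the names and the data are again only illustrative assumptions.

```python
def rough_membership(blocks, X, x):
    """Rough membership, formula (7): the fraction of x's indiscernibility
    class (granule) that falls inside X."""
    X = set(X)
    block = next(set(b) for b in blocks if x in b)
    return len(block & X) / len(block)

def accuracy(blocks, X):
    """Accuracy of approximation, formula (11): card(lower) / card(upper)."""
    X = set(X)
    lower = sum(len(b) for b in map(set, blocks) if b <= X)
    upper = sum(len(b) for b in map(set, blocks) if b & X)
    return lower / upper

blocks = [{1, 4, 6}, {2, 5}, {3}]
X = {1, 2, 3, 6}
print(rough_membership(blocks, X, 1))   # 2/3: two of the granule {1, 4, 6} lie in X
print(accuracy(blocks, X))              # 1/6 < 1, so X is rough with respect to R
```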

3 Rough Sets and Reasoning from Data

3.1 Introduction

In this section we deﬁne basic concepts of rough set theory in terms of data, in contrast to general formulation presented in Section 2. This is necessary if we want to apply rough sets to reason from data. In what follows we assume that, in contrast to classical set theory, we have some additional data (information, knowledge) about elements of a universe of discourse. Elements that exhibit the same features are indiscernible (similar) and form blocks that can be understood as elementary granules (concepts) of knowledge about the universe. For example, patients suﬀering from a certain disease, displaying the same symptoms are indiscernible and may be thought of as representing a granule (disease unit) of medical knowledge. These granules can be considered as elementary building blocks of knowledge. Elementary concepts can be combined into compound concepts, i.e., concepts that are uniquely determined in terms of elementary concepts. Any union of elementary sets is called a crisp set, and any other sets are referred to as rough (vague, imprecise).

3.2 An Example

Before we formulate the above ideas more precisely let us consider a simple tutorial example. Data are often presented as a table, columns of which are labeled by attributes, rows by objects of interest, and entries of the table are attribute values. For example, in a table containing information about patients suffering from a certain disease, objects are patients (strictly speaking their ID's), attributes can be, for example, blood pressure, body temperature etc., whereas the entry corresponding to object Smith and the attribute blood pressure can be normal. Such tables are known as information tables, attribute-value tables or information systems. We will use here the term information system. Below an example of an information system is presented. Suppose we are given data about 6 patients, as shown in Table 1.

Table 1. Exemplary information system

Patient  Headache  Muscle-pain  Temperature  Flu
p1       no        yes          high         yes
p2       yes       no           high         yes
p3       yes       yes          very high    yes
p4       no        yes          normal       no
p5       yes       no           high         no
p6       no        yes          very high    yes

Columns of the table are labeled by attributes (symptoms) and rows – by objects (patients), whereas entries of the table are attribute values. Thus each row of the table can be seen as information about speciﬁc patient. For example, patient p2 is characterized in the table by the following attributevalue set (Headache, yes), (Muscle-pain, no), (Temperature, high), (Flu, yes), which form the information about the patient. In the table patients p2, p3 and p5 are indiscernible with respect to the attribute Headache, patients p3 and p6 are indiscernible with respect to attributes Muscle-pain and Flu, and patients p2 and p5 are indiscernible with respect to attributes Headache, Muscle-pain and Temperature. Hence, for example, the attribute Headache generates two elementary sets {p2, p3, p5} and {p1, p4, p6}, whereas the attributes Headache and Muscle-pain form the following elementary sets: {p1, p4, p6}, {p2, p5} and {p3}. Similarly one can deﬁne elementary sets generated by any subset of attributes. Patient p2 has ﬂu, whereas patient p5 does not, and they are indiscernible with respect to the attributes Headache, Muscle-pain and Temperature, hence ﬂu cannot be characterized in terms of attributes Headache, Muscle-pain and


Temperature. Hence p2 and p5 are the boundary-line cases, which cannot be properly classified in view of the available knowledge. The remaining patients p1, p3 and p6 display symptoms which enable us to classify them with certainty as having flu, patients p2 and p5 cannot be excluded as having flu, and patient p4 for sure does not have flu, in view of the displayed symptoms. Thus the lower approximation of the set of patients having flu is the set {p1, p3, p6} and the upper approximation of this set is the set {p1, p2, p3, p5, p6}, whereas the boundary-line cases are patients p2 and p5. Similarly, p4 does not have flu and p2, p5 cannot be excluded as having flu, thus the lower approximation of this concept is the set {p4}, the upper approximation is the set {p2, p4, p5}, and the boundary region of the concept "not flu" is the set {p2, p5}, the same as in the previous case.

3.3 Information Systems

Now we are ready to formulate basic concepts of rough set theory using data. Suppose we are given two finite, non-empty sets U and A, where U is the universe and A is a set of attributes. The pair S = (U, A) will be called an information system. With every attribute a ∈ A we associate a set V_a of its values, called the domain of a. Any subset B of A determines a binary relation I(B) on U, which will be called an indiscernibility relation, and is defined as follows:

  x I(B) y if and only if a(x) = a(y) for every a \in B,   (12)

where a(x) denotes the value of attribute a for element x. Obviously I(B) is an equivalence relation. The family of all equivalence classes of I(B), i.e., the partition determined by B, will be denoted by U/I(B), or simply U/B; an equivalence class of I(B), i.e., a block of the partition U/B, containing x will be denoted by B(x). If (x, y) belongs to I(B) we will say that x and y are B-indiscernible. Equivalence classes of the relation I(B) (or blocks of the partition U/B) are referred to as B-elementary sets. In the rough set approach the elementary sets are the basic building blocks (concepts) of our knowledge about reality. Now the approximations can be defined as follows:

  B_*(X) = \{ x \in U : B(x) \subseteq X \},   (13)

  B^*(X) = \{ x \in U : B(x) \cap X \neq \emptyset \},   (14)

called the B-lower and the B-upper approximation of X, respectively. The set

  BN_B(X) = B^*(X) - B_*(X)   (15)

will be referred to as the B-boundary region of X. If the boundary region of X is the empty set, i.e., BN_B(X) = ∅, then the set X is crisp (exact) with respect to B; in the opposite case, i.e., if BN_B(X) ≠ ∅, the set X is referred to as rough (inexact) with respect to B.
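A small sketch of how the B-elementary sets and the approximations (13)-(14) could be computed directly from an attribute-value table; the data layout, the function names and the reuse of the Table 1 flu data are our own illustrative assumptions.

```python
from itertools import groupby

def indiscernibility_blocks(table, attributes):
    """Partition the objects of an information system into B-elementary
    sets: objects with identical values on all attributes in B."""
    key = lambda obj: tuple(table[obj][a] for a in attributes)
    objs = sorted(table, key=key)
    return [set(g) for _, g in groupby(objs, key=key)]

def lower_upper(table, attributes, X):
    """B-lower and B-upper approximation of X, formulas (13)-(14)."""
    X = set(X)
    lower, upper = set(), set()
    for block in indiscernibility_blocks(table, attributes):
        if block <= X:
            lower |= block
        if block & X:
            upper |= block
    return lower, upper

# The flu data of Table 1
table = {
    'p1': {'Headache': 'no',  'Muscle-pain': 'yes', 'Temperature': 'high',      'Flu': 'yes'},
    'p2': {'Headache': 'yes', 'Muscle-pain': 'no',  'Temperature': 'high',      'Flu': 'yes'},
    'p3': {'Headache': 'yes', 'Muscle-pain': 'yes', 'Temperature': 'very high', 'Flu': 'yes'},
    'p4': {'Headache': 'no',  'Muscle-pain': 'yes', 'Temperature': 'normal',    'Flu': 'no'},
    'p5': {'Headache': 'yes', 'Muscle-pain': 'no',  'Temperature': 'high',      'Flu': 'no'},
    'p6': {'Headache': 'no',  'Muscle-pain': 'yes', 'Temperature': 'very high', 'Flu': 'yes'},
}
flu = {o for o in table if table[o]['Flu'] == 'yes'}
print(lower_upper(table, ['Headache', 'Muscle-pain', 'Temperature'], flu))
# lower = {p1, p3, p6}, upper = {p1, p2, p3, p5, p6}, as discussed in the example
```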


The properties of approximations can now be presented as:

  B_*(X) \subseteq X \subseteq B^*(X),
  B_*(\emptyset) = B^*(\emptyset) = \emptyset, B_*(U) = B^*(U) = U,
  B^*(X \cup Y) = B^*(X) \cup B^*(Y),
  B_*(X \cap Y) = B_*(X) \cap B_*(Y),
  X \subseteq Y implies B_*(X) \subseteq B_*(Y) and B^*(X) \subseteq B^*(Y),   (16)
  B_*(X \cup Y) \supseteq B_*(X) \cup B_*(Y),
  B^*(X \cap Y) \subseteq B^*(X) \cap B^*(Y),
  B_*(-X) = -B^*(X), B^*(-X) = -B_*(X),
  B_*(B_*(X)) = B^*(B_*(X)) = B_*(X),
  B^*(B^*(X)) = B_*(B^*(X)) = B^*(X).

3.4 Decision Tables

An information system in which we distinguish two classes of attributes, called condition and decision (action) attributes, is called a decision table. The condition and decision attributes define partitions of the decision table universe. We aim at approximating the partition defined by the decision attributes by means of the partition defined by the condition attributes. For example, in Table 1 the attributes Headache, Muscle-pain and Temperature can be considered as condition attributes, whereas the attribute Flu can be considered as a decision attribute. A decision table with condition attributes C and decision attributes D will be denoted by S = (U, C, D).

Each row of a decision table determines a decision rule, which specifies the decisions (actions) that should be taken when the conditions pointed out by the condition attributes are satisfied. For example, in Table 1 the condition (Headache, no), (Muscle-pain, yes), (Temperature, high) determines uniquely the decision (Flu, yes). Objects in a decision table are used as labels of decision rules. Decision rules 2) and 5) in Table 1 have the same conditions but different decisions. Such rules are called inconsistent (nondeterministic, conflicting); otherwise the rules are referred to as consistent (certain, deterministic, nonconflicting). Sometimes consistent decision rules are called sure rules, and inconsistent rules are called possible rules. Decision tables containing inconsistent decision rules are called inconsistent (nondeterministic, conflicting); otherwise the table is consistent (deterministic, non-conflicting).

The ratio of the number of consistent rules to all rules in a decision table can be used as a consistency factor of the decision table, and will be denoted by γ(C, D), where C and D are the condition and decision attributes respectively. Thus if γ(C, D) = 1 the decision table is consistent, and if γ(C, D) ≠ 1 the decision table is inconsistent. For example, for Table 1, we have γ(C, D) = 4/6.


Decision rules are often presented in a form called if... then... rules. For example, rule 1) in Table 1 can be presented as follows: if (Headache, no) and (Muscle-pain, yes) and (Temperature, high) then (Flu, yes).

A set of decision rules is called a decision algorithm. Thus with each decision table we can associate a decision algorithm consisting of all decision rules occurring in the decision table. We must, however, make a distinction between decision tables and decision algorithms. A decision table is a collection of data, whereas a decision algorithm is a collection of rules, e.g., logical expressions. To deal with data we use various mathematical methods, e.g., statistics, but to analyze rules we must employ logical tools. Thus these two approaches are not equivalent; however, for simplicity we will often present decision rules here in the form of implications, without referring more deeply to their logical nature, as is often practiced in AI.

3.5 Dependency of Attributes

Another important issue in data analysis is discovering dependencies between attributes. Intuitively, a set of attributes D depends totally on a set of attributes C, denoted C ⇒ D, if all values of attributes from D are uniquely determined by values of attributes from C. In other words, D depends totally on C, if there exists a functional dependency between values of D and C. For example, in Table 1 there are no total dependencies whatsoever. If in Table 1, the value of the attribute Temperature for patient p5 were “no” instead of “high”, there would be a total dependency {T emperature} ⇒ {F lu}, because to each value of the attribute Temperature there would correspond unique value of the attribute Flu. We would need also a more general concept of dependency of attributes, called a partial dependency of attributes. Let us depict the idea by example, referring to Table 1. In this table, for example, the attribute Temperature determines uniquely only some values of the attribute Flu. That is, (Temperature, very high) implies (Flu, yes), similarly (Temperature, normal) implies (Flu, no), but (Temperature, high) does not imply always (Flu, yes). Thus the partial dependency means that only some values of D are determined by values of C. Formally dependency can be deﬁned in the following way. Let D and C be subsets of A. We will say that D depends on C in a degree k (0 ≤ k ≤ 1), denoted C ⇒k D, if k = γ(C, D). If k = 1 we say that D depends totally on C, and if k < 1, we say that D depends partially (in a degree k) on C. The coeﬃcient k expresses the ratio of all elements of the universe, which can be properly classiﬁed to blocks of the partition U/D, employing attributes C. Thus the concept of dependency of attributes is strictly connected with that of consistency of the decision table.


For example, for dependency {Headache, Muscle-pain, Temperature} ⇒ {Flu} we get k = 4/6 = 2/3, because four out of six patients can be uniquely classiﬁed as having ﬂu or not, employing attributes Headache, Muscle-pain and Temperature. If we were interested in how exactly patients can be diagnosed using only the attribute Temperature, that is – in the degree of the dependence {Temperature} ⇒ {Flu}, we would get k = 3/6 = 1/2, since in this case only three patients p3, p4 and p6 out of six can be uniquely classiﬁed as having ﬂu. In contrast to the previous case patient p4 cannot be classiﬁed now as having ﬂu or not. Hence the single attribute Temperature oﬀers worse classiﬁcation than the whole set of attributes Headache, Muscle-pain and Temperature. It is interesting to observe that neither Headache nor Muscle-pain can be used to recognize ﬂu, because for both dependencies {Headache} ⇒ {Flu} and {Muscle-pain} ⇒ {Flu} we have k = 0. It can be easily seen that if D depends totally on C then I(C) ⊆ I(D). That means that the partition generated by C is ﬁner than the partition generated by D. Observe, that the concept of dependency discussed above corresponds to that considered in relational databases. If D depends in degree k, 0 ≤ k ≤ 1, on C, then γ(C, D) =

γ(C, D) = card(POS_C(D)) / card(U),   (17)

where

POS_C(D) = ∪_{X ∈ U/I(D)} C∗(X).   (18)

The expression POS_C(D), called the positive region of the partition U/D with respect to C, is the set of all elements of U that can be uniquely classified to blocks of the partition U/D by means of C. Summing up: D is totally (partially) dependent on C if all (some) elements of the universe U can be uniquely classified to blocks of the partition U/D employing C.
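The degree of dependency can be computed directly from a data table. Below is a minimal sketch, assuming the flu decision table (Table 1) reconstructed here from the values visible in Tables 2 and 3 of this paper; the names flu_table, partition and gamma are illustrative, not part of the original text.

```python
from itertools import groupby

# Flu decision table (reconstructed from Tables 2 and 3 of this paper);
# each row: patient -> (Headache, Muscle-pain, Temperature, Flu)
flu_table = {
    "p1": ("no",  "yes", "high",      "yes"),
    "p2": ("yes", "no",  "high",      "yes"),
    "p3": ("yes", "yes", "very high", "yes"),
    "p4": ("no",  "yes", "normal",    "no"),
    "p5": ("yes", "no",  "high",      "no"),
    "p6": ("no",  "yes", "very high", "yes"),
}
ATTRS = {"Headache": 0, "Muscle-pain": 1, "Temperature": 2, "Flu": 3}

def partition(table, attrs):
    """Equivalence classes of the indiscernibility relation I(attrs)."""
    key = lambda x: tuple(table[x][ATTRS[a]] for a in attrs)
    objs = sorted(table, key=key)
    return [set(g) for _, g in groupby(objs, key=key)]

def positive_region(table, C, D):
    """POS_C(D): objects uniquely classifiable to blocks of U/D by means of C."""
    d_classes = partition(table, D)
    pos = set()
    for c_class in partition(table, C):
        if any(c_class <= d_class for d_class in d_classes):
            pos |= c_class
    return pos

def gamma(table, C, D):
    """Degree of dependency k = gamma(C, D), formula (17)."""
    return len(positive_region(table, C, D)) / len(table)

print(gamma(flu_table, ["Headache", "Muscle-pain", "Temperature"], ["Flu"]))  # 4/6
print(gamma(flu_table, ["Temperature"], ["Flu"]))                             # 3/6
```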

3.6 Reduction of Attributes

We often face the question of whether we can remove some data from a data table while preserving its basic properties, that is, whether a table contains some superfluous data. For example, it is easily seen that if we drop in Table 1 either the attribute Headache or Muscle-pain, we get a data set which is equivalent to the original one with regard to approximations and dependencies. That is, in this case we get the same accuracy of approximation and degree of dependency as in the original table, but using a smaller set of attributes. In order to express the above idea more precisely we need some auxiliary notions. Let B be a subset of A and let a belong to B.


– We say that a is dispensable in B if I(B) = I(B − {a}); otherwise a is indispensable in B.
– Set B is independent if all its attributes are indispensable.
– Subset B′ of B is a reduct of B if B′ is independent and I(B′) = I(B).

Thus a reduct is a set of attributes that preserves the partition. This means that a reduct is a minimal subset of attributes that enables the same classification of elements of the universe as the whole set of attributes. In other words, attributes that do not belong to a reduct are superfluous with regard to classification of elements of the universe. Reducts have several important properties. In what follows we will present two of them. First, we define the notion of a core of attributes. Let B be a subset of A. The core of B is the set of all indispensable attributes of B. The following is an important property, connecting the notion of the core and reducts:

Core(B) = ∩ Red(B),   (19)

where Red(B) is the set of all reducts of B. Because the core is the intersection of all reducts, it is included in every reduct, i.e., each element of the core belongs to every reduct. Thus, in a sense, the core is the most important subset of attributes, for none of its elements can be removed without affecting the classification power of attributes. To simplify an information table further, we can eliminate some values of attributes from the table in such a way that we are still able to discern objects in the table as in the original one. To this end we can apply a procedure similar to that for eliminating superfluous attributes, which is defined next.

– We will say that the value of attribute a ∈ B is dispensable for x if B(x) = B^a(x), where B^a = B − {a}; otherwise the value of attribute a is indispensable for x.
– If for every attribute a ∈ B the value of a is indispensable for x, then B will be called orthogonal for x.
– Subset B′ ⊆ B is a value reduct of B for x iff B′ is orthogonal for x and B(x) = B′(x).

The set of all indispensable values of attributes in B for x will be called the value core of B for x, and will be denoted CORE_x(B). Also in this case we have

CORE_x(B) = ∩ Red_x(B),   (20)

where Red_x(B) is the family of all reducts of B for x. Suppose we are given a dependency C ⇒ D. It may happen that the set D depends not on the whole set C but on a subset C′ of it, and therefore we might be interested in finding this subset. In order to solve this problem we need the notion of a relative reduct, which is defined and discussed next.
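Before turning to relative reducts, the reducts and core of the attribute set {Headache, Muscle-pain, Temperature} in Table 1 can be checked mechanically. The following is a minimal sketch by exhaustive search over attribute subsets, reusing the flu_table, ATTRS and partition helpers from the sketch in Sect. 3.5 (all names are illustrative):

```python
from itertools import combinations

def indiscernibility(table, attrs):
    """I(attrs) represented as a frozenset of equivalence classes."""
    return frozenset(frozenset(c) for c in partition(table, attrs))

def reducts(table, B):
    """All reducts of B: minimal subsets preserving the partition I(B)."""
    full = indiscernibility(table, B)
    found = []
    for r in range(1, len(B) + 1):
        for subset in combinations(B, r):
            if indiscernibility(table, list(subset)) == full and \
               not any(set(red) <= set(subset) for red in found):
                found.append(subset)
    return found

B = ["Headache", "Muscle-pain", "Temperature"]
reds = reducts(flu_table, B)
print(reds)   # ('Headache', 'Temperature') and ('Muscle-pain', 'Temperature')
print(set(B).intersection(*map(set, reds)))   # core: {'Temperature'}
```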


Let C, D ⊆ A. Obviously, if C′ ⊆ C is a D-reduct of C, then C′ is a minimal subset of C such that

γ(C, D) = γ(C′, D).   (21)

– We will say that attribute a ∈ C is D-dispensable in C if POS_C(D) = POS_(C−{a})(D); otherwise the attribute a is D-indispensable in C.
– If all attributes a ∈ C are D-indispensable in C, then C will be called D-independent.
– Subset C′ ⊆ C is a D-reduct of C iff C′ is D-independent and POS_C(D) = POS_C′(D).

The set of all D-indispensable attributes in C will be called the D-core of C, and will be denoted by CORE_D(C). In this case we also have the property

CORE_D(C) = ∩ Red_D(C),   (22)

where Red_D(C) is the family of all D-reducts of C. If D = C we get the previous definitions. For example, in Table 1 there are two relative reducts with respect to Flu, {Headache, Temperature} and {Muscle-pain, Temperature}, of the set of condition attributes Headache, Muscle-pain, Temperature. That means that either the attribute Headache or Muscle-pain can be eliminated from the table, and consequently instead of Table 1 we can use either Table 2 or Table 3. For Table 1 the relative core of the set {Headache, Muscle-pain, Temperature} with respect to Flu is the attribute Temperature. This confirms our previous considerations showing that Temperature is the only symptom that enables at least a partial diagnosis of patients.

Table 2. Data table obtained from Table 1 by dropping the attribute Muscle-pain

Patient  Headache  Temperature  Flu
p1       no        high         yes
p2       yes       high         yes
p3       yes       very high    yes
p4       no        normal       no
p5       yes       high         no
p6       no        very high    yes

Table 3. Data table obtained from Table 1 by dropping the attribute Headache

Patient  Muscle-pain  Temperature  Flu
p1       yes          high         yes
p2       no           high         yes
p3       yes          very high    yes
p4       yes          normal       no
p5       no           high         no
p6       yes          very high    yes
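The two relative reducts and the relative core above can also be verified mechanically via γ-preservation. A minimal sketch, assuming the flu_table and gamma helpers from the sketch in Sect. 3.5 (names illustrative):

```python
from itertools import combinations

def d_reducts(table, C, D):
    """D-reducts of C: minimal subsets C' of C with gamma(C', D) = gamma(C, D)."""
    target = gamma(table, C, D)
    found = []
    for r in range(1, len(C) + 1):
        for subset in combinations(C, r):
            if gamma(table, list(subset), D) == target and \
               not any(set(red) <= set(subset) for red in found):
                found.append(subset)
    return found

C = ["Headache", "Muscle-pain", "Temperature"]
reds = d_reducts(flu_table, C, ["Flu"])
print(reds)                                  # ('Headache','Temperature'), ('Muscle-pain','Temperature')
print(set(C).intersection(*map(set, reds)))  # D-core: {'Temperature'}
```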


We will also need the concepts of a value reduct and a value core. Suppose we are given a dependency C ⇒ D where C is a relative D-reduct of C. For further investigation of the dependency we might be interested to know exactly how values of attributes from D depend on values of attributes from C. To this end we need a procedure eliminating those values of attributes from C which do not influence the values of attributes from D.

– We say that the value of attribute a ∈ C is D-dispensable for x ∈ U if C(x) ⊆ D(x) implies C^a(x) ⊆ D(x); otherwise the value of attribute a is D-indispensable for x.
– If for every attribute a ∈ C the value of a is D-indispensable for x, then C will be called D-independent (orthogonal) for x.
– Subset C′ ⊆ C is a D-reduct of C for x (a value reduct) iff C′ is D-independent for x and C(x) ⊆ D(x) implies C′(x) ⊆ D(x).

The set of all values of attributes in C that are D-indispensable for x will be called the D-core of C for x (the value core), and will be denoted CORE^x_D(C). We also have the following property:

CORE^x_D(C) = ∩ Red^x_D(C),   (23)

where Red^x_D(C) is the family of all D-reducts of C for x. Using the concept of a value reduct, Tables 2 and 3 can be simplified and we obtain Tables 4 and 5, respectively. For Table 4 we get its representation by means of the rules:

if (Headache, no) and (Temperature, high) then (Flu, yes),
if (Headache, yes) and (Temperature, high) then (Flu, yes),
if (Temperature, very high) then (Flu, yes),
if (Temperature, normal) then (Flu, no),
if (Headache, yes) and (Temperature, high) then (Flu, no),
if (Temperature, very high) then (Flu, yes).

For Table 5 we have:

if (Muscle-pain, yes) and (Temperature, high) then (Flu, yes),
if (Muscle-pain, no) and (Temperature, high) then (Flu, yes),
if (Temperature, very high) then (Flu, yes),
if (Temperature, normal) then (Flu, no),
if (Muscle-pain, no) and (Temperature, high) then (Flu, no),
if (Temperature, very high) then (Flu, yes).


Table 4. Simplified Table 2

Patient  Headache  Temperature  Flu
p1       no        high         yes
p2       yes       high         yes
p3       –         very high    yes
p4       –         normal       no
p5       yes       high         no
p6       –         very high    yes

Table 5. Simplified Table 3

Patient  Muscle-pain  Temperature  Flu
p1       yes          high         yes
p2       no           high         yes
p3       –            very high    yes
p4       –            normal       no
p5       no           high         no
p6       –            very high    yes

The following important property, connecting reducts and dependency, holds:

a) B′ ⇒ B − B′, where B′ is a reduct of B.

Besides, we have:

b) If B ⇒ C, then B ⇒ C′, for every C′ ⊆ C; in particular
c) If B ⇒ C, then B ⇒ {a}, for every a ∈ C.

Moreover, we have:

d) If B′ is a reduct of B, then neither {a} ⇒ {b} nor {b} ⇒ {a} holds for any a, b ∈ B′, i.e., all attributes in a reduct are pairwise independent.

3.7 Indiscernibility Matrices and Functions

To compute reducts and the core easily, we will use the discernibility matrix [66], which is defined next. By a discernibility matrix of B ⊆ A, denoted M(B), we will mean an n × n matrix with entries defined by

c_ij = {a ∈ B : a(x_i) ≠ a(x_j)} for i, j = 1, 2, . . . , n.   (24)

Thus entry c_ij is the set of all attributes which discern objects x_i and x_j.


The discernibility matrix M(B) assigns to each pair of objects x and y a subset of attributes δ(x, y) ⊆ B with the following properties:

δ(x, x) = ∅,
δ(x, y) = δ(y, x),   (25)
δ(x, z) ⊆ δ(x, y) ∪ δ(y, z).

These properties resemble the properties of a semi-distance, and therefore the function δ may be regarded as a qualitative semi-metric and δ(x, y) as a qualitative semi-distance. Thus the discernibility matrix can be seen as a (qualitative) semi-distance matrix. Let us also note that for every x, y, z ∈ U we have

card(δ(x, x)) = 0,
card(δ(x, y)) = card(δ(y, x)),   (26)
card(δ(x, z)) ≤ card(δ(x, y)) + card(δ(y, z)).

It is easily seen that the core is the set of all single-element entries of the discernibility matrix M(B), i.e.,

CORE(B) = {a ∈ B : c_ij = {a}, for some i, j}.   (27)

Obviously B′ ⊆ B is a reduct of B if B′ is a minimal (with respect to inclusion) subset of B such that

B′ ∩ c ≠ ∅ for any nonempty entry c (c ≠ ∅) in M(B).   (28)

In other words, a reduct is a minimal subset of attributes that discerns all objects discernible by the whole set of attributes. Every discernibility matrix M(B) defines uniquely a discernibility (Boolean) function f(B), defined as follows. Let us assign to each attribute a ∈ B a binary Boolean variable a, and let Σδ(x, y) denote the Boolean sum of all Boolean variables assigned to the set of attributes δ(x, y). Then the discernibility function can be defined by the formula

f(B) = ∏ {Σδ(x, y) : (x, y) ∈ U² and δ(x, y) ≠ ∅}.   (29)

The following property establishes the relationship between the disjunctive normal form of the function f(B) and the set of all reducts of B: all constituents in the minimal disjunctive normal form of the function f(B) are all reducts of B. In order to compute the value core and value reducts for x we can also use the discernibility matrix as defined before and the discernibility function, which must be slightly modified:

f^x(B) = ∏ {Σδ(x, y) : y ∈ U and δ(x, y) ≠ ∅}.   (30)


Relative reducts and the core can also be computed using the discernibility matrix, which needs a slight modification:

c_ij = {a ∈ C : a(x_i) ≠ a(x_j) and w(x_i, x_j)},   (31)

where w(x_i, x_j) ≡ x_i ∈ POS_C(D) and x_j ∉ POS_C(D), or x_i ∉ POS_C(D) and x_j ∈ POS_C(D), or x_i, x_j ∈ POS_C(D) and (x_i, x_j) ∉ I(D), for i, j = 1, 2, . . . , n. If the partition defined by D is definable by C, then the condition w(x_i, x_j) in the above definition can be reduced to (x_i, x_j) ∉ I(D). Thus entry c_ij is the set of all attributes which discern objects x_i and x_j that do not belong to the same equivalence class of the relation I(D). The remaining definitions need only small changes. The D-core is the set of all single-element entries of the discernibility matrix M_D(C), i.e.,

CORE_D(C) = {a ∈ C : c_ij = {a}, for some i, j}.   (32)

Set C′ ⊆ C is a D-reduct of C if C′ is a minimal (with respect to inclusion) subset of C such that

C′ ∩ c ≠ ∅ for any nonempty entry c (c ≠ ∅) in M_D(C).   (33)

Thus a D-reduct is a minimal subset of attributes that discerns all equivalence classes of the relation I(D). Every discernibility matrix M_D(C) defines uniquely a discernibility (Boolean) function f_D(C), which is defined as before. We also have the following property: all constituents in the disjunctive normal form of the function f_D(C) are all D-reducts of C. For computing value reducts and the value core for relative reducts we use as a starting point the discernibility matrix M_D(C), and the discernibility function will have the form

f^x_D(C) = ∏ {Σδ(x, y) : y ∈ U and δ(x, y) ≠ ∅}.   (34)

Let us illustrate the above considerations by computing relative reducts for the set of attributes {Headache, Muscle-pain, Temperature} with respect to Flu. The corresponding discernibility matrix is shown in Table 6. In Table 6, H, M, T denote Headache, Muscle-pain and Temperature, respectively. The discernibility function for this table is T(H + M)(H + M + T)(M + T),

Table 6. Discernibility matrix

     1       2         3      4     5
1
2
3
4    T       H, M, T
5    H, M              M, T
6                             T     H, M, T

where + denotes the Boolean sum and the Boolean multiplication is omitted in the formula. After simplifying the discernibility function using the laws of Boolean algebra we obtain the expression TH + TM, which says that there are two reducts, TH and TM, in the data table and that T is the core.
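This simplification can be reproduced mechanically. The following is a minimal sketch, assuming the flu_table and ATTRS structures from the sketch in Sect. 3.5; for simplicity it collects discernibility entries for pairs of objects with different Flu values and finds the prime implicants of the resulting Boolean product as minimal hitting sets (all names are illustrative, and the result agrees with TH + TM above).

```python
from itertools import combinations

COND = ["Headache", "Muscle-pain", "Temperature"]

def discernibility_entries(table, cond, dec):
    """Non-empty entries delta(x, y) for pairs of objects with different decisions."""
    entries = []
    for x, y in combinations(sorted(table), 2):
        if table[x][ATTRS[dec]] != table[y][ATTRS[dec]]:
            diff = {a for a in cond if table[x][ATTRS[a]] != table[y][ATTRS[a]]}
            if diff:
                entries.append(diff)
    return entries

def reducts_from_entries(entries, cond):
    """Minimal attribute sets hitting every entry, i.e. the prime implicants of
    the discernibility function (a Boolean product of Boolean sums)."""
    found = []
    for r in range(1, len(cond) + 1):
        for subset in combinations(cond, r):
            s = set(subset)
            if all(s & e for e in entries) and not any(set(f) <= s for f in found):
                found.append(subset)
    return found

entries = discernibility_entries(flu_table, COND, "Flu")
print(reducts_from_entries(entries, COND))
# ('Headache', 'Temperature') and ('Muscle-pain', 'Temperature'), i.e. TH + TM
```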

3.8 Significance of Attributes and Approximate Reducts

As follows from the considerations concerning reduction of attributes, attributes cannot be equally important, and some of them can be eliminated from an information table without losing the information contained in the table. The idea of attribute reduction can be generalized by introducing a concept of significance of attributes, which enables evaluation of attributes not only on a two-valued scale, dispensable versus indispensable, but by assigning to an attribute a real number from the closed interval [0,1], expressing how important an attribute is in an information table. The significance of an attribute can be evaluated by measuring the effect of removing the attribute from an information table on the classification defined by the table. Let us first start our considerations with decision tables. Let C and D be sets of condition and decision attributes, respectively, and let a be a condition attribute, i.e., a ∈ C. As shown previously, the number γ(C, D) expresses the degree of consistency of the decision table, or the degree of dependency between attributes C and D, or the accuracy of approximation of U/D by C. We can ask how the coefficient γ(C, D) changes when removing the attribute a, i.e., what is the difference between γ(C, D) and γ(C − {a}, D). We can normalize the difference and define the significance of the attribute a as

σ_(C,D)(a) = (γ(C, D) − γ(C − {a}, D)) / γ(C, D) = 1 − γ(C − {a}, D) / γ(C, D),   (35)

denoted simply by σ(a) when C and D are understood. Obviously 0 ≤ σ(a) ≤ 1. The more important the attribute a is, the greater the number σ(a). For example, for the condition attributes in Table 1 we have the following results:


σ(Headache) = 0, σ(Muscle-pain) = 0, σ(Temperature) = 0.75. Because the significance of the attribute Headache or Muscle-pain is zero, removing either of these attributes from the condition attributes does not affect the set of consistent decision rules whatsoever. Hence the attribute Temperature is the most significant one in the table. That means that by removing the attribute Temperature, 75% (three out of four) of the consistent decision rules will disappear from the table; thus lack of the attribute essentially affects the "decisive power" of the decision table. For a reduct of condition attributes, e.g., {Headache, Temperature}, we get σ(Headache) = 0.25, σ(Temperature) = 1.00. In this case, removing the attribute Headache from the reduct, i.e., using only the attribute Temperature, 25% (one out of four) of the consistent decision rules will be lost, and dropping the attribute Temperature, i.e., using only the attribute Headache, 100% (all) of the consistent decision rules will be lost. That means that in the latter case making decisions is impossible at all, whereas by employing only the attribute Temperature some decisions can be made. Thus the coefficient σ(a) can be understood as an error which occurs when attribute a is dropped. The significance coefficient can be extended to sets of attributes as follows:

ε_(C,D)(B) = (γ(C, D) − γ(C − B, D)) / γ(C, D) = 1 − γ(C − B, D) / γ(C, D),   (36)

denoted by ε(B) if C and D are understood, where B is a subset of C. If B is a reduct of C, then ε(B) = 1, i.e., removing any reduct from the set of condition attributes makes it impossible to make certain decisions whatsoever. Any subset B of C will be called an approximate reduct of C, and the number

ε_(C,D)(B) = (γ(C, D) − γ(B, D)) / γ(C, D) = 1 − γ(B, D) / γ(C, D),   (37)

denoted simply as ε(B), will be called an error of reduct approximation. It expresses how exactly the set of attributes B approximates the set of condition attributes C. Obviously ε(B) = 1 − σ(B) and ε(B) = 1 − ε(C − B). For any subset B of C we have ε(B) ≤ ε(C). If B is a reduct of C, then ε(B) = 0. For example, either of the attributes Headache and Temperature can be considered as an approximate reduct of {Headache, Temperature}, and ε(Headache) = 1, ε(Temperature) = 0.25.


But for the whole set of condition attributes {Headache, Muscle-pain, Temperature} we also have the following approximate reduct: ε({Headache, Muscle-pain}) = 0.75. The concept of an approximate reduct is a generalization of the concept of a reduct considered previously. A minimal subset B of condition attributes C such that γ(C, D) = γ(B, D), or equivalently ε_(C,D)(B) = 0, is a reduct in the previous sense. The idea of an approximate reduct can be useful in cases when a smaller number of condition attributes is preferred over accuracy of classification.
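These significance and approximation-error values can be recomputed with a few lines of code. A minimal sketch, assuming the flu_table and gamma helpers from the sketch in Sect. 3.5 (names illustrative):

```python
C, D = ["Headache", "Muscle-pain", "Temperature"], ["Flu"]

def significance(table, C, D, a):
    """sigma(a) = 1 - gamma(C - {a}, D) / gamma(C, D), formula (35)."""
    rest = [b for b in C if b != a]
    return 1 - gamma(table, rest, D) / gamma(table, C, D)

def approx_error(table, C, D, B):
    """epsilon(B) = 1 - gamma(B, D) / gamma(C, D), formula (37)."""
    return 1 - gamma(table, B, D) / gamma(table, C, D)

for a in C:
    print(a, significance(flu_table, C, D, a))
# Headache 0.0, Muscle-pain 0.0, Temperature ~0.75

print(approx_error(flu_table, C, D, ["Headache", "Muscle-pain"]))  # ~0.75
print(approx_error(flu_table, C, D, ["Temperature"]))              # ~0.25
```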

4 Rough Sets and Bayes' Theorem

4.1 Introduction

Bayes' theorem is the essence of statistical inference. "The result of the Bayesian data analysis process is the posterior distribution that represents a revision of the prior distribution in the light of the evidence provided by the data" [67]. "Opinion as to the value of Bayes' theorem as a basis for statistical inference has swung between acceptance and rejection since its publication in 1763" [68]. Rough set theory offers new insight into Bayes' theorem [69–71]. The view of Bayes' theorem presented here is completely different from that studied so far using the rough set approach (see, e.g., [72–85]) and in the Bayesian data analysis philosophy (see, e.g., [67, 86, 68, 87]). It does not refer to prior or posterior probabilities, inherently associated with Bayesian reasoning, but it reveals some probabilistic structure of the data being analyzed. It states that any data set (decision table) satisfies the total probability theorem and Bayes' theorem. This property can be used directly to draw conclusions from data without referring to prior knowledge and its revision if new evidence is available. Thus in the presented approach the only source of knowledge is the data, and there is no need to assume that there is any prior knowledge besides the data. We simply look at what the data are telling us. Consequently we do not refer to any prior knowledge which is updated after receiving some data. Moreover, the presented approach to Bayes' theorem shows a close relationship between the logic of implications and probability, which was first studied by Jan Łukasiewicz [88] (see also [89]). Bayes' theorem in this context can be used to "invert" implications, i.e., to give reasons for decisions. This is a feature of utmost importance for data mining and decision analysis, for it extends the class of problems which can be considered in these domains. Besides, we propose a new form of Bayes' theorem in which the basic role is played by the strength of decision rules (implications) derived from the data. The strength of decision rules is computed from the data or can also be a subjective assessment. This formulation gives a new look at the Bayesian method of inference and also essentially simplifies computations.

4.2 Bayes' Theorem

"In its simplest form, if H denotes an hypothesis and D denotes data, the theorem says that

P(H | D) = P(D | H) × P(H) / P(D).   (38)

With P(H) regarded as a probabilistic statement of belief about H before obtaining data D, the left-hand side P(H | D) becomes a probabilistic statement of belief about H after obtaining D. Having specified P(D | H) and P(D), the mechanism of the theorem provides a solution to the problem of how to learn from data. In this expression, P(H), which tells us what is known about H without knowing the data, is called the prior distribution of H, or the distribution of H a priori. Correspondingly, P(H | D), which tells us what is known about H given knowledge of the data, is called the posterior distribution of H given D, or the distribution of H a posteriori" [87]. "A prior distribution, which is supposed to represent what is known about unknown parameters before the data is available, plays an important role in Bayesian analysis. Such a distribution can be used to represent prior knowledge or relative ignorance" [68].

4.3 Decision Tables and Bayes' Theorem

In this section we will show that decision tables satisfy Bayes' theorem, but the meaning of this theorem differs essentially from the classical Bayesian methodology. Every decision table describes decisions (actions, results, etc.) determined when some conditions are satisfied. In other words, each row of the decision table specifies a decision rule which determines decisions in terms of conditions. In what follows we will describe decision rules more exactly. Let S = (U, C, D) be a decision table. Every x ∈ U determines a sequence c_1(x), . . . , c_n(x), d_1(x), . . . , d_m(x), where {c_1, . . . , c_n} = C and {d_1, . . . , d_m} = D. The sequence will be called a decision rule induced by x (in S) and denoted by c_1(x), . . . , c_n(x) → d_1(x), . . . , d_m(x), or in short C →_x D. The number supp_x(C, D) = card(C(x) ∩ D(x)) will be called the support of the decision rule C →_x D, and the number

σ_x(C, D) = supp_x(C, D) / card(U),   (39)

will be referred to as the strength of the decision rule C →_x D. With every decision rule C →_x D we associate a certainty factor of the decision rule, denoted cer_x(C, D) and defined as follows:

cer_x(C, D) = card(C(x) ∩ D(x)) / card(C(x)) = supp_x(C, D) / card(C(x)) = σ_x(C, D) / π(C(x)),   (40)

where π(C(x)) = card(C(x)) / card(U).


The certainty factor may be interpreted as the conditional probability that y belongs to D(x) given that y belongs to C(x), symbolically π_x(D | C). If cer_x(C, D) = 1, then C →_x D will be called a certain decision rule in S; if 0 < cer_x(C, D) < 1 the decision rule will be referred to as an uncertain decision rule in S. Besides, we will also use a coverage factor of the decision rule, denoted cov_x(C, D) and defined as

cov_x(C, D) = card(C(x) ∩ D(x)) / card(D(x)) = supp_x(C, D) / card(D(x)) = σ_x(C, D) / π(D(x)),   (41)

where π(D(x)) = card(D(x)) / card(U). Similarly,

cov_x(C, D) = π_x(C | D).   (42)

The certainty and coverage coefficients have been widely used for years by the data mining and rough set communities. However, Łukasiewicz [88] (see also [89]) was the first to use this idea to estimate the probability of implications. If C →_x D is a decision rule then D →_x C will be called an inverse decision rule. The inverse decision rules can be used to give explanations (reasons) for a decision. Let us observe that

cer_x(C, D) = π^C_{D(x)}(x) and cov_x(C, D) = π^D_{C(x)}(x).   (43)

That means that the certainty factor expresses the degree of membership of x in the decision class D(x), given C, whereas the coverage factor expresses the degree of membership of x in the condition class C(x), given D. Decision tables have important probabilistic properties, which are discussed next. Let C →_x D be a decision rule in S and let Γ = C(x) and Δ = D(x). Then the following properties are valid:

Σ_{y∈Γ} cer_y(C, D) = 1,   (44)

Σ_{y∈Δ} cov_y(C, D) = 1,   (45)

π(D(x)) = Σ_{y∈Δ} cer_y(C, D) · π(C(y)) = Σ_{y∈Δ} σ_y(C, D),   (46)

π(C(x)) = Σ_{y∈Γ} cov_y(C, D) · π(D(y)) = Σ_{y∈Γ} σ_y(C, D),   (47)

cer_x(C, D) = cov_x(C, D) · π(D(x)) / Σ_{y∈Γ} cov_y(C, D) · π(D(y)) = σ_x(C, D) / Σ_{y∈Γ} σ_y(C, D) = σ_x(C, D) / π(C(x)),   (48)


cov_x(C, D) = cer_x(C, D) · π(C(x)) / Σ_{y∈Δ} cer_y(C, D) · π(C(y)) = σ_x(C, D) / Σ_{y∈Δ} σ_y(C, D) = σ_x(C, D) / π(D(x)).   (49)

That is, any decision table satisfies (44)–(49). Observe that (46) and (47) refer to the well-known total probability theorem, whereas (48) and (49) refer to Bayes' theorem. Thus in order to compute the certainty and coverage factors of decision rules according to formulas (48) and (49) it is enough to know only the strength (support) of all decision rules. The strength of decision rules can be computed from data or can be a subjective assessment.
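The observation that strengths alone determine all certainty and coverage factors is easy to check in code. A minimal, self-contained sketch (the rule set and all names are hypothetical, chosen only so the strengths sum to 1): by (46)–(49), certainty divides each strength by the total strength of rules with the same condition class, and coverage divides it by the total strength of rules with the same decision class.

```python
from collections import defaultdict

# Hypothetical strengths sigma(condition_class, decision_class).
strength = {
    ("c1", "d1"): 0.40, ("c1", "d2"): 0.10,
    ("c2", "d1"): 0.05, ("c2", "d2"): 0.25,
    ("c3", "d2"): 0.20,
}

pi_C = defaultdict(float)  # pi(C(x)): sum of strengths over decisions, cf. (47)
pi_D = defaultdict(float)  # pi(D(x)): sum of strengths over conditions, cf. (46)
for (c, d), s in strength.items():
    pi_C[c] += s
    pi_D[d] += s

certainty = {(c, d): s / pi_C[c] for (c, d), s in strength.items()}  # formula (48)
coverage  = {(c, d): s / pi_D[d] for (c, d), s in strength.items()}  # formula (49)

print(certainty[("c1", "d1")])  # 0.8
print(coverage[("c1", "d1")])   # 0.888...
```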

4.4 Decision Language and Decision Algorithms

It is often useful to describe decision tables in logical terms. To this end we define a formal language called a decision language. Let S = (U, A) be an information system. With every B ⊆ A we associate a formal language, i.e., a set of formulas For(B). Formulas of For(B) are built up from attribute-value pairs (a, v), where a ∈ B and v ∈ V_a, by means of the logical connectives ∧ (and), ∨ (or), ∼ (not) in the standard way. For any Φ ∈ For(B), by ||Φ||_S we denote the set of all objects x ∈ U satisfying Φ in S and refer to it as the meaning of Φ in S. The meaning ||Φ||_S of Φ in S is defined inductively as follows: ||(a, v)||_S = {x ∈ U : a(x) = v} for all a ∈ B and v ∈ V_a, ||Φ ∨ Ψ||_S = ||Φ||_S ∪ ||Ψ||_S, ||Φ ∧ Ψ||_S = ||Φ||_S ∩ ||Ψ||_S, ||∼ Φ||_S = U − ||Φ||_S. If S = (U, C, D) is a decision table then with every row of the decision table we associate a decision rule, which is defined next. A decision rule in S is an expression Φ →_S Ψ, or simply Φ → Ψ if S is understood, read "if Φ then Ψ", where Φ ∈ For(C), Ψ ∈ For(D) and C, D are condition and decision attributes, respectively; Φ and Ψ are referred to as the condition part and the decision part of the rule, respectively. The number supp_S(Φ, Ψ) = card(||Φ ∧ Ψ||_S) will be called the support of the rule Φ → Ψ in S. We consider a probability distribution p_U(x) = 1/card(U) for x ∈ U, where U is the (non-empty) universe of objects of S; we have p_U(X) = card(X)/card(U) for X ⊆ U. With any formula Φ we associate its probability in S defined by

π_S(Φ) = p_U(||Φ||_S).   (50)

With every decision rule Φ → Ψ we associate a conditional probability

π_S(Ψ | Φ) = p_U(||Ψ||_S | ||Φ||_S),   (51)

called the certainty factor of the decision rule, denoted cer_S(Φ, Ψ). We have

cer_S(Φ, Ψ) = π_S(Ψ | Φ) = card(||Φ ∧ Ψ||_S) / card(||Φ||_S),   (52)

where ||Φ||_S ≠ ∅.


If π_S(Ψ | Φ) = 1, then Φ → Ψ will be called a certain decision rule; if 0 < π_S(Ψ | Φ) < 1 the decision rule will be referred to as an uncertain decision rule. There is an interesting relationship between decision rules and approximations: certain decision rules correspond to the lower approximation, whereas uncertain decision rules correspond to the boundary region. Besides, we will also use a coverage factor of the decision rule, denoted cov_S(Φ, Ψ) and defined by

cov_S(Φ, Ψ) = π_S(Φ | Ψ) = p_U(||Φ||_S | ||Ψ||_S).   (53)

Obviously we have

cov_S(Φ, Ψ) = π_S(Φ | Ψ) = card(||Φ ∧ Ψ||_S) / card(||Ψ||_S).   (54)

There are three possible interpretations of the certainty and coverage factors: statistical (frequency), logical (degree of truth) and mereological (degree of inclusion). We will mainly use the statistical interpretation here, i.e., the certainty factor will be interpreted as the frequency of objects having the property Ψ in the set of objects having the property Φ, and the coverage factor as the frequency of objects having the property Φ in the set of objects having the property Ψ. Let us observe that the factors are not assumed arbitrarily but are computed from the data. The number

σ_S(Φ, Ψ) = supp_S(Φ, Ψ) / card(U) = π_S(Ψ | Φ) · π_S(Φ),   (55)

will be called the strength of the decision rule Φ → Ψ in S. We will also need the notion of equivalence of formulas. Let Φ, Ψ be formulas in For(A), where A is the set of attributes in S = (U, A). We say that Φ and Ψ are equivalent in S (or simply equivalent, if S is understood), in symbols Φ ≡ Ψ, if and only if Φ → Ψ and Ψ → Φ. This means that Φ ≡ Ψ if and only if ||Φ||_S = ||Ψ||_S. We also need approximate equivalence of formulas, which is defined as follows:

Φ ≡_k Ψ if and only if cer(Φ, Ψ) = cov(Φ, Ψ) = k.   (56)

Besides, we also define approximate equivalence of formulas with accuracy ε (0 ≤ ε ≤ 1), which is defined as follows:

Φ ≡_{k,ε} Ψ if and only if k = min{cer(Φ, Ψ), cov(Φ, Ψ)}   (57)

and |cer(Φ, Ψ) − cov(Φ, Ψ)| ≤ ε. Now we define the notion of a decision algorithm, which is a logical counterpart of a decision table. Let Dec(S) = {Φ_i → Ψ_i}_{i=1}^m, m ≥ 2, be a set of decision rules in a decision table S = (U, C, D).


1) If for every Φ → Ψ, Φ′ → Ψ′ ∈ Dec(S) we have Φ = Φ′ or ||Φ ∧ Φ′||_S = ∅, and Ψ = Ψ′ or ||Ψ ∧ Ψ′||_S = ∅, then we will say that Dec(S) is a set of pairwise mutually exclusive (independent) decision rules in S.
2) If ∪_{i=1}^{m} ||Φ_i||_S = U and ∪_{i=1}^{m} ||Ψ_i||_S = U, we will say that the set of decision rules Dec(S) covers U.
3) If Φ → Ψ ∈ Dec(S) and supp_S(Φ, Ψ) ≠ 0, we will say that the decision rule Φ → Ψ is admissible in S.
4) If ∪_{X∈U/D} C∗(X) = ∪_{Φ→Ψ ∈ Dec+(S)} ||Φ||_S, where Dec+(S) is the set of all certain decision rules from Dec(S), we will say that the set of decision rules Dec(S) preserves the consistency part of the decision table S = (U, C, D).

The set of decision rules Dec(S) that satisfies 1), 2), 3) and 4), i.e., is independent, covers U, preserves the consistency of S, and all of whose decision rules Φ → Ψ ∈ Dec(S) are admissible in S, will be called a decision algorithm in S. Hence, if Dec(S) is a decision algorithm in S then the conditions of rules from Dec(S) define a partition of U in S. Moreover, the positive region of D with respect to C, i.e., the set

∪_{X∈U/D} C∗(X),   (58)

is partitioned by the conditions of some of these rules, which are certain in S. If Φ → Ψ is a decision rule then the decision rule Ψ → Φ will be called an inverse decision rule of Φ → Ψ. Let Dec∗(S) denote the set of all inverse decision rules of Dec(S). It can be shown that Dec∗(S) satisfies 1), 2), 3) and 4), i.e., it is a decision algorithm in S. If Dec(S) is a decision algorithm then Dec∗(S) will be called an inverse decision algorithm of Dec(S). The inverse decision algorithm gives reasons (explanations) for decisions pointed out by the decision algorithm. A decision algorithm is a description of a decision table in the decision language. Generation of decision algorithms from decision tables is a complex task and we will not discuss this issue here, for it does not lie within the scope of this paper. The interested reader is advised to consult the references (see, e.g., [18, 66, 90–97, 50, 98–104] and the bibliography in these articles).

4.5 An Example

Let us now consider an example of a decision table, shown in Table 7. The attributes Disease, Age and Sex are condition attributes, whereas Test is the decision attribute. We want to explain the test result in terms of the patient's state, i.e., to describe the attribute Test in terms of the attributes Disease, Age and Sex.

Table 7. Exemplary decision table

Fact  Disease  Age     Sex    Test  Support
1     yes      old     man    +     400
2     yes      middle  woman  +     80
3     no       old     man    −     100
4     yes      old     man    −     40
5     no       young   woman  −     220
6     yes      middle  woman  −     60

Table 8. Certainty and coverage factors for the decision table shown in Table 7

Fact  Strength  Certainty  Coverage
1     0.44      0.92       0.83
2     0.09      0.56       0.17
3     0.11      1.00       0.24
4     0.04      0.08       0.10
5     0.24      1.00       0.52
6     0.07      0.44       0.14

The strength, certainty and coverage factors for the decision table are shown in Table 8. Below, a decision algorithm associated with Table 7 is presented.

1) if (Disease, yes) and (Age, old) then (Test, +);
2) if (Disease, yes) and (Age, middle) then (Test, +);
3) if (Disease, no) then (Test, −);
4) if (Disease, yes) and (Age, old) then (Test, −);
5) if (Disease, yes) and (Age, middle) then (Test, −).

The certainty and coverage factors for the above algorithm are given in Table 9.

Table 9. Certainty and coverage factors for the decision algorithm

Rule  Strength  Certainty  Coverage
1     0.44      0.92       0.83
2     0.09      0.56       0.17
3     0.36      1.00       0.76
4     0.04      0.08       0.10
5     0.07      0.44       0.14
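The factors in Tables 8 and 9 follow directly from the support counts in Table 7. A minimal, self-contained sketch (variable names are illustrative):

```python
# Rows of Table 7: (Disease, Age, Test) with their supports; Sex does not vary within rows.
rows = [
    ("yes", "old",    "+", 400),
    ("yes", "middle", "+",  80),
    ("no",  "old",    "-", 100),
    ("yes", "old",    "-",  40),
    ("no",  "young",  "-", 220),
    ("yes", "middle", "-",  60),
]
total = sum(s for *_, s in rows)   # 900 patients in all

for disease, age, test, supp in rows:
    cond = sum(s for d, a, _, s in rows if (d, a) == (disease, age))   # condition class size
    dec  = sum(s for *_, t, s in rows if t == test)                    # decision class size
    print(f"strength={supp/total:.2f}  certainty={supp/cond:.2f}  coverage={supp/dec:.2f}")
# Third row: strength 0.11, certainty 1.00, coverage 0.24 (cf. Table 8)
```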

The certainty factors of the decision rules lead to the following conclusions:

– 92% of ill and old patients have a positive test result,
– 56% of ill and middle-aged patients have a positive test result,
– all healthy patients have a negative test result,
– 8% of ill and old patients have a negative test result,
– 44% of ill and middle-aged patients have a negative test result.


In other words:

– ill and old patients most probably have a positive test result (probability = 0.92),
– ill and middle-aged patients most probably have a positive test result (probability = 0.56),
– healthy patients certainly have a negative test result (probability = 1.00).

Now let us examine the inverse decision algorithm, which is given below:

1′) if (Test, +) then (Disease, yes) and (Age, old);
2′) if (Test, +) then (Disease, yes) and (Age, middle);
3′) if (Test, −) then (Disease, no);
4′) if (Test, −) then (Disease, yes) and (Age, old);
5′) if (Test, −) then (Disease, yes) and (Age, middle).

Employing the inverse decision algorithm and the coverage factor we get the following explanation of the test results:

– the reason for a positive test result is most probably the patient's disease and old age (probability = 0.83),
– the reason for a negative test result is most probably lack of the disease (probability = 0.76).

It follows from Table 7 that there are two interesting approximate equivalences of test results and the disease. According to rule 1), the disease and old age are approximately equivalent to a positive test result (k = 0.83, ε = 0.11), and according to rule 3), lack of the disease is approximately equivalent to a negative test result (k = 0.76, ε = 0.24).

5 Rough Sets and Conflict Analysis

5.1 Introduction

Knowledge discovery in databases, as considered in the previous sections, boiled down to searching for functional dependencies in the data set. In this section we will discuss another kind of relationship in the data: not dependencies, but conflicts. Formally, the conflict relation can be seen as a negation (not necessarily classical) of the indiscernibility relation, which was used as a basis of rough set theory. Thus dependencies and conflicts are closely related from a logical point of view. It turns out that the conflict relation can be used in the study of conflict analysis. Conflict analysis and resolution play an important role in business, governmental, political and legal disputes, labor-management negotiations, military operations and others. To this end many formal mathematical models of conflict situations have been proposed and studied, e.g., [105–110].


Various mathematical tools, e.g., graph theory, topology, differential equations and others, have been used for that purpose. Needless to say, game theory can also be considered as a mathematical model of conflict situations. In fact there is, as yet, no "universal" theory of conflicts, and mathematical models of conflict situations are strongly domain dependent. In this paper we present yet another approach to conflict analysis, based on some ideas of rough set theory, along the lines proposed in [110]. We will illustrate the proposed approach by means of a simple tutorial example of voting analysis in conflict situations. The considered model is simple enough for easy computer implementation and seems adequate for many real-life applications, but more research is needed to this end.

5.2 Basic Concepts of Conflict Theory

In this section we give, after [110], definitions of the basic concepts of the proposed approach. Let us assume that we are given a finite, non-empty set U called the universe. Elements of U will be referred to as agents. Let a function v : U → {−1, 0, 1}, or in short {−, 0, +}, be given, assigning to every agent the number −1, 0 or 1, representing his opinion, view, voting result, etc., about some discussed issue, and meaning against, neutral and favorable, respectively. The pair S = (U, v) will be called a conflict situation. In order to express relations between agents we define three basic binary relations on the universe: conflict, neutrality and alliance. To this end we first define the following auxiliary function:

φ_v(x, y) = 1, if v(x)v(y) = 1 or x = y,
            0, if v(x)v(y) = 0 and x ≠ y,   (59)
           −1, if v(x)v(y) = −1.

This means that if φ_v(x, y) = 1, agents x and y have the same opinion about issue v (are allied on v); if φ_v(x, y) = 0, at least one of the agents x or y has a neutral approach to issue v (is neutral on v); and if φ_v(x, y) = −1, the agents have different opinions about issue v (are in conflict on v). In what follows we define three basic relations R_v^+, R_v^0 and R_v^− on U², called alliance, neutrality and conflict relations, respectively:

R_v^+(x, y) iff φ_v(x, y) = 1,
R_v^0(x, y) iff φ_v(x, y) = 0,   (60)
R_v^−(x, y) iff φ_v(x, y) = −1.

It is easily seen that the alliance relation has the following properties:

R_v^+(x, x),
R_v^+(x, y) implies R_v^+(y, x),   (61)
R_v^+(x, y) and R_v^+(y, z) implies R_v^+(x, z),


i.e., R_v^+ is an equivalence relation. Each equivalence class of the alliance relation will be called a coalition with respect to v. Let us note that the last condition in (61) can be expressed as "a friend of my friend is my friend". For the conflict relation we have the following properties:

not R_v^−(x, x),
R_v^−(x, y) implies R_v^−(y, x),
R_v^−(x, y) and R_v^−(y, z) implies R_v^+(x, z),   (62)
R_v^−(x, y) and R_v^+(y, z) implies R_v^−(x, z).

The last two conditions in (62) refer to the well-known sayings "an enemy of my enemy is my friend" and "a friend of my enemy is my enemy". For the neutrality relation we have:

not R_v^0(x, x),
R_v^0(x, y) = R_v^0(y, x).   (63)

Let us observe that in the conflict and neutrality relations there are no coalitions. The following property holds: R_v^+ ∪ R_v^0 ∪ R_v^− = U², because if (x, y) ∈ U² then φ_v(x, y) = 1 or φ_v(x, y) = 0 or φ_v(x, y) = −1, so (x, y) ∈ R_v^+ or (x, y) ∈ R_v^0 or (x, y) ∈ R_v^−. All three relations R_v^+, R_v^0, R_v^− are pairwise disjoint, i.e., every pair of objects (x, y) belongs to exactly one of the above defined relations (is in conflict, is allied or is neutral). With every conflict situation we will associate a conflict graph

G_S = (R_v^+, R_v^0, R_v^−).   (64)

An example of a conflict graph is shown in Figure 3. Solid lines denote conflicts, dotted lines denote alliances; neutrality, for simplicity, is not shown explicitly in the graph. Of course, B, C, and D form a coalition.

Fig. 3. Exemplary conﬂict graph
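The relations above can be computed directly from the agents' opinions. Below is a minimal, self-contained sketch with a hypothetical conflict situation resembling Figure 3; the agents A–E and their opinions are assumed for illustration only.

```python
from itertools import combinations

# Hypothetical conflict situation: opinion of each agent on one issue (+1, 0, -1).
opinion = {"A": -1, "B": 1, "C": 1, "D": 1, "E": 0}

def phi(x, y):
    """Auxiliary function (59)."""
    if x == y or opinion[x] * opinion[y] == 1:
        return 1
    if opinion[x] * opinion[y] == 0:
        return 0
    return -1

alliance = [(x, y) for x, y in combinations(opinion, 2) if phi(x, y) == 1]
conflict = [(x, y) for x, y in combinations(opinion, 2) if phi(x, y) == -1]

print("allied:  ", alliance)   # (B, C), (B, D), (C, D): the coalition {B, C, D}
print("conflict:", conflict)   # A is in conflict with B, C and D
```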

5.3 An Example

In this section we illustrate the ideas presented above by means of a very simple tutorial example, using the concepts presented in the previous section. Table 10 presents a decision table in which the only condition attribute is Party, whereas the decision attribute is Voting. The table describes voting results in a parliament containing 500 members grouped in four political parties denoted A, B, C and D. Suppose the parliament discussed a certain issue (e.g., membership of the country in the European Union) and the voting result is presented in the column Voting, where +, 0 and − denote yes, abstention and no, respectively. The column Support contains the number of voters for each option.

Table 10. Decision table with one condition attribute Party and the decision Voting

Fact  Party  Voting  Support
1     A      +       200
2     A      0       30
3     A      −       10
4     B      +       15
5     B      −       25
6     C      0       20
7     C      −       40
8     D      +       25
9     D      0       35
10    D      −       100

Table 11. Certainty and coverage factors for Table 10

Fact  Strength  Certainty  Coverage
1     0.40      0.83       0.83
2     0.06      0.13       0.35
3     0.02      0.04       0.06
4     0.03      0.36       0.06
5     0.05      0.63       0.14
6     0.04      0.33       0.23
7     0.08      0.67       0.23
8     0.05      0.16       0.10
9     0.07      0.22       0.41
10    0.20      0.63       0.57

The strength, certainty and coverage factors for Table 10 are given in Table 11. From the certainty factors we can conclude, for example, that:

– 83.3% of party A voted yes,
– 12.5% of party A abstained,
– 4.2% of party A voted no.


From the coverage factors we can get, for example, the following explanation of the voting results:

– 83.3% of yes votes came from party A,
– 6.3% of yes votes came from party B,
– 10.4% of yes votes came from party D.

6 Data Analysis and Flow Graphs

6.1 Introduction

The pursuit of data patterns considered so far referred to data tables. In this section we will consider data represented not in the form of a data table but by means of graphs. We will show that this method of data representation leads to a new look at knowledge discovery, new efficient algorithms, and a wide spectrum of novel applications. The ideas presented here are based on some concepts given by Łukasiewicz [88]. In [88] Łukasiewicz proposed to use logic as a mathematical foundation of probability. He claims that probability is a "purely logical concept" and that his approach frees probability from its obscure philosophical connotation. He recommends replacing the concept of probability by truth values of indefinite propositions, which are in fact propositional functions. Let us explain this idea more closely. Let U be a non-empty finite set, and let Φ(x) be a propositional function. The meaning of Φ(x) in U, denoted by ||Φ(x)||, is the set of all elements of U that satisfy Φ(x) in U. The truth value of Φ(x) is defined as card(||Φ(x)||)/card(U). For example, if U = {1, 2, 3, 4, 5, 6} and Φ(x) is the propositional function x > 4, then the truth value of Φ(x) is 2/6 = 1/3. If the truth value of Φ(x) is 1, then the propositional function is true, and if it is 0, then the function is false. Thus the truth value of any propositional function is a number between 0 and 1. Further, it is shown that the truth values can be treated as probabilities and that all laws of probability can be obtained by means of the logical calculus. In this paper we show that the idea of Łukasiewicz can also be expressed differently. Instead of using truth values in place of probability, as stipulated by Łukasiewicz, we propose in this paper using deterministic flow analysis in flow networks (graphs). In the proposed setting, flow is governed by some probabilistic rules (e.g., Bayes' rule), or by the corresponding logical calculus proposed by Łukasiewicz; however, the formulas have an entirely deterministic meaning and need neither a probabilistic nor a logical interpretation. They simply describe flow distribution in flow graphs. However, the flow graphs introduced here are different from those proposed by Ford and Fulkerson [111] for optimal flow analysis, because they model, e.g., flow distribution in a plumbing network rather than optimal flow. The flow graphs considered in this paper are basically meant not for the analysis of the flow of physical media (e.g., water), but for the examination of information flow in decision algorithms. To this end branches of a flow graph are interpreted as decision


rules. With every decision rule (i.e., branch) three coefficients are associated: the strength, certainty and coverage factors. In the language of classical decision algorithms they have a probabilistic interpretation. Using Łukasiewicz's approach we can understand them as truth values. However, in the proposed setting they can be interpreted simply as flow distribution ratios between branches of the flow graph, without referring to their probabilistic or logical nature. This interpretation, in particular, leads to a new look at Bayes' theorem, which in this setting has an entirely deterministic explanation (see also [86]). The presented idea can be used, among others, as a new tool for data analysis and knowledge representation. We start our considerations by giving fundamental definitions of a flow graph and related notions. Next, basic properties of flow graphs are defined and investigated. Further, the relationship between flow graphs and decision algorithms is discussed. Finally, a simple tutorial example is used to illustrate the considerations.
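Łukasiewicz's notion of the truth value of a propositional function, used throughout this section, amounts to counting satisfying elements. A tiny, self-contained sketch reproducing the example above (names illustrative):

```python
def truth_value(universe, prop):
    """Truth value of a propositional function: card(||prop||) / card(U)."""
    meaning = [x for x in universe if prop(x)]   # the meaning ||prop|| of prop in U
    return len(meaning) / len(universe)

U = {1, 2, 3, 4, 5, 6}
print(truth_value(U, lambda x: x > 4))   # 2/6 = 0.333...
```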

6.2 Flow Graphs

A flow graph is a directed, acyclic, finite graph G = (N, B, φ), where N is a set of nodes, B ⊆ N × N is a set of directed branches, φ : B → R⁺ is a flow function, and R⁺ is the set of non-negative reals. If (x, y) ∈ B then x is an input of y and y is an output of x. If x ∈ N then I(x) is the set of all inputs of x and O(x) is the set of all outputs of x. The input and output of a graph G are defined as I(G) = {x ∈ N : I(x) = ∅} and O(G) = {x ∈ N : O(x) = ∅}. Inputs and outputs of G are external nodes of G; other nodes are internal nodes of G. If (x, y) ∈ B then φ(x, y) is a throughflow from x to y. We will assume in what follows that φ(x, y) ≠ 0 for every (x, y) ∈ B. With every node x of a flow graph G we associate its inflow

φ+(x) = Σ_{y∈I(x)} φ(y, x),   (65)

and outflow

φ−(x) = Σ_{y∈O(x)} φ(x, y).   (66)

Similarly, we define an inflow and an outflow for the whole flow graph G:

φ+(G) = Σ_{x∈I(G)} φ−(x),   (67)

φ−(G) = Σ_{x∈O(G)} φ+(x).   (68)


We assume that for any internal node x, φ+(x) = φ−(x) = φ(x), where φ(x) is the throughflow of node x. Obviously, φ+(G) = φ−(G) = φ(G), where φ(G) is the throughflow of graph G. The above formulas can be considered as flow conservation equations [111]. We will now define a normalized flow graph. A normalized flow graph is a directed, acyclic, finite graph G = (N, B, σ), where N is a set of nodes, B ⊆ N × N is a set of directed branches, and σ : B → ⟨0, 1⟩ is a normalized flow, where

σ(x, y) = φ(x, y) / φ(G)   (69)

is the strength of (x, y). Obviously, 0 ≤ σ(x, y) ≤ 1. The strength of a branch simply expresses the percentage of the total flow that goes through the branch. In what follows we will use normalized flow graphs only; therefore by a flow graph we will understand a normalized flow graph, unless stated otherwise. With every node x of a flow graph G we associate its normalized inflow and outflow defined as

σ+(x) = φ+(x) / φ(G) = Σ_{y∈I(x)} σ(y, x),   (70)

σ−(x) = φ−(x) / φ(G) = Σ_{y∈O(x)} σ(x, y).   (71)

Obviously, for any internal node x we have σ+(x) = σ−(x) = σ(x), where σ(x) is the normalized throughflow of x. Moreover, let

σ+(G) = φ+(G) / φ(G) = Σ_{x∈I(G)} σ−(x),   (72)

σ−(G) = φ−(G) / φ(G) = Σ_{x∈O(G)} σ+(x).   (73)

Obviously, σ+(G) = σ−(G) = σ(G) = 1.

6.3 Certainty and Coverage Factors

With every branch (x, y) of a flow graph G we associate the certainty and the coverage factors. The certainty and the coverage of (x, y) are defined as

cer(x, y) = σ(x, y) / σ(x),   (74)


and

cov(x, y) = σ(x, y) / σ(y),   (75)

respectively, where σ(x) ≠ 0 and σ(y) ≠ 0. Below some properties, which are immediate consequences of the definitions given above, are presented:

Σ_{y∈O(x)} cer(x, y) = 1,   (76)

Σ_{x∈I(y)} cov(x, y) = 1,   (77)

σ(x) = Σ_{y∈O(x)} cer(x, y) σ(x) = Σ_{y∈O(x)} σ(x, y),   (78)

σ(y) = Σ_{x∈I(y)} cov(x, y) σ(y) = Σ_{x∈I(y)} σ(x, y),   (79)

cer(x, y) = cov(x, y) σ(y) / σ(x),   (80)

cov(x, y) = cer(x, y) σ(x) / σ(y).   (81)

Obviously the above properties have a probabilistic flavor, e.g., equations (78) and (79) have the form of the total probability theorem, whereas formulas (80) and (81) are Bayes' rules. However, in our approach these properties are interpreted in a deterministic way and they describe flow distribution among branches in the network. A (directed) path from x to y, x ≠ y, in G is a sequence of nodes x_1, . . . , x_n such that x_1 = x, x_n = y and (x_i, x_{i+1}) ∈ B for every i, 1 ≤ i ≤ n − 1. A path from x to y is denoted by [x . . . y]. The certainty, the coverage and the strength of the path [x_1 . . . x_n] are defined as

cer[x_1 . . . x_n] = ∏_{i=1}^{n−1} cer(x_i, x_{i+1}),   (82)

cov[x_1 . . . x_n] = ∏_{i=1}^{n−1} cov(x_i, x_{i+1}),   (83)

σ[x . . . y] = σ(x) cer[x . . . y] = σ(y) cov[x . . . y],   (84)


respectively. The set of all paths from x to y (x ≠ y) in G, denoted ⟨x, y⟩, will be called a connection from x to y in G. In other words, a connection ⟨x, y⟩ is a sub-graph of G determined by the nodes x and y. For every connection ⟨x, y⟩ we define its certainty, coverage and strength as shown below:

cer⟨x, y⟩ = Σ_{[x...y]∈⟨x,y⟩} cer[x . . . y],   (85)

the coverage of the connection ⟨x, y⟩ is

cov⟨x, y⟩ = Σ_{[x...y]∈⟨x,y⟩} cov[x . . . y],   (86)

and the strength of the connection ⟨x, y⟩ is

σ⟨x, y⟩ = Σ_{[x...y]∈⟨x,y⟩} σ[x . . . y] = σ(x) cer⟨x, y⟩ = σ(y) cov⟨x, y⟩.   (87)

Let [x . . . y] be a path such that x and y are an input and an output of the graph G, respectively. Such a path will be referred to as complete. The set of all complete paths from x to y will be called a complete connection from x to y in G. In what follows we will consider complete paths and connections only, unless stated otherwise. Let x and y be an input and an output of a graph G, respectively. If we substitute for every complete connection ⟨x, y⟩ in G a single branch (x, y) such that σ(x, y) = σ⟨x, y⟩, cer(x, y) = cer⟨x, y⟩, cov(x, y) = cov⟨x, y⟩, then we obtain a new flow graph G′ such that σ(G) = σ(G′). The new flow graph will be called a combined flow graph. The combined flow graph for a given flow graph represents a relationship between its inputs and outputs.
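A minimal, self-contained sketch of these notions: it stores a normalized flow graph as branch strengths, derives node throughflows and branch certainty and coverage, and multiplies factors along a path as in (82)–(84). The three-layer graph used here is hypothetical, chosen only so that the strengths of each layer sum to 1 and the flow conservation equations hold.

```python
from collections import defaultdict

# Hypothetical normalized flow graph: branch -> strength sigma(x, y).
sigma = {
    ("x1", "y1"): 0.30, ("x1", "y2"): 0.10,
    ("x2", "y1"): 0.20, ("x2", "y2"): 0.40,
    ("y1", "z1"): 0.35, ("y1", "z2"): 0.15,
    ("y2", "z1"): 0.10, ("y2", "z2"): 0.40,
}

out_flow, in_flow = defaultdict(float), defaultdict(float)
for (x, y), s in sigma.items():
    out_flow[x] += s          # sigma_-(x), formula (71)
    in_flow[y] += s           # sigma_+(y), formula (70)

def node_strength(x):
    return out_flow[x] or in_flow[x]       # normalized throughflow sigma(x)

def cer(x, y): return sigma[(x, y)] / node_strength(x)   # formula (74)
def cov(x, y): return sigma[(x, y)] / node_strength(y)   # formula (75)

def path_strength(path):
    """sigma[x...y] = sigma(x) * product of certainties along the path, formula (84)."""
    s = node_strength(path[0])
    for a, b in zip(path, path[1:]):
        s *= cer(a, b)
    return s

print(cer("x1", "y1"), cov("x1", "y1"))     # 0.75, 0.6
print(path_strength(["x1", "y1", "z1"]))    # 0.21
```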

6.4 Dependencies in Flow Graphs

Let (x, y) ∈ B. Nodes x and y are independent of each other if

σ(x, y) = σ(x)σ(y).   (88)

Consequently

σ(x, y) / σ(x) = cer(x, y) = σ(y),   (89)

and

σ(x, y) / σ(y) = cov(x, y) = σ(x).   (90)

This idea refers to some concepts proposed by Łukasiewicz [88] in connection with statistical independence of logical formulas.


If

cer(x, y) > σ(y),   (91)

or

cov(x, y) > σ(x),   (92)

then x and y depend positively on each other. Similarly, if

cer(x, y) < σ(y),   (93)

or

cov(x, y) < σ(x),   (94)

then x and y depend negatively on each other. Let us observe that the relations of independence and dependence are symmetric and are analogous to those used in statistics. For every (x, y) ∈ B we define a dependency factor η(x, y) as

η(x, y) = (cer(x, y) − σ(y)) / (cer(x, y) + σ(y)) = (cov(x, y) − σ(x)) / (cov(x, y) + σ(x)).   (95)

It is easy to check that if η(x, y) = 0, then x and y are independent of each other; if −1 < η(x, y) < 0, then x and y are negatively dependent; and if 0 < η(x, y) < 1, then x and y are positively dependent on each other. Thus the dependency factor expresses a degree of dependency, and can be seen as a counterpart of the correlation coefficient used in statistics (see also [112]).
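The dependency factor is a one-line computation on top of the flow graph sketch in Sect. 6.3; the following assumes the sigma dictionary and the cer and node_strength helpers defined there (a hypothetical example, not data from the paper).

```python
def eta(x, y):
    """Dependency factor (95): 0 means independence, the sign gives the direction."""
    return (cer(x, y) - node_strength(y)) / (cer(x, y) + node_strength(y))

print(eta("x1", "y1"))   # > 0: x1 and y1 depend positively on each other
print(eta("x1", "y2"))   # < 0: x1 and y2 depend negatively on each other
```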

6.5 An Example

Now we will illustrate the ideas introduced in the previous sections by means of a simple example concerning the distribution of votes of various age groups and social classes of voters between political parties. Consider three disjoint age groups of voters, y1 (old), y2 (middle aged) and y3 (young), belonging to three social classes, x1 (high), x2 (middle) and x3 (low). The voters voted for four political parties: z1 (Conservatives), z2 (Labor), z3 (Liberal Democrats) and z4 (others). The social class and age group votes distribution is shown in Figure 4. First we want to find the votes distribution with respect to age group. The result is shown in Figure 5. From the flow graph presented in Figure 5 we can see that, e.g., party z1 obtained 19% of the total votes, all of them from age group y1; party z2 obtained 44% of the votes, of which 82% are from age group y2 and 18% from age group y3, etc. If we want to know how votes are distributed between parties with respect to social classes, we have to eliminate age groups from the flow graph. Employing the algorithm presented in Section 6.3 we get the results shown in Figure 6.


Fig. 4. Social class and age group votes distribution

From the flow graph presented in Figure 6 we can see that party z1 obtained 22% of its votes from social class x1 and 78% from social class x2, etc. We can also present the obtained results employing decision rules. For simplicity we present only some decision rules of the decision algorithm. For example, from Figure 5 we obtain the decision rules: If Party (z1) then Age group (y1) (0.19); If Party (z2) then Age group (y2) (0.36); If Party (z2) then Age group (y3) (0.08), etc. The number at the end of each decision rule denotes the strength of the rule. Similarly, from Figure 6 we get: If Party (z1) then Soc. class (x1) (0.04); If Party (z1) then Soc. class (x2) (0.14), etc.

Fig. 5. Votes distribution with respect to the age group


Fig. 6. Votes distribution between parties with respect to the social classes

From Figure 6 we have: If Soc. class (x1) then Party (z1) (0.04); If Soc. class (x1) then Party (z2) (0.02); If Soc. class (x1) then Party (z3) (0.04), etc. Dependencies between social classes and parties are shown in Figure 6.

6.6 An Example

In this section we continue the example from Section 5.3. The flow graph associated with Table 11 is shown in Figure 7. Branches of the flow graph represent decision rules together with their certainty and coverage factors. For example, the decision rule A → 0 has the certainty and coverage factors 0.13 and 0.35, respectively. The flow graph gives a clear insight into the voting structure of all parties. For many applications, exact values of the certainty or coverage factors of decision rules are not necessary. To this end we introduce "approximate" decision rules, read "C mostly implies D": C mostly implies D if and only if cer(C, D) > 0.5. Thus, we can replace the flow graph shown in Figure 7 by the approximate flow graph presented in Figure 8. From this graph we can see that parties B, C and D form a coalition which is in conflict with party A, i.e., every member of the coalition is in conflict with party A. The corresponding conflict graph is shown in Figure 9. Moreover, from the flow graph shown in Figure 7 we can obtain an "inverse" approximate flow graph, which is shown in Figure 10. This flow graph contains all inverse decision rules with certainty factor greater than 0.5. From this graph we can see that yes votes were obtained mostly from party A, and no votes mostly from party D.


Fig. 7. Flow graph for Table 11

Fig. 8. “Approximate” ﬂow graph

Fig. 9. Conﬂict graph

We can also compute dependencies between parties and voting results; the results are shown in Figure 11.

6.7 Decision Networks

Ideas given in the previous sections can be also presented in logical terms, as shown in what follows.


Fig. 10. An “inverse” approximate ﬂow graph

Fig. 11. Dependencies between parties and voting results

The main problem in data mining consists in discovering patterns in data. The patterns are usually expressed in the form of decision rules, which are logical expressions of the form "if Φ then Ψ", where Φ and Ψ are logical formulas (propositional functions) used to express properties of objects of interest. Any set of decision rules is called a decision algorithm. Thus knowledge discovery from data consists in representing hidden relationships between data in the form of decision algorithms. However, for some applications it is not enough to give only a set of decision rules describing relationships in the database. Sometimes knowledge of the relationships between decision rules is also necessary in order to better understand the data structure. To this end we propose to employ a decision algorithm in which the relationships between decision rules are also pointed out, called a decision network. The decision network is a finite, directed, acyclic graph whose nodes represent logical formulas and whose branches are interpreted as decision rules. Thus


every path in the graph represents a chain of decision rules, which will be used to describe compound decisions. Some properties of decision networks will be given and a simple example will be used to illustrate the presented ideas and show possible applications. Let U be a non-empty finite set, called the universe, and let Φ, Ψ be logical formulas. The meaning of Φ in U, denoted by ||Φ||, is the set of all elements of U that satisfy Φ in U. The truth value of Φ, denoted val(Φ), is defined as card(||Φ||)/card(U), where card(X) denotes the cardinality of X. Let F be a set of formulas. By a decision network over S = (U, F) we mean a pair N = (F, R), where R ⊆ F × F is a binary relation, called a consequence relation. Any pair (Φ, Ψ) ∈ R, Φ ≠ Ψ, is referred to as a decision rule (in N). We assume that S is known and we will not refer to it in what follows. A decision rule (Φ, Ψ) will also be presented as an expression Φ → Ψ, read "if Φ then Ψ", where Φ and Ψ are referred to as the predecessor (conditions) and successor (decisions) of the rule, respectively. The number supp(Φ, Ψ) = card(||Φ ∧ Ψ||) will be called the support of the rule Φ → Ψ. We will consider nonvoid decision rules only, i.e., rules such that supp(Φ, Ψ) ≠ 0. With every decision rule Φ → Ψ we associate its strength defined as

str(Φ, Ψ) = supp(Φ, Ψ) / card(U).   (96)

Moreover, with every decision rule Φ → Ψ we associate the certainty factor defined as

cer(Φ, Ψ) = str(Φ, Ψ) / val(Φ),   (97)

and the coverage factor of Φ → Ψ

cov(Φ, Ψ) = str(Φ, Ψ) / val(Ψ),   (98)

where val(Φ) ≠ 0 and val(Ψ) ≠ 0. The coefficients can be computed from data or can be a subjective assessment. We assume that

val(Φ) = Σ_{Ψ∈Suc(Φ)} str(Φ, Ψ)   (99)

and val(Ψ ) =

str(Φ, Ψ ),

(100)

Φ∈P re(Ψ )

where Suc(Φ) and P re(Ψ ) are sets of all successors and predecessors of the corresponding formulas, respectively. Consequently we have cer(φ, Ψ ) = cov(Φ, Ψ ) = 1. (101) Suc(Φ)

P re(Ψ )
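As a minimal sketch (ours, not part of the paper), the factors (96)–(98) can be computed directly from rule supports, and the normalization conditions (99)–(101) then hold by construction; the rule names and supports below are hypothetical.

def factors(supports, n):
    # supports: {(phi, psi): supp(phi, psi)}, n: card(U).
    str_ = {rule: s / n for rule, s in supports.items()}            # (96)
    val_pred, val_succ = {}, {}
    for (phi, psi), s in str_.items():
        val_pred[phi] = val_pred.get(phi, 0.0) + s                  # (99)
        val_succ[psi] = val_succ.get(psi, 0.0) + s                  # (100)
    cer = {(p, q): s / val_pred[p] for (p, q), s in str_.items()}   # (97)
    cov = {(p, q): s / val_succ[q] for (p, q), s in str_.items()}   # (98)
    return str_, cer, cov

str_, cer, cov = factors({("p", "q"): 3, ("p", "r"): 2, ("s", "q"): 5}, 10)
print(round(cer[("p", "q")], 2), round(cov[("p", "q")], 2))  # 0.6 0.38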


If a decision rule Φ → Ψ uniquely determines decisions in terms of conditions, i.e., if cer(Φ, Ψ) = 1, then the rule is certain, otherwise the rule is uncertain. If a decision rule Φ → Ψ covers all decisions, i.e., if cov(Φ, Ψ) = 1, then the decision rule is total, otherwise the decision rule is partial. Immediate consequences of (97) and (98) are:

cer(Φ, Ψ) = cov(Φ, Ψ) val(Ψ) / val(Φ),   (102)

cov(Φ, Ψ) = cer(Φ, Ψ) val(Φ) / val(Ψ).   (103)

Note that (102) and (103) are Bayes' formulas. This relationship, as mentioned previously, was first observed by Łukasiewicz [88]. Any sequence of formulas Φ1, . . . , Φn, Φi ∈ F, such that for every i, 1 ≤ i ≤ n − 1, (Φi, Φi+1) ∈ R, will be called a path from Φ1 to Φn and will be denoted by [Φ1 . . . Φn]. We define

cer[Φ1 . . . Φn] = Π_{i=1}^{n−1} cer(Φi, Φi+1),   (104)

cov[Φ1 . . . Φn] = Π_{i=1}^{n−1} cov(Φi, Φi+1),   (105)

str[Φ1 . . . Φn] = val(Φ1) cer[Φ1 . . . Φn] = val(Φn) cov[Φ1 . . . Φn].   (106)

The set of all paths from Φ to Ψ, denoted ⟨Φ, Ψ⟩, will be called a connection from Φ to Ψ. For a connection we have

cer⟨Φ, Ψ⟩ = Σ_{[Φ...Ψ] ∈ ⟨Φ,Ψ⟩} cer[Φ . . . Ψ],   (107)

cov⟨Φ, Ψ⟩ = Σ_{[Φ...Ψ] ∈ ⟨Φ,Ψ⟩} cov[Φ . . . Ψ],   (108)

str⟨Φ, Ψ⟩ = Σ_{[Φ...Ψ] ∈ ⟨Φ,Ψ⟩} str[Φ . . . Ψ] = val(Φ) cer⟨Φ, Ψ⟩ = val(Ψ) cov⟨Φ, Ψ⟩.   (109)

With every decision network we can associate a flow graph [70, 71]. Formulas of the network are interpreted as nodes of the graph, and decision rules as directed branches of the flow graph, whereas the strength of a decision rule is interpreted as the flow of the corresponding branch.
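A small sketch (ours, with hypothetical numbers) of how the path coefficients (104)–(106) and the connection coefficients (107)–(109) can be evaluated on such a graph:

from math import prod

def path_cer(path, cer):
    # (104): product of edge certainties along the path.
    return prod(cer[(a, b)] for a, b in zip(path, path[1:]))

def path_cov(path, cov):
    # (105): product of edge coverages along the path.
    return prod(cov[(a, b)] for a, b in zip(path, path[1:]))

def connection(paths, cer, cov, val):
    # (107)-(109) for the set of all paths from phi to psi.
    c = sum(path_cer(p, cer) for p in paths)
    v = sum(path_cov(p, cov) for p in paths)
    phi = paths[0][0]
    return c, v, val[phi] * c   # str<phi,psi> = val(phi) * cer<phi,psi>

# Hypothetical two-layer network: phi -> {a, b} -> psi.
cer = {("phi", "a"): 0.6, ("phi", "b"): 0.4, ("a", "psi"): 0.5, ("b", "psi"): 0.25}
cov = {("phi", "a"): 1.0, ("phi", "b"): 1.0, ("a", "psi"): 0.75, ("b", "psi"): 0.25}
val = {"phi": 0.4}
print(connection([["phi", "a", "psi"], ["phi", "b", "psi"]], cer, cov, val))
# approximately (0.4, 1.0, 0.16)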


Let Φ → Ψ be a decision rule. Formulas Φ and Ψ are independent of each other if

str(Φ, Ψ) = val(Φ) val(Ψ).   (110)

Consequently

cer(Φ, Ψ) = str(Φ, Ψ) / val(Φ) = val(Ψ)   (111)

and

cov(Φ, Ψ) = str(Φ, Ψ) / val(Ψ) = val(Φ).   (112)

If

cer(Φ, Ψ) > val(Ψ)   (113)

or

cov(Φ, Ψ) > val(Φ),   (114)

then Φ and Ψ depend positively on each other. Similarly, if

cer(Φ, Ψ) < val(Ψ)   (115)

or

cov(Φ, Ψ) < val(Φ),   (116)

then Φ and Ψ depend negatively on each other. With every decision rule Φ → Ψ we associate a dependency factor η(Φ, Ψ) defined as

η(Φ, Ψ) = (cer(Φ, Ψ) − val(Ψ)) / (cer(Φ, Ψ) + val(Ψ)) = (cov(Φ, Ψ) − val(Φ)) / (cov(Φ, Ψ) + val(Φ)).   (117)

It is easy to check that if η(Φ, Ψ) = 0, then Φ and Ψ are independent of each other; if −1 < η(Φ, Ψ) < 0, then Φ and Ψ are negatively dependent; and if 0 < η(Φ, Ψ) < 1, then Φ and Ψ are positively dependent on each other.
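For illustration only (the numeric inputs are hypothetical), the dependency factor (117) and its sign can be computed as follows:

def dependency_factor(cer, val_psi):
    # eta(phi, psi) = (cer(phi, psi) - val(psi)) / (cer(phi, psi) + val(psi))   (117)
    return (cer - val_psi) / (cer + val_psi)

print(dependency_factor(0.5, 0.5))  # 0.0  -> independent
print(dependency_factor(0.8, 0.5))  # > 0  -> positive dependency
print(dependency_factor(0.2, 0.5))  # < 0  -> negative dependency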

6.8 An Example

The flow graphs given in Figures 4–6 can now be presented as shown in Figures 12–14, respectively. These flow graphs clearly show the relational structure between the formulas involved in the voting process.


Fig. 12. Decision network for ﬂow graph from Figure 4

Fig. 13. Decision network for ﬂow graph from Figure 5

6.9 Inference Rules and Decision Rules

In this section we are going to show the relationship between the previously discussed concepts and the reasoning schemes used in logical inference. The basic rules of inference used in classical logic are Modus Ponens (MP) and Modus Tollens (MT). These two reasoning patterns start from some general knowledge about reality, expressed by a true implication "if Φ then Ψ". Then, based on a true premise Φ, we arrive at a true conclusion Ψ (MP), or, if the negation of the conclusion Ψ is true, we infer that the negation of the premise Φ is true (MT). In reasoning from data (data mining) we also use rules "if Φ then Ψ", called decision rules, to express our knowledge about reality, but the meaning of decision rules is different. They do not express general knowledge but refer to partial facts. Therefore decision rules are not true or false, but only probable (possible).


Fig. 14. Decision network for ﬂow graph from Figure 6

In this paper we compare inference rules and decision rules in the context of decision networks, proposed by the author as a new approach to analyzing reasoning patterns in data. A decision network is a set F of logical formulas together with a binary relation R ⊆ F × F over the set of formulas, called a consequence relation. Elements of the relation are called decision rules. The decision network can be perceived as a directed graph whose nodes are formulas and whose branches are decision rules. Thus the decision network can be seen as a knowledge representation system revealing the structure of the data in a database, and discovering patterns in a database represented by a decision network boils down to discovering patterns in the network. The analogy to the modus ponens and modus tollens inference rules will be shown and discussed. The classical rules of inference used in logic are Modus Ponens and Modus Tollens, which have the form

if Φ → Ψ is true and Φ is true, then Ψ is true

and

if Φ → Ψ is true and ∼Ψ is true, then ∼Φ is true,

respectively.


Modus Ponens allows us to obtain true consequences from true premises, whereas Modus Tollens yields the true negation of the premise from the true negation of the conclusion. In reasoning about data (data analysis) the situation is different. Instead of true propositions we consider propositional functions which are true to a "degree", i.e., they assume truth values lying between 0 and 1; in other words, they are probable, not true. Besides, instead of true inference rules we now have decision rules, which are neither true nor false. They are characterized by three coefficients: the strength, certainty and coverage factors. The strength of a decision rule can be understood as a counterpart of the truth value of an inference rule, and it represents the frequency of the decision rule in the database. Thus employing decision rules to discover patterns in data boils down to computing the probability of the conclusion in terms of the probability of the premise and the strength of the decision rule, or the probability of the premise from the probability of the conclusion and the strength of the decision rule. Hence, the role of decision rules in data analysis is somewhat similar to that of classical inference patterns, as shown by the schemes below. The two basic rules of inference for data analysis are as follows:

if Φ → Ψ has cer(Φ, Ψ) and cov(Φ, Ψ)
and Φ is true with the probability val(Φ)
then Ψ is true with the probability val(Ψ) = α val(Φ).

Similarly,

if Φ → Ψ has cer(Φ, Ψ) and cov(Φ, Ψ)
and Ψ is true with the probability val(Ψ)
then Φ is true with the probability val(Φ) = α⁻¹ val(Ψ).

The above inference rules can be considered counterparts of Modus Ponens and Modus Tollens for data analysis and will be called Rough Modus Ponens (RMP) and Rough Modus Tollens (RMT), respectively. There are, however, essential differences between MP (MT) and RMP (RMT). First, instead of the truth values associated with inference rules, we consider certainty and coverage factors (conditional probabilities) assigned to decision rules. Second, in the case of decision rules, in contrast to inference rules, the truth value of a conclusion (RMP) depends not on a single premise only but, in fact, on the truth values of the premises of all decision rules having the same conclusion. Similarly for RMT. Let us also notice that inference rules are transitive, i.e., if Φ → Ψ and Ψ → Θ then Φ → Θ, whereas decision rules are not. If Φ → Ψ and Ψ → Θ, then we have to compute the certainty, coverage and strength of the rule Φ → Θ employing formulas (104), (105), (107), (108). This shows clearly the difference between reasoning patterns using classical inference rules in logical reasoning and using decision rules in reasoning about data.
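The following sketch (ours, with hypothetical figures) illustrates the point about RMP: the probability of a conclusion Ψ is accumulated over all decision rules sharing that conclusion, since val(Ψ) = Σ_{Φ ∈ Pre(Ψ)} cer(Φ, Ψ) val(Φ) by (97) and (100).

def rough_modus_ponens(rules, val):
    # rules: {(phi, psi): cer(phi, psi)}; val: {phi: val(phi)}.
    out = {}
    for (phi, psi), cer in rules.items():
        out[psi] = out.get(psi, 0.0) + cer * val[phi]
    return out

# Two premises supporting the same conclusion q (hypothetical numbers).
print(rough_modus_ponens({("p1", "q"): 0.8, ("p2", "q"): 0.3}, {"p1": 0.5, "p2": 0.5}))
# approximately {'q': 0.55}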

6.10 An Example

Suppose that three models of cars, Φ1, Φ2 and Φ3, are sold to three disjoint groups of customers, Θ1, Θ2 and Θ3, through four dealers, Ψ1, Ψ2, Ψ3 and Ψ4. Moreover, let us assume that car models and dealers are distributed as shown in Figure 15. Applying RMP to the data shown in Figure 15 we get the results shown in Figure 16.

Fig. 15. Distributions of car models and dealers

Fig. 16. The result of application of RMP to data from Figure 15


Fig. 17. Distribution of car models among customer groups

In order to find how the car models are distributed among the customer groups, we have to compute all connections between car models and customer groups, i.e., to apply RMP to the data given in Figure 16. The results are shown in Figure 17. For example, we can see from the decision network that consumer group Θ2 bought 21% of car model Φ1, 35% of car model Φ2 and 44% of car model Φ3. Conversely, car model Φ1 is distributed among the customer groups as follows: 31% of the cars were bought by group Θ1, 57% by group Θ2 and 12% by group Θ3.

7 Summary

The basic concept of mathematics, the set, leads to antinomies, i.e., it is contradictory. This deficiency of sets has a philosophical rather than practical meaning, for the sets used in mathematics are free from the faults discussed above. Antinomies are associated with very "artificial" sets constructed in logic, not with the sets used in mathematics. That is why we can use mathematics safely. Philosophically, fuzzy set theory and rough set theory are two different approaches to vagueness, not remedies for the difficulties of classical set theory. Fuzzy set theory addresses the gradualness of knowledge, expressed by the fuzzy membership, whereas rough set theory addresses the granularity of knowledge, expressed by the indiscernibility relation. Practically, rough set theory can be viewed as a new method of intelligent data analysis. Rough set theory has found many applications in medical data analysis, finance, voice recognition, image processing, and other areas. However, the approach presented in this paper is too simple for many real-life applications and has been extended in many ways by various authors. A detailed discussion of the above issues can be found in books (see, e.g., [18–27, 12, 28–30]), special issues of journals (see, e.g., [31–34, 34–38]), proceedings of international conferences (see, e.g., [39–49]), tutorials (e.g., [50–53]), and on the Internet (see, e.g., www.roughsets.org, logic.mimuw.edu.pl, rsds.wsiz.rzeszow.pl).


Besides, rough set theory has inspired a new look at Bayes' theorem. Bayesian inference consists in updating prior probabilities, by means of data, to posterior probabilities. In the rough set approach, Bayes' theorem reveals patterns in data, which are then used to draw conclusions from the data in the form of decision rules. Moreover, we have shown a new mathematical model of flow networks, which can be used for the analysis of decision algorithms. In particular, it has been revealed that the flow in a flow network is governed by Bayes' rule, which here has an entirely deterministic meaning, and can be used in the study of decision algorithms. Also, a new look at dependencies in databases, based on Łukasiewicz's ideas on the independence of logical formulas, has been presented.

Acknowledgment. I would like to thank Prof. Andrzej Skowron for useful discussions and help in the preparation of this paper.

References 1. Zadeh, L.A.: Fuzzy sets. Information and Control 8 (1965) 338–353 2. Pawlak, Z.: Rough sets. International Journal of Computer and Information Sciences 11 (1982) 341–356 3. Ziarko, W.: Variable precision rough set model. Journal of Computer and System Sciences 46 (1993) 39–59 ˙ 4. Polkowski, L., Skowron, A., Zytkow, J.: Rough foundations for rough sets. In [40] 55–58 5. Skowron, A., Stepaniuk, J.: Tolerance approximation spaces. Fundamenta Informaticae 27 (1996) 245–253 6. Polkowski, L., Skowron, A.: Rough mereology: A new paradigm for approximate reasoning. International Journal of Approximate Reasoning 15 (1996) 333–365 7. Slowi´ nski, R., Vanderpooten, D.: Similarity relation as a basis for rough approximations. In Wang, P.P., ed.: Machine Intelligence & Soft-Computing, Vol. IV. Bookwrights, Raleigh, NC (1997) 17–33 8. Slowi´ nski, R., Vanderpooten, D.: A generalized deﬁnition of rough approximations based on similarity. IEEE Transactions on Data and Knowledge Engineering 12(2) (2000) 331–336 9. Stepaniuk, J.: Knowledge discovery by application of rough set models. In [26] 137–233 10. Skowron, A.: Toward intelligent systems: Calculi of information granules. Bulletin of the International Rough Set Society 5 (2001) 9–30 11. Greco, A., Matarazzo, B., Slowi´ nski, R.: Rough approximation by dominance relations. International Journal of Intelligent Systems 17 (2002) 153–171 12. Polkowski, L., ed.: Rough Sets: Mathematical Foundations. Advances in Soft Computing. Physica-Verlag, Heidelberg (2002) 13. Skowron, A., Stepaniuk, J.: Information granules and rough-neural computing. In [30] 43–84 14. Skowron, A.: Approximation spaces in rough neurocomputing. In [29] 13–22 15. Wr´ oblewski, J.: Adaptive aspects of combining approximation spaces. In [30] 139–156


16. Yao, Y.Y.: Informaton granulation and approximation in a decision-theoretical model of rough sets. In [30] 491–520 17. Skowron, A., Swiniarski, R., Synak, P.: Approximation spaces and information granulation (submitted). In: Fourth International Conference on Rough Sets and Current Trends in Computing (RSCTC’04), Uppsala, Sweden, June 1-5, 2004. Lecture Notes in Computer Science. Springer-Verlag, Heidelberg, Germany (2004) 18. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Volume 9 of System Theory, Knowledge Engineering and Problem Solving. Kluwer Academic Publishers, Dordrecht, The Netherlands (1991) 19. Slowi´ nski, R., ed.: Intelligent Decision Support - Handbook of Applications and Advances of the Rough Sets Theory. Volume 11 of System Theory, Knowledge Engineering and Problem Solving. Kluwer Academic Publishers, Dordrecht, The Netherlands (1992) 20. Lin, T.Y., Cercone, N., eds.: Rough Sets and Data Mining - Analysis of Imperfect Data. Kluwer Academic Publishers, Boston, USA (1997) 21. Orlowska, E., ed.: Incomplete Information: Rough Set Analysis. Volume 13 of Studies in Fuzziness and Soft Computing. Springer-Verlag/Physica-Verlag, Heidelberg, Germany (1997) 22. Polkowski, L., Skowron, A., eds.: Rough Sets in Knowledge Discovery 1: Methodology and Applications. Volume 18 of Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, Germany (1998) 23. Polkowski, L., Skowron, A., eds.: Rough Sets in Knowledge Discovery 2: Applications, Case Studies and Software Systems. Volume 19 of Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg, Germany (1998) 24. Pal, S.K., Skowron, A., eds.: Rough Fuzzy Hybridization: A New Trend in Decision-Making. Springer-Verlag, Singapore (1999) 25. Duentsch, I., Gediga, G.: Rough set data analysis: A road to non-invasive knowledge discovery. Methodos Publishers, Bangor, UK (2000) 26. Polkowski, L., Lin, T.Y., Tsumoto, S., eds.: Rough Set Methods and Applications: New Developments in Knowledge Discovery in Information Systems. Volume 56 of Studies in Fuzziness and Soft Computing. Springer-Verlag/Physica-Verlag, Heidelberg, Germany (2000) 27. Lin, T.Y., Yao, Y.Y., Zadeh, L.A., eds.: Rough Sets, Granular Computing and Data Mining. Studies in Fuzziness and Soft Computing. Physica-Verlag, Heidelberg (2001) 28. Demri, S., Orlowska, E., eds.: Incomplete Information: Structure, Inference, Complexity. Monographs in Theoretical Cpmputer Sience. Springer-Verlag, Heidelberg, Germany (2002) 29. Inuiguchi, M., Hirano, S., Tsumoto, S., eds.: Rough Set Theory and Granular Computing. Volume 125 of Studies in Fuzziness and Soft Computing. SpringerVerlag, Heidelberg (2003) 30. Pal, S.K., Polkowski, L., Skowron, A., eds.: Rough-Neural Computing: Techniques for Computing with Words. Cognitive Technologies. Springer-Verlag, Heidelberg, Germany (2003) 31. Slowi´ nski, R., Stefanowski, J., eds.: Special issue: Proceedings of the First International Workshop on Rough Sets: State of the Art and Perspectives, Kiekrz, Pozna´ n, Poland, September 2–4 (1992). Volume 18(3-4) of Foundations of Computing and Decision Sciences. (1993) 32. Ziarko, W., ed.: Special issue. Volume 11(2) of Computational Intelligence: An International Journal. (1995)


33. Ziarko, W., ed.: Special issue. Volume 27(2-3) of Fundamenta Informaticae. (1996) 34. Lin, T.Y., ed.: Special issue. Volume 2(2) of Journal of the Intelligent Automation and Soft Computing. (1996) 35. Peters, J., Skowron, A., eds.: Special issue on a rough set approach to reasoning about data. Volume 16(1) of International Journal of Intelligent Systems. (2001) 36. Cercone, N., Skowron, A., Zhong, N., eds.: (Special issue). Volume 17(3) of Computational Intelligence. (2001) 37. Pal, S.K., Pedrycz, W., Skowron, A., Swiniarski, R., eds.: Special volume: Roughneuro computing. Volume 36 of Neurocomputing. (2001) 38. Skowron, A., Pal, S.K., eds.: Special volume: Rough sets, pattern recognition and data mining. Volume 24(6) of Pattern Recognition Letters. (2003) 39. Ziarko, W., ed.: Rough Sets, Fuzzy Sets and Knowledge Discovery: Proceedings of the Second International Workshop on Rough Sets and Knowledge Discovery (RSKD’93), Banﬀ, Alberta, Canada, October 12–15 (1993). Workshops in Computing. Springer–Verlag & British Computer Society, London, Berlin (1994) 40. Lin, T.Y., Wildberger, A.M., eds.: Soft Computing: Rough Sets, Fuzzy Logic, Neural Networks, Uncertainty Management, Knowledge Discovery. Simulation Councils, Inc., San Diego, CA, USA (1995) 41. Tsumoto, S., Kobayashi, S., Yokomori, T., Tanaka, H., Nakamura, A., eds.: Proceedings of the The Fourth Internal Workshop on Rough Sets, Fuzzy Sets and Machine Discovery, November 6-8, University of Tokyo , Japan. The University of Tokyo, Tokyo (1996) 42. Polkowski, L., Skowron, A., eds.: First International Conference on Rough Sets and Soft Computing (RSCTC’98), Warsaw, Poland, June 22-26, 1998. Volume 1424 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (1998) 43. Zhong, N., Skowron, A., Ohsuga, S., eds.: Proceedings of the 7-th International Workshop on Rough Sets, Fuzzy Sets, Data Mining, and Granular-Soft Computing (RSFDGrC’99), Yamaguchi, November 9-11, 1999. Volume 1711 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (1999) 44. Ziarko, W., Yao, Y., eds.: Proceedings of the 2-nd International Conference on Rough Sets and Current Trends in Computing (RSCTC’2000), Banﬀ, Canada, October 16-19, 2000. Volume 2005 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (2001) 45. Hirano, S., Inuiguchi, M., Tsumoto, S., eds.: Proceedings of International Workshop on Rough Set Theory and Granular Computing (RSTGC-2001), Matsue, Shimane, Japan, May 20-22, 2001. Volume 5(1-2) of Bulletin of the International Rough Set Society. International Rough Set Society, Matsue, Shimane (2001) 46. Terano, T., Nishida, T., Namatame, A., Tsumoto, S., Ohsawa, Y., Washio, T., eds.: New Frontiers in Artiﬁcial Intelligence, Joint JSAI’01 Workshop PostProceedings. Volume 2253 of Lecture Notes in Artiﬁcial Intelligence. SpringerVerlag, Heidelberg (2001) 47. Alpigini, J.J., Peters, J.F., Skowron, A., Zhong, N., eds.: Third International Conference on Rough Sets and Current Trends in Computing (RSCTC’02), Malvern, PA, October 14-16, 2002. Volume 2475 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (2002) 48. Skowron, A., Szczuka, M., eds.: Proceedings of the Workshop on Rough Sets in Knowledge Discovery and Soft Computing at ETAPS 2003 (RSKD’03), April 12-13, 2003. Volume 82(4) of Electronic Notes in Computer Science. Elsevier, Amsterdam, Netherlands (2003)


49. Wang, G., Liu, Q., Yao, Y., Skowron, A., eds.: Proceedings of the 9-th International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing (RSFDGrC’03), Chongqing, China, May 26-29, 2003. Volume 2639 of Lecture Notes in Artiﬁcial Intelligence. Springer-Verlag, Heidelberg (2003) 50. Komorowski, J., , Pawlak, Z., Polkowski, L., Skowron, A.: Rough sets: a tutorial. In [24] 3–98 51. Pawlak, Z., Polkowski, L., Skowron, A.: Rough sets and rough logic: A KDD perspective. In [26] 583–646 52. Skowron, A., Pawlak, Z., Komorowski, J., Polkowski, L.: A rough set perspective ˙ on data and knowledge. In Kloesgen, W., Zytkow, J., eds.: Handbook of KDD. Oxford University Press, Oxford (2002) 134–149 53. Pawlak, Z., Polkowski, L., Skowron, A.: Rough set theory. In Wah, B., ed.: EncyClopedia Of Computer Science and Engineering. Wiley, New York, USA (2004) 54. Cantor, G.: Grundlagen einer allgemeinen Mannigfaltigkeitslehre, Leipzig, Germany (1883) 55. Russell, B.: The Principles of Mathematics. George Allen & Unwin Ltd., London, Great Britain (1903) 56. Russell, B.: Vagueness. The Australasian Journal of Psychology and Philosophy 1 (1923) 84–92 57. Black, M.: Vagueness: An exercise in logical analysis. Philosophy of Science 4(4) (1937) 427–455 58. Hempel, C.G.: Vagueness and logic. Philosophy of Science 6 (1939) 163–180 59. Fine, K.: Vagueness, truth and logic. Synthese 30 (1975) 265–300 60. Keefe, R., Smith, P.: Vagueness: A Reader. MIT Press, Cambridge, MA (1999) 61. Keefe, R.: Theories of Vagueness. Cambridge University Press, Cambridge, U.K. (2000) 62. Frege, G.: Grundgesetzen der Arithmetik, 2. Verlag von Herman Pohle, Jena, Germany (1903) 63. Read, S.: Thinking about Logic - An Introduction to Philosophy of Logic. Oxford University Press, Oxford (1995) 64. Le´sniewski, S.: Grungz¨ uge eines neuen systems der grundlagen der mathematik. Fundamenta Matematicae 14 (1929) 1–81 65. Pawlak, Z., Skowron, A.: Rough membership functions. In Yager, R., Fedrizzi, M., Kacprzyk, J., eds.: Advances in the Dempster-Shafer Theory of Evidence, New York, NY, John Wiley & Sons (1994) 251–271 66. Skowron, A., Rauszer, C.: The discernibility matrices and functions in information systems. In [19] 331–362 67. Berthold, M., Hand, D.J.: Intelligent Data Analysis. An Introduction. SpringerVerlag, Berlin, Heidelberg, New York (1999) 68. Box, G.E.P., Tiao, G.C.: Bayesian Inference in Statistical Analysis. John Wiley and Sons, Inc., New York, Chichester, Brisbane, Toronto, Singapore (1992) 69. Pawlak, Z.: Rough sets and decision algorithms. In [44] 30–45 70. Pawlak, Z.: In pursuit of patterns in data reasoning from data – the rough set way. In [47] 1–9 71. Pawlak, Z.: Probability, truth and ﬂow graphs. In [48] 1–9 72. Wong, S., Ziarko, W.: Algebraic versus probabilistic independence in decision theory. In Ras, Z.W., Zemankova, M., eds.: Proceedings of the ACM SIGART First International Symposium on Methodologies for Intelligent Systems Knoxville (ISMIS’86), Tennessee, USA, October 22-24, 1986. ACM SIGART, USA (1986) 207–212


73. Wong, S., Ziarko, W.: On learning and evaluation of decision rules in the context of rough sets. In Ras, Z.W., Zemankova, M., eds.: Proceedings of the ACM SIGART First International Symposium on Methodologies for Intelligent Systems Knoxville (ISMIS’86), Tennessee, USA, October 22-24, 1986. ACM SIGART, USA (1986) 308–324 74. Pawlak, Z., Wong, S.K.M., Ziarko, W.: Rough sets: Probabilistic versus deterministic approach. International Journal of Man-Machine Studies 29(1) (1988) 81–95 75. Yamauchi, Y., Mukaidono, M.: Probabilistic inference and bayeasian theorem based on logical implication. In [43] 334–342 76. Intan, R., an Y. Y. Yao, M.M.: Generalization of rough sets with alpha-coverings of the universe induced by conditional probability relations. In [46] 311–315 ´ ezak, D.: Approximate decision reducts (in Polish). PhD thesis, Warsaw Uni77. Sl¸ versity, Warsaw, Poland (2002) ´ ezak, D.: Approximate bayesian networks. In Bouchon-Meunier, B., Gutierrez78. Sl¸ Rios, J., Magdalena, L., Yager, R., eds.: Technologies for Constructing Intelligent Systems 2: Tools. Volume 90 of Studies in Fuzziness and Soft Computing. Springer-Verlag, Heidelberg, Germany (2002) 313–326 ´ ezak, D., Wr´ 79. Sl¸ oblewski, J.: Approximate bayesian network classiﬁers. In [47] 365–372 80. Yao, Y.Y.: Information granulation and approximation. In [30] 491–516 ´ ezak, D.: Approximate markov boundaries and bayesian networks: Rough set 81. Sl¸ approach. In [29] 109–121 ´ ezak, D., Ziarko, W.: Attribute reduction in the bayesian version of variable 82. Sl¸ precision rough set model. In [48] ´ ezak, D., Ziarko, W.: Variable precision bayesian rough set model. In [49] 83. Sl¸ 312–315 84. Wong, S.K.M., Wu, D.: A common framework for rough sets, databases, and bayesian networks. In [49] 99–103 ´ ezak, D.: The rough bayesian model for distributed decision systems (submit85. Sl¸ ted). In: Fourth International Conference on Rough Sets and Current Trends in Computing (RSCTC’04), Uppsala, Sweden, June 1-5, 2004. Lecture Notes in Computer Science. Springer-Verlag, Heidelberg, Germany (2004) 86. Swinburne, R.: Bayes Theorem. Volume 113 of Proceedings of the British Academy. Oxford University Press, Oxford, UK (2003) 87. Bernardo, J.M., Smith, A.F.M.: Bayesian Theory. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons, Chichester, New York, Brisbane, Toronto, Singapore (1994) 88. L ukasiewicz, J.: Die logischen grundlagen der wahrscheinilchkeitsrechnung, Krak´ ow 1913. In Borkowski, L., ed.: Jan L ukasiewicz - Selected Works. North Holland Publishing Company, Amstardam, London, Polish Scientiﬁc Publishers, Warsaw (1970) 89. Adams, E.W.: The Logic of Conditionals. An Application of Probability to Deductive Logic. D. Reidel Publishing Company, Dordrecht, Boston (1975) 90. Grzymala-Busse, J.W.: LERS - a system for learning from examples based on rough sets. In [19] 3–18 91. Skowron, A.: Boolean reasoning for decision rules generation. In Komorowski, J., Ra´s, Z.W., eds.: Seventh International Symposium for Methodologies for Intelligent Systems (ISMIS’93), Trondheim, Norway, June 15-18. Volume 689 of Lecture Notes in Artiﬁcial Intelligence., Heidelberg, Springer-Verlag (1993) 295–305


92. Pawlak, Z., Skowron, A.: A rough set approach for decision rules generation. In: Thirteenth International Joint Conference on Artiﬁcial Intelligence (IJCAI’93), Chamb´ery, France, Morgan Kaufmann (1993) 114–119 93. Shan, N., Ziarko, W.: An incremental learning algorithm for constructing decision rules. In Ziarko, W., ed.: Rough Sets, Fuzzy Sets and Knowledge Discovery, Berlin, Germany, Springer Verlag (1994) 326–334 94. Nguyen, H.S.: Discretization of Real Value Attributes, Boolean Reasoning Approach. PhD thesis, Warsaw University, Warsaw, Poland (1997) 95. Slowi´ nski, R., Stefanowski, J.: Rough family – software implementation of the rough set theory. In [23] 581–586 96. Nguyen, H.S., Nguyen, S.H.: Pattern extraction from data. Fundamenta Informaticae 34 (1998) 129–144 97. Nguyen, H.S., Nguyen, S.H.: Discretization methods for data mining. In [22] 451–482 98. Skowron, A.: Rough sets in KDD - plenary talk. In Shi, Z., Faltings, B., Musen, M., eds.: 16-th World Computer Congress (IFIP’00): Proceedings of Conference on Intelligent Information Processing (IIP’00). Publishing House of Electronic Industry, Beijing (2002) 1–14 99. Bazan, J., Nguyen, H.S., Nguyen, S.H., Synak, P., Wr´ oblewski, J.: Rough set algorithms in classiﬁcation problems. In [26] 49–88 100. Grzymala-Busse, J.W., Shah, P.: A comparison of rule matching methods used in aq15 and lers. In: Proceedings of the Twelfth International Symposium on Methodologies for Intelligent Systems (ISMIS’00), Charlotte, NC, October 11-14, 2000. Volume 1932 of Lecture Nites in Artiﬁcial Intelligence., Berlin, Germany, Springer-Verlag (2000) 148–156 101. Grzymala-Busse, J., Hu, M.: A comparison of several approaches to missing attribute values in data mining. In [44] 340 – 347 102. Greco, S., Matarazzo, B., Slowi´ nski, R., Stefanowski, J.: An algorithm for induction of decision rules consistent with dominance principle. In [44] 304–313 103. Skowron, A.: Rough sets and boolean reasoning. In Pedrycz, W., ed.: Granular Computing: an Emerging Paradigm. Volume 70 of Studies in Fuzziness and Soft Computing. Springer-Verlag/Physica-Verlag, Heidelberg, Germany (2001) 95–124 104. Greco, S., Matarazzo, B., Slowi´ nski, R.: Rough sets theory for multicriteria decision analysis. European J. of Operational Research 129(1) (2001) 1–47 105. Casti, J.L.: Alternate Realities: Mathematical Models of Nature and Man. John Wiley and Sons, Inc., New York, Chichester, Brisbane, Toronto, Singapore (1989) 106. Coombs, C.H., Avruin, G.S.: The Structure of Conﬂicts. Lawrence Erlbaum, London (1988) 107. Deja, R.: Conﬂict analysis, rough set methods and applications. In [26] 491–520 108. Maeda, Y., Senoo, K., Tanaka, H.: Interval density function in conﬂict analysis. In [43] 382–389 109. Nakamura, A.: Conﬂict logic with degrees. In [24] 136–150 110. Pawlak, Z.: An inquiry into anatomy of conﬂicts. Journal of Information Sciences 109 (1998) 65–68 111. Ford, L.R., Fulkerson, D.R.: Flows in Networks. Princeton University Press, Princeton, New Jersey (1973) 112. Slowi´ nski, R., Greco, S.: A note on dependency factor. (2004) (manuscript).

Learning Rules from Very Large Databases Using Rough Multisets

Chien-Chung Chan
Department of Computer Science, University of Akron, Akron, OH 44325-4003
[email protected]

Abstract. This paper presents a mechanism called LERS-M for learning production rules from very large databases. It can be implemented using object-relational database systems, it can be used for distributed data mining, and it has a structure that matches well with parallel processing. LERS-M is based on rough multisets, and it is formulated using relational operations with the objective of being tightly coupled with database systems. The underlying representation used by LERS-M is multiset decision tables, which are derived from information multisystems. In addition, it is shown that multiset decision tables provide a simple way to compute Dempster-Shafer basic probability assignment functions from data sets.

1 Introduction

The development of computer technologies has provided many useful and efficient tools to produce, disseminate, store, and retrieve data in electronic form. As a consequence, ever-increasing streams of data are recorded in all types of databases. For example, in automated business activities, even simple transactions such as telephone calls, credit card charges, items in shopping carts, etc. are typically recorded in databases. These data are potentially beneficial to enterprises, because they may be used for designing effective marketing and sales plans based on consumers' shopping patterns and preferences collectively recorded in the databases. From databases of credit card charges, patterns of fraudulent charges may be detected and, hence, preventive actions may be taken. The raw data stored in databases are potentially lodes of useful information. In order to extract the ore, effective mining tools must be developed. The task of extracting useful information from data is not a new one. It has been a common interest in research areas such as statistical data analysis, machine learning, and pattern recognition. Traditional techniques developed in these areas are fundamental to the task, but they have limitations. For example, these tools usually assume that the collection of data is small enough to fit into the memory of a computer system so that it can be processed. This condition no longer holds in very large databases. Another limitation is that these tools are usually applicable only to static data sets. However, most databases are updated frequently by large streams of data. It is also typical that the databases of an enterprise are distributed over different locations. Issues and techniques related to finding useful information in distributed data need to be studied and developed.


There are three classical data mining problems: market basket analysis, clustering, and classification. Traditional machine learning systems are usually developed independently of database technology. One recent trend is to develop learning systems that are tightly coupled with relational or object-relational database systems for mining association rules and for mining tree classifiers [1–4]. Due to the maturity of database technology, these systems are more portable and scalable than traditional systems, and they are easier to integrate with OLAP (On-Line Analytical Processing) and data warehousing systems. Another trend is that more and more data are stored in distributed databases. Some distributed data mining systems have been developed [5]; however, not many have been tightly coupled with database system technology. In this paper, we introduce a mechanism called LERS-M for learning production rules from very large databases. It can be implemented using object-relational database systems, it can be used for distributed data mining, and it has a structure that matches well with parallel processing. LERS-M is similar to the LERS family of learning programs [6], which is based on rough set theory [7–9]. The main differences are that LERS-M is based on rough multisets [10] and that it is formulated using relational operations with the objective of being tightly coupled with database systems. The underlying representation used by LERS-M is multiset decision tables [11], which are derived from information multisystems [10]. In addition to facilitating the learning of rules, multiset decision tables can also be used to compute Dempster-Shafer belief functions from data [12], [14]. The methodology developed here can be used to design learning systems for knowledge discovery from distributed databases and to develop distributed rule-based expert systems and decision support systems. The paper is organized as follows. The problem addressed by this paper is formulated in Section 2. In Section 3, we review some related concepts. The concept of multiset decision tables and its properties are presented in Section 4. In Section 5, we present the LERS-M learning algorithm with an example and discussion. Conclusions are given in Section 6.

2 Problem Statements

In this paper we consider the problem of learning production rules from very large databases. For simplicity, a very large database is considered as a very large data table U defined by a finite nonempty set A of attributes. We assume that a very large data table can be stored in one single database or distributed over databases. By distributed databases, we mean that the data table U is divided into N smaller tables with sizes manageable by a database management system. In this abstraction, we do not consider the communication mechanisms used by a distributed database system, nor do we consider the costs of transferring data from one site to another. Briefly speaking, the problem of inductive learning of production rules from examples is to generate descriptions or rules characterizing the logical implication C → D from a collection U of examples, where C and D are sets of attributes used to describe the examples. The set C is called the condition attributes, and the set D is called the decision attributes. Usually, D is a singleton set, and the sets C and D do not overlap. The objective of learning is to find rules that predict the logical implication as accurately as possible when applied to new examples.


The objective of this paper is to develop a mechanism for generating production rules that takes into account the following issues: (1) the implication C → D may be uncertain; (2) if the set U of examples is divided into N smaller sets, how to determine the implication C → D; and (3) the result can be implemented using object-relational database technology.

3 Related Concepts

In the following, we review the concepts of rough sets, information systems, decision tables, rough multisets, information multisystems, and partitions of boundary sets.

3.1 Rough Sets, Information Systems, and Decision Tables

The fundamental assumption of rough set theory is that objects from the domain are perceived only through the accessible information about them, that is, the values of attributes that can be evaluated on these objects. Objects with the same information are indiscernible. Consequently, the classification of objects is based on the accessible information about them, not on the objects themselves. The notion of information systems was introduced by Pawlak [8] to represent knowledge about objects in a domain. In this paper, we use a special case of information systems called decision tables or data tables to represent data sets. In a decision table there is a designated attribute called the decision attribute, and the other attributes are called condition attributes. A decision attribute can be interpreted as a classification of the objects in the domain given by an expert. Given a decision table, the values of the decision attribute determine a partition on U. The problem of learning rules from examples is to find a set of classification rules, using condition attributes, that produces the partition generated by the decision attribute. An example of a decision table adapted from [13] is shown in Table 1, where the universe U consists of 28 objects or examples. The set of condition attributes is {A, B, C, E, F}, and D is the decision attribute with values 1, 2, and 3. The partition on U determined by the decision attribute D is X1 = {1, 2, 4, 8, 10, 15, 22, 25}, X2 = {3, 5, 11, 12, 16, 18, 19, 21, 23, 24, 27}, X3 = {6, 7, 9, 13, 14, 17, 20, 26, 28}, where Xi is the set of objects whose value of attribute D is i, for i = 1, 2, and 3. Note that Table 1 is an inconsistent decision table. Objects 8 and 12 have the same condition values (1, 1, 1, 1, 1), but their decision values are different: object 8 has decision value 1, while object 12 has decision value 2. Inconsistent data sets are also called noisy data sets. Such data sets are quite common in real-world situations, and they are an issue that must be addressed by machine learning algorithms. In the rough set approach, inconsistency is represented by the concepts of lower and upper approximations. Let A = (U, R) be an approximation space, where U is a nonempty set of objects and R is an equivalence relation defined on U. Let X be a nonempty subset of U. Then, the lower approximation of X by R in A is defined as


Table 1. Example of a decision table.

U : A B C E F D
1 : 0 0 1 0 0 1
2 : 1 1 1 0 0 1
3 : 0 1 0 0 0 2
4 : 1 0 0 0 1 1
5 : 0 1 0 0 0 2
6 : 1 0 0 0 1 3
7 : 0 0 0 1 1 3
8 : 1 1 1 1 1 1
9 : 0 0 0 1 1 3
10 : 0 0 1 0 0 1
11 : 1 1 1 0 0 2
12 : 1 1 1 1 1 2
13 : 1 1 0 1 1 3
14 : 1 1 0 0 1 3
15 : 0 0 1 1 1 1
16 : 1 1 0 1 1 2
17 : 0 0 0 1 1 3
18 : 0 0 0 0 0 2
19 : 0 0 0 0 0 2
20 : 1 1 1 0 0 3
21 : 1 1 0 0 1 2
22 : 0 0 1 0 1 1
23 : 1 1 1 0 0 2
24 : 0 0 1 1 1 2
25 : 1 0 1 0 1 1
26 : 1 0 1 0 1 3
27 : 1 0 1 0 1 2
28 : 1 1 1 1 0 3

R̲X = { e ∈ U | [e] ⊆ X }

and the upper approximation of X by R in A is defined as

R̄X = { e ∈ U | [e] ∩ X ≠ ∅ },

where [e] denotes the equivalence class containing e. The difference R̄X − R̲X is called the boundary set of X in A. A subset X of U is said to be R-definable in A if and only if R̲X = R̄X. The pair (R̲X, R̄X) defines a rough set in A, which is a family of subsets of U with the same lower and upper approximations R̲X and R̄X. In terms of decision tables, the pair (U, A) defines an approximation space. When a decision class Xi ⊆ U is inconsistent, it means that Xi is not A-definable. In this case, we can find classification rules from A̲Xi and ĀXi. These rules are called certain rules and possible rules, respectively [16]. Thus, the rough set approach can be used to learn rules from both consistent and inconsistent examples [17], [18].

3.2 Rough Multisets and Information Multisystems

The concepts of rough multisets and information multisystems were introduced by Grzymala-Busse [10]. The basic idea is to represent an information system using multisets [15].


Object identifiers, which are represented explicitly in an information system, are not represented in an information multisystem. Thus, the resulting data tables are more compact. More precisely, an information multisystem is a triple S = (Q, V, Q~), where Q is a set of attributes, V is the union of the domains of the attributes in Q, and Q~ is a multirelation on ×_{q∈Q} V_q. In addition, the concepts of lower and upper approximations in rough sets are extended to multisets. Let M be a multiset, and let e be an element of M whose number of occurrences in M is w. The sub-multiset {w⋅e} will be denoted by [e]_M. Thus M may be represented as the union of all [e]_M's where e is in M. A multiset [e]_M is called an elementary multiset in M. The empty multiset is elementary. A finite union of elementary multisets is called a definable multiset in M. Let X be a sub-multiset of M. Then, the lower approximation of X in M is the multiset defined as

X̲ = { e ∈ M | [e]_M ⊆ X }

and the upper approximation of X in M is the multiset defined as

X̄ = { e ∈ M | [e]_M ∩ X ≠ ∅ },

where the operations are multiset operations. Therefore, a rough multiset in M is the family of all sub-multisets of M having the same lower and upper approximations in M. Let P be a subset of Q; the projection of Q~ onto P is defined as the multirelation P~ obtained by deleting the columns corresponding to the attributes in Q − P. Note that Q~ and P~ have the same cardinality. Let X be a sub-multiset of P~. A P-lower approximation of X in S is the lower approximation X̲ of X in P~. A P-upper approximation of X in S is the upper approximation X̄ of X in P~. A multiset X in P~ is P-definable in S iff P̲X = P̄X. A multipartition χ on a multiset X is a multiset {X1, X2, …, Xn} of sub-multisets of X such that

Σ_{i=1}^{n} Xi = X,

where the sum of two multisets X and Y, denoted X + Y, is the multiset of all elements that are members of X or Y, the number of occurrences of each element e in X + Y being the sum of the number of occurrences of e in X and the number of occurrences of e in Y. Following [9], classifications are multipartitions on information multisystems generated with respect to subsets of attributes. Specifically, let S = (Q, V, Q~) be an information multisystem. Let A and B be subsets of Q with |A| = i and |B| = j. Let A~ be the projection of Q~ onto A. The subset B generates a multipartition BA on A~ defined as follows: two i-tuples determined by A are in the same multiset X in BA if and only if their associated j-tuples, determined by B, are equal. The multipartition BA is called a classification on A~ generated by B. Table 2 shows a multirelation representation of the data table given in Table 1, where the number of occurrences of each row is denoted by the integers in the W column. The projection of the multirelation onto the set P of attributes {A, B, C, E, F} is shown in Table 3.
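These multiset approximations can also be prototyped outside a database. The sketch below (ours, not the object-relational implementation discussed later in the paper) represents a multirelation as a collections.Counter mapping a tuple of attribute values to its number of occurrences (the W column).

from collections import Counter

def lower_approximation(M, X):
    # e is in the lower approximation iff the whole elementary multiset [e]_M
    # is contained in X, i.e. X holds all W occurrences of e.
    return Counter({e: w for e, w in M.items() if X.get(e, 0) == w})

def upper_approximation(M, X):
    # e is in the upper approximation (with multiplicity W) iff [e]_M meets X,
    # i.e. X holds at least one occurrence of e.
    return Counter({e: w for e, w in M.items() if X.get(e, 0) > 0})

# Toy multirelation over one attribute: value 'a' occurs 3 times, 'b' twice.
M = Counter({("a",): 3, ("b",): 2})
X = Counter({("a",): 3, ("b",): 1})
print(lower_approximation(M, X))  # Counter({('a',): 3})
print(upper_approximation(M, X))  # Counter({('a',): 3, ('b',): 2})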


Table 2. An information multisystem S.

A B C E F D W
0 0 0 0 0 2 2
0 0 0 1 1 3 3
0 0 1 0 0 1 2
0 0 1 0 1 1 1
0 0 1 1 1 1 1
0 0 1 1 1 2 1
0 1 0 0 0 2 2
1 0 0 0 1 1 1
1 0 0 0 1 3 1
1 0 1 0 1 1 1
1 0 1 0 1 2 1
1 0 1 0 1 3 1
1 1 0 0 1 2 1
1 1 0 0 1 3 1
1 1 0 1 1 2 1
1 1 0 1 1 3 1
1 1 1 0 0 1 1
1 1 1 0 0 2 2
1 1 1 0 0 3 1
1 1 1 1 0 3 1
1 1 1 1 1 1 1
1 1 1 1 1 2 1

~

Table 3. An information multisystem P . A 0 0 0 0 0 0 1 1 1 1 1 1 1

B 0 0 0 0 0 1 0 0 1 1 1 1 1

C 0 0 1 1 1 0 0 1 0 0 1 1 1

E 0 1 0 0 1 0 0 0 0 1 0 1 1

F 0 1 0 1 1 0 1 1 1 1 0 0 1

W 2 3 2 1 2 2 2 3 2 2 4 1 2

~ Let X be a sub-multiset of P with elements shown in Table 4. ~

Table 4. A sub-multiset X of P . A 0 1 1 1 0 0 1

B 0 1 0 1 0 0 0

C 1 1 0 1 1 1 1

E 0 0 0 1 1 0 0

F 0 0 1 1 1 1 1

W 2 1 1 1 1 1 1

Learning Rules from Very Large Databases Using Rough Multisets

65

Table 5. P-lower approximation of X. A 0 0

B 0 0

C 1 1

E 0 0

F 0 1

W 2 1

Table 6. P-upper approximation of X. A 0 1 1 1 0 0 1

B 0 1 0 1 0 0 0

C 1 1 0 1 1 1 1

E 0 0 0 1 1 0 0

F 0 0 1 1 1 1 1

W 2 4 2 2 2 1 3

~ The P-lower and P-upper approximations of X in P are shown in Table 5 and 6. ~ The classification of P generated by attribute D in S consists of three submultisets which are given in the following Tables 7, 8, and 9 which correspond to the cases where D = 1, D = 2, and D = 3, respectively. Table 7. Sub-multiset of the multipartition DP with D = 1. A 0 0 0 1 1 1 1

B 0 0 0 0 0 1 1

C 1 1 1 0 1 1 1

E 0 0 1 0 0 0 1

F 0 1 1 1 1 0 1

W 2 1 1 1 1 1 1

Table 8. Sub-multiset of the m ultipartition DP with D = 2. A 0 0 0 1 1 1 1 1

B 0 0 1 0 1 1 1 1

C 0 1 0 1 0 0 1 1

E 0 1 0 0 0 1 0 1

F 0 1 0 1 1 1 0 1

W 2 1 2 1 1 1 2 1

Table 9. Sub-multiset of the multipartition DP with D = 3. A 0 1 1 1 1 1 1

B 0 0 0 1 1 1 1

C 0 0 1 0 0 1 1

E 1 0 0 0 1 0 1

F 1 1 1 1 1 0 0

W 3 1 1 1 1 1 1

66

Chien-Chung Chan

3.3 Partition of Boundary Sets The relationship between rough set theory and Dempster-Shafer’s theory of evidence was first shown in [14] and further developed in [13]. The concept of partition of boundary sets was introduced in [13]. The basic idea is to represent an expert’s classification on a set of objects in terms of lower approximations and a partition on the boundary set. In information multisystems, the concept of boundary sets is represented by boundary multisets, which is defined as the difference of upper and lower approximations of a multiset. Thus, the partition of a boundary set can be extended as a multipartition on a boundary multiset. The computation of this multipartition will be discussed in next section.

4 Multiset Decision Tables 4.1 Basic Concepts The idea of multiset decision tables (MDT) was first informally introduced in [11]. We will formalize the concept in the following. Let S = (Q = C ∪ D, V, Q~ ) be an information multisystem, where C are condition attributes and D is a decision attribute. A multiset decision table is an ordered pair A = ( C~ , CD), where C~ is a projection of ~ ~ ~ Q onto C and CD is a multipartition on D generated by C in A. We will call C the LHS (Left Hand Side) and CD the RHS (Right Hand Side). Each sub-multiset in CD is represented by two vectors: a Boolean bit-vector and an integer vector. Similar representational scheme has been used in [19], [20], [21]. The size of each vector is the number of values in the domain VD of decision attribute D. The Boolean bit-vector labeled by Di’s denotes that a decision value Di is in a sub-multiset of CD iff Di = 1 and its number of occurrences is denoted in the integer vector entry labeled by wi. The information multisystem of Table 2 is represented as a multiset decision table in Table 10 with C = {A, B, C, E, F} and decision attribute D. The Boolean vector is denoted by [D1, D2, D3], and the integer vector is denoted by [w1, w2, w3]. Note that W = w1 + w2 + w3 on each row. Table 10. Example of MDT. A 0 0 0 0 0 0 1 1 1 1 1 1 1

B 0 0 0 0 0 1 0 0 1 1 1 1 1

C 0 0 1 1 1 0 0 1 0 0 1 1 1

E 0 1 0 0 1 0 0 0 0 1 0 1 1

F 0 1 0 1 1 0 1 1 1 1 0 0 1

W 2 3 2 1 2 2 2 3 2 2 4 1 2

D1 0 0 1 1 1 0 1 1 0 0 1 0 1

D2 1 0 0 0 1 1 0 1 1 1 1 0 1

D3 0 1 0 0 0 0 1 1 1 1 1 1 0

w1 0 0 2 1 1 0 1 1 0 0 1 0 1

w2 2 0 0 0 1 2 0 1 1 1 2 0 1

w3 0 3 0 0 0 0 1 1 1 1 1 1 0

Learning Rules from Very Large Databases Using Rough Multisets

67

4.2 Properties of Multiset Decision Tables Based on multiset decision table representation, we can use relational operations on the table to compute the concepts of rough sets reviewed in Section 3. Let A be a multiset decision table. We will show how to determine the lower and upper approximations of decision classes and partitions of boundary multisets from A. The lower approximation of Di in terms of the LHS columns is defined as the multiset where Di = 1 and W = wi, and the upper approximation of Di is defined as the multiset where Di = 1 and W >= wi, or simply Di = 1. The boundary multiset of Di is defined as the multiset where Di = 1 and W > wi. The multipartition of boundary multisets can be identified by a equivalence multirelation defined over the Boolean vector denoted by the decision-value columns D1, D2, and D3. It is clear that one row of a multiset decision table is in some boundary multiset if and only if the sum over D1, D2, and D3 of the row is greater than 1. Therefore, to compute the multipartition of boundary multisets, we will first identify those rows with D1 + D2 + D3 > 1, then the rows in the multirelation over D1, D2, and D3 define blocks of the multipartition of the boundary multisets. The above computations are shown in the following example. Example: Consider the decision class D1 in Table 10. The C-lower approximation of D1 is the multiset that satisfies D1 = 1 and W = w1, in table form we have: Table 11. C-lower approximation of D1. A 0 0

B 0 0

C 1 1

E 0 0

F 0 1

W 2 1

The C-upper approximation of D1 is the multiset that satisfies D1 = 1, in table form we have: Table 12. C-upper approximation of D1. A 0 0 0 1 1 1 1

B 0 0 0 0 0 1 1

C 1 1 1 0 1 1 1

E 0 0 1 0 0 0 1

F 0 1 1 1 1 0 1

W 2 1 2 2 3 4 2

To determine the partition of boundary multisets, we use the following two steps. Step 1. Identify rows with D1 + D2 + D3 > 1, we have the following multiset in table form: Table 13. Elements in the boundary sets. A

B

C

E

F

W

D1

D2

D3

0 1 1 1 1 1 1

0 0 0 1 1 1 1

1 0 1 0 0 1 1

1 0 0 0 1 0 1

1 1 1 1 1 0 1

2 2 3 2 2 4 2

1 1 1 0 0 1 1

1 0 1 1 1 1 1

0 1 1 1 1 1 0

68

Chien-Chung Chan

Step 2. Grouping the above table in terms of D1, D2, and D3, we have the following blocks in the partition. Table 14 shows the block where D1 = 1 and D2 = 1 and D3 = 0, i.e., (1 1 0): Table 14. The block denotes D = {1, 2}. A 0 1

B 0 1

C 1 1

E 1 1

F 1 1

W 2 2

D1 1 1

D2 1 1

D3 0 0

Table 15 shows the block where D1 = 1 and D2 = 0 and D3 = 1, i.e., (1 0 1): Table 15. The block denotes D = {1, 3}. A 1

B 0

C 0

E 0

F 1

W 2

D1 1

D2 0

D3 1

Table 16 shows the block where D1 = 0 and D2 = 1 and D3 = 1, i.e., (0 1 1): Table 16. The block denotes D = {2, 3}. A 1 1

B 1 1

C 0 0

E 0 1

F 1 1

W 2 2

D1 0 0

D2 1 1

D3 1 1

Table 17 shows the block where D1 = 1 and D2 = 1 and D3 = 1, i.e., (1 1 1): Table 17. The block denotes D = {1, 2, 3}. A 1 1

B 0 1

C 1 1

E 0 0

F 1 0

W 3 4

D1 1 1

D2 1 1

D3 1 1

From the above example, it is clear that an expert’s classification on the decision attribute D can be obtained by grouping similar values over columns D1, D2, and D3 and by taking the sum over the W column in a multiset decision table. Based on this grouping and summing operation, we can derive a basic probability assignment (bpa) function as required in Dempster-Shafer theory for computing belief functions. This is shown in Table 18. Table 18. Grouping over D1, D2, D3 and sum over W. D1 1 0 0 0 1 1 1

D2 0 1 0 1 0 1 1

D3 0 0 1 1 1 0 1

W 3 4 4 4 2 4 7

Learning Rules from Very Large Databases Using Rough Multisets

69

Let Θ = {1, 2, 3}. Table 19 shows the basic probability assignment function derived from the information multisystem shown in Table 2. The computation is based on the partition of boundary multisets shown in Table 18. Table 19. The bpa derived from Table 2. X m(X)

{1} 3/28

{2} 4/28

{3} 4/28

{1, 2} 4/28

{1, 3} 2/28

{2, 3} 4/28

{1, 2, 3} 7/28

5 Learning Rules From MDT 5.1 LERS-M (Learning Rules from Examples Using Rough MultiSets) In this section, we will present an algorithm LERS-M for learning production rules from a database table based on multiset decision table. A multiset decision table can be computed directly using typical SQL commands from a database table once the condition and decision attributes are specified. For efficiency reason, we will associate entries in an MDT with a sequence of integer numbers. This can be accomplished by using extensions to relational database management system such as the UDF (User Defined Functions) and UDT (User defined Data Type) available on IBM’s DB2 [22]. The emphasis of this paper is more on algorithms, implementation details will be covered somewhere else. The basic idea of LERS-M is to generate a multiset decision table with a sequence of integer numbers. Then, for each value di of the decision attribute D, the upper approximation of di, UPPER(di), is computed, and a set of rules is generated for each UPPER(di). The algorithm LERS-M is given in the following. The detail for generation of rules is presented in Section 5.2. procedure LERS-M Inputs: a table S with condition attributes C1, C2, …, Cn and decision attribute D. Outputs: a set of production rules represented as a multiset data table. begin Create a Multiset Decision Table (MDT) from S with sequence numbers; for each decision value di of D do begin find the upper approximation UPPER(di) of di; Generate rules for UPPER(di); end; end; 5.2 Rule Generation Strategy The basic idea of rule generation is to create an AVT (Attribute-Value pairs Table) table containing all a-v pairs appeared in the set UPPER(di). Then, we will partition the a-v pairs into different groups based on a grouping criterion such as degree of relevancy, which is also used to rank the groups. The left hand sides of rules are identi-

70

Chien-Chung Chan

fied by taking conjunctions of a-v pairs within the same group (intra-group conjuncts) and by taking natural join over different groups (inter-group conjuncts). Strategies for generating and validating candidate conjuncts are encapsulated in a module called GenerateAndTestConjuncts. Once a set of valid conjuncts is identified, minimal conjuncts can be generated using the method of dropping conditions. The process of rule generation is an iterative one. It starts with the set UPPER(di) as an initial TargetSet. In each iteration, a set of rules is generated, and the instances covered by the rule-set are removed from the TargetSet. It stops when all instances in UPPER(di) are covered by the generated rules. In LERS-M, the stopping condition is guaranteed by the fact that upper approximations are always definable based on the theory of rough sets. The above strategy is presented in the following procedures RULE_GEN, GroupAVT, and GenerateAndTestConjuncts. A working example will be given in next section. Specifically, we have adopted the following notions. The extension of an a-v pair (a, v) denoted by [(a, v)], i.e., the set of instances covered by the a-v pair, is a subset of the sequence numbers in the original MDT. The extension of an a-v pair is encoded by a Boolean bit-vector. A conjunct is a nonempty finite set of a-v pairs. The extension of a conjunct is the intersection of extensions of all the a-v pairs in the conjunct. Note that the extension of a group of conjunct is the union of extensions of all the conjuncts in the group, and the extension of an empty group of conjuncts is an empty set. procedure RULE_GEN Inputs: an upper approximation of a decision value di, UPPER(di) and an MDT. Outputs: a set of rules for UPPER(di) represented as a multiset decision table. begin TargetSet := UPPER(di); Ruleset := empty set; Select a grouping criteria G := degree of relevance; Create an a-v pair table AVT contains all a-v pairs appeared in UPPER(di); while TargetSet is not empty do begin AVT := GroupAVT(G, TargetSet); NewRules := GenerateAndTestConjuncts(AVT, UPPER(di)); RuleSet := RuleSet + NewRules; TargetSet := TargetSet – [NewRules]; end; minimalCover(RuleSet); /* applying dropping condition technique to remove redundant rules from RuleSet linearly starting from the first rule to the last rule in the set */ end; // RULE_GEN procedure GroupAVT Inputs: a grouping criterion such as degree of relevance and a subset of the upper approximation of a decision value di. Outputs: a list of groups of equivalent a-v pairs relevant to the target set. begin Initialize the AVT table to be empty;

Learning Rules from Very Large Databases Using Rough Multisets

71

Select a subtable T from the target set where decision value = di; Create a query to get a vector of condition attributes from the subtable T; for each condition attribute do /* Generate distinct values for each condition attribute */ begin Create query string to select distinct values; for each distinct value do begin Create a query string to select count of occurrences; relevance := count of occurrences; if (relevance > 0) Add the condition-value pair to AVT table; end;// for each distinct value end; // end of for each condition Select the list of distinct values of the relevance column; Sort the list of distinct values in descending order; Use the list of distinct values to generate a list of groups of a-v pairs; end; // GroupAVT procedure GenerateAndTestConjuncts Inputs: a list AVT of groups of equivalent a-v pairs and the upper approximation of decision value di. Outputs: a set of rules. begin RuleList := ∅; CarryOverList := ∅; // a list of groups of a-v pairs CandidateList := ∅; // a list of TargetSet := UPPER(di); // Generate Candidate List repeat L := getNext(AVT); // L is a list of equivalent a-v pairs if (L is empty) then break; if ([conjunct(L)] ⊆ TargetSet) then Add conjunct(L) to CandidateList; /* conjunct(L) returns a conjunction of all a-v pairs in L */ if (CarryOverList is empty) then Add all a-v pairs in L to CarryOverList else begin FilterList := ∅; Add join(CarryOverList, L) to FilterList; /*join is a function that creates new lists of a-v pairs by taking and joining one element each from the CarryOverList and L */ CarryOverList := ∅; for each list in FilterList do if ([list] ⊆ TargetSet) then Add list to CandidateList


        else Add list to CarryOverList;
    end;
  until (CandidateList is not empty);
  // Test CandidateList
  for each list in CandidateList do
  begin
    list := minimalConjunct(list); /* apply dropping conditions to get a minimal list of a-v pairs */
    Add list to RuleList;
  end;
  return RuleList;
end; // GenerateAndTestConjuncts

Example. Consider the information multisystem in Table 2 as input to LERS-M. The result of generating an MDT with sequence numbers is shown in Table 20.

Table 20. MDT with sequence numbers.

Seq   A  B  C  E  F  W  D1  D2  D3  w1  w2  w3
  1   0  0  0  0  0  2   0   1   0   0   2   0
  2   0  0  0  1  1  3   0   0   1   0   0   3
  3   0  0  1  0  0  2   1   0   0   2   0   0
  4   0  0  1  0  1  1   1   0   0   1   0   0
  5   0  0  1  1  1  2   1   1   0   1   1   0
  6   0  1  0  0  0  2   0   1   0   0   2   0
  7   1  0  0  0  1  2   1   0   1   1   0   1
  8   1  0  1  0  1  3   1   1   1   1   1   1
  9   1  1  0  0  1  2   0   1   1   0   1   1
 10   1  1  0  1  1  2   0   1   1   0   1   1
 11   1  1  1  0  0  4   1   1   1   1   2   1
 12   1  1  1  1  0  1   0   0   1   0   0   1
 13   1  1  1  1  1  2   1   1   0   1   1   0

The C-upper approximation of the class D = 1 is the sub-MDT shown in Table 21.

Table 21. Table of UPPER(D1).

Seq   A  B  C  E  F  W  D1  D2  D3  w1  w2  w3
  3   0  0  1  0  0  2   1   0   0   2   0   0
  4   0  0  1  0  1  1   1   0   0   1   0   0
  5   0  0  1  1  1  2   1   1   0   1   1   0
  7   1  0  0  0  1  2   1   0   1   1   0   1
  8   1  0  1  0  1  3   1   1   1   1   1   1
 11   1  1  1  0  0  4   1   1   1   1   2   1
 13   1  1  1  1  1  2   1   1   0   1   1   0
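Before walking through RULE_GEN on this example, it may help to note that the sub-MDT of Table 21 can be obtained mechanically from Table 20: assuming, as Table 21 suggests, that a row belongs to UPPER(D1) exactly when its occurrence count w1 is positive, the selection is a one-line filter. The sketch below is illustrative Python, not the SQL used in the actual implementation, and the names are ours.

```python
# Minimal sketch: selecting the upper approximation UPPER(D1) from the MDT of Table 20.
# Each row is (seq, A, B, C, E, F, W, D1, D2, D3, w1, w2, w3); the data follows Table 20.

MDT = [
    (1, 0,0,0,0,0, 2, 0,1,0, 0,2,0),
    (2, 0,0,0,1,1, 3, 0,0,1, 0,0,3),
    (3, 0,0,1,0,0, 2, 1,0,0, 2,0,0),
    (4, 0,0,1,0,1, 1, 1,0,0, 1,0,0),
    (5, 0,0,1,1,1, 2, 1,1,0, 1,1,0),
    (6, 0,1,0,0,0, 2, 0,1,0, 0,2,0),
    (7, 1,0,0,0,1, 2, 1,0,1, 1,0,1),
    (8, 1,0,1,0,1, 3, 1,1,1, 1,1,1),
    (9, 1,1,0,0,1, 2, 0,1,1, 0,1,1),
    (10,1,1,0,1,1, 2, 0,1,1, 0,1,1),
    (11,1,1,1,0,0, 4, 1,1,1, 1,2,1),
    (12,1,1,1,1,0, 1, 0,0,1, 0,0,1),
    (13,1,1,1,1,1, 2, 1,1,0, 1,1,0),
]

W1_COL = 10  # index of the w1 column (occurrence count of the decision value D = 1)

def upper_approximation(mdt, count_col):
    """Rows whose occurrence count for the decision value is positive."""
    return [row for row in mdt if row[count_col] > 0]

upper_d1 = upper_approximation(MDT, W1_COL)
print(sorted(row[0] for row in upper_d1))   # -> [3, 4, 5, 7, 8, 11, 13], the rows of Table 21
```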

The following is how RULE_GEN will generate rules for UPPER(D1). Table 22 shows the AVT table created by procedure GroupAVT before sorting is applied to the


table to generate the final list of groups of equivalent a-v pairs. The grouping criterion used is based on the size of the intersection between the extension of an a-v pair and the set UPPER(D1). Each entry in the Relevance column denotes the number of rows in the UPPER(D1) table matched by the a-v pair. For example, a relevance of 3 for (A, 0) means that there are three rows in UPPER(D1) that satisfy A = 0. The ranking of a-v pairs is based on maximum degree of relevance, i.e., a larger relevance number has higher priority. The ranks are listed in ascending order, i.e., a smaller rank number has higher priority. The encoding for extensions of a-v pairs in the AVT is shown in Table 23, and the target set UPPER(D1) = {3, 4, 5, 7, 8, 11, 13} is considered with the encoding (0, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1).

Table 22. AVT table created from UPPER(D1).

Name  Value  Relevance  Rank
 A      0        3        4
 A      1        4        3
 B      0        5        2
 B      1        2        5
 C      0        1        6
 C      1        6        1
 E      0        5        2
 E      1        2        5
 F      0        2        5
 F      1        5        2

Table 23. Extensions of a-v pairs encoded as Boolean bit-vectors.

N  V    1  2  3  4  5  6  7  8  9 10 11 12 13
A  0    1  1  1  1  1  1  0  0  0  0  0  0  0
A  1    0  0  0  0  0  0  1  1  1  1  1  1  1
B  0    1  1  1  1  1  0  1  1  0  0  0  0  0
B  1    0  0  0  0  0  1  0  0  1  1  1  1  1
C  0    1  1  0  0  0  1  1  0  1  1  0  0  0
C  1    0  0  1  1  1  0  0  1  0  0  1  1  1
E  0    1  0  1  1  0  1  1  1  1  0  1  0  0
E  1    0  1  0  0  1  0  0  0  0  1  0  1  1
F  0    1  0  1  0  0  1  0  0  0  0  1  1  0
F  1    0  1  0  1  1  0  1  1  1  1  0  0  1
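Tables 22 and 23 can be reproduced mechanically from Table 20. The following Python sketch is illustrative only (it is not the SQL-based implementation of LERS-M, and the helper names encode_extensions and group_by_relevance are ours); it computes the extension of every a-v pair, its relevance with respect to UPPER(D1), and the resulting groups of equivalent a-v pairs.

```python
# Sketch: encoding extensions of a-v pairs and grouping them by relevance (cf. Tables 22 and 23).
# The columns A, B, C, E, F are taken from Table 20 (cases 1..13).

MDT = {
    "A": [0,0,0,0,0,0,1,1,1,1,1,1,1],
    "B": [0,0,0,0,0,1,0,0,1,1,1,1,1],
    "C": [0,0,1,1,1,0,0,1,0,0,1,1,1],
    "E": [0,1,0,0,1,0,0,0,0,1,0,1,1],
    "F": [0,1,0,1,1,0,1,1,1,1,0,0,1],
}
UPPER_D1 = {3, 4, 5, 7, 8, 11, 13}

def encode_extensions(mdt):
    """Extension [(a, v)] of every a-v pair as a set of sequence numbers (a bit-vector in spirit)."""
    ext = {}
    for attr, column in mdt.items():
        for seq, value in enumerate(column, start=1):
            ext.setdefault((attr, value), set()).add(seq)
    return ext

def group_by_relevance(ext, target):
    """Group a-v pairs by |[(a, v)] ∩ target|, largest relevance first; ties share one group."""
    relevance = {pair: len(cases & target) for pair, cases in ext.items() if cases & target}
    groups = {}
    for pair, rel in relevance.items():
        groups.setdefault(rel, []).append(pair)
    return [sorted(groups[rel]) for rel in sorted(groups, reverse=True)]

extensions = encode_extensions(MDT)
print(len(extensions[("A", 0)] & UPPER_D1))        # relevance of (A, 0) is 3, as in Table 22
for group in group_by_relevance(extensions, UPPER_D1):
    print(group)
# first group: [('C', 1)]; second: [('B', 0), ('E', 0), ('F', 1)]; then [('A', 1)], [('A', 0)], ...
```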

Based on the Rank column of the AVT table shown in Table 22, the a-v pairs are grouped into the following six groups, listed from higher to lower rank:

{(C, 1)}
{(B, 0), (E, 0), (F, 1)}
{(A, 1)}
{(A, 0)}
{(B, 1), (E, 1), (F, 0)}
{(C, 0)}

Candidate conjuncts are generated and tested by the GenerateAndTestConjuncts procedure based on the above list. The basic strategy used here is to generate the intra-group conjuncts first, followed by the inter-group conjuncts. The procedure proceeds sequentially from the highest-ranked group downward and stops when at least one rule is found. The heuristic employed here is to try to find rules with maximum coverage of instances in UPPER(di).
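The generate-and-test step for the first two groups can be expressed directly over extensions: intra-group conjuncts are formed within a group, inter-group conjuncts by joining one a-v pair from each group, and a candidate is valid when its extension is contained in the target set. The sketch below is a simplified, set-based reading of GenerateAndTestConjuncts (not the SQL implementation; the helper names are ours).

```python
# Sketch: generating and testing candidate conjuncts for the first two groups of the example.
# Extensions are taken from Table 23; the target set is UPPER(D1) of Table 21.

TARGET = {3, 4, 5, 7, 8, 11, 13}
EXT = {
    ("C", 1): {3, 4, 5, 8, 11, 12, 13},
    ("B", 0): {1, 2, 3, 4, 5, 7, 8},
    ("E", 0): {1, 3, 4, 6, 7, 8, 9, 11},
    ("F", 1): {2, 4, 5, 7, 8, 9, 10, 13},
}

def extension(conjunct):
    """Extension of a conjunct = intersection of the extensions of its a-v pairs."""
    return set.intersection(*(EXT[p] for p in conjunct))

def generate_and_test(group1, group2, target):
    """Intra-group conjuncts first, then inter-group joins of group1 with group2."""
    candidates = []
    for group in (group1, group2):
        if extension(group) <= target:          # conjunct of a whole group
            candidates.append(tuple(group))
    for p in group1:                            # inter-group join: one pair from each group
        for q in group2:
            if extension((p, q)) <= target:
                candidates.append((p, q))
    return candidates

group1 = [("C", 1)]
group2 = [("B", 0), ("E", 0), ("F", 1)]
for conj in generate_and_test(group1, group2, TARGET):
    print(conj, sorted(extension(conj)))
# (('B', 0), ('E', 0), ('F', 1)) [4, 7, 8]
# (('C', 1), ('B', 0)) [3, 4, 5, 8]
# (('C', 1), ('E', 0)) [3, 4, 8, 11]
# (('C', 1), ('F', 1)) [4, 5, 8, 13]
```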


In our example, the first group contains only one a-v pair, (C, 1); therefore, there is no need to generate intra-group conjuncts. From Table 21, we can see that [{(C, 1)}] is not a subset of UPPER(D1). Thus, an inter-group join is needed. In addition, the conjunct of the second group {(B, 0), (E, 0), (F, 1)} is also included in the candidate list. This results in the following list of candidate conjuncts, which are listed with their corresponding extensions:

[{(C, 1), (B, 0)}] = {3, 4, 5, 8}
[{(C, 1), (E, 0)}] = {3, 4, 8, 11}
[{(C, 1), (F, 1)}] = {4, 5, 8, 13}
[{(B, 0), (E, 0), (F, 1)}] = {4, 7, 8}

Following the generating stage, a testing stage is performed to identify valid conjuncts. Because all the conjuncts are valid, i.e., their extensions are subsets of UPPER(di), four new rules are found in this iteration. The next step is to find minimal conjuncts by using the dropping-condition method. Consider the conjunction of {(B, 0), (E, 0), (F, 1)}. Dropping the a-v pair (B, 0) from the group, we have [{(E, 0), (F, 1)}] = {4, 7, 8, 9}, which is not a subset of the TargetSet {3, 4, 5, 7, 8, 11, 13}. Next, trying to drop the a-v pair (E, 0) from the group, we have [{(B, 0), (F, 1)}] = {2, 4, 5, 7, 8}, which is not a subset of the TargetSet {3, 4, 5, 7, 8, 11, 13}. Finally, trying to drop the a-v pair (F, 1) from the group, we have [{(B, 0), (E, 0)}] = {1, 3, 4, 7, 8}, which is not a subset of the TargetSet {3, 4, 5, 7, 8, 11, 13}. We can conclude that the conjunction of {(B, 0), (E, 0), (F, 1)} contains no redundant a-v pairs, and it is a minimal conjunct. Similarly, it can be verified that the conjuncts {(C, 1), (B, 0)}, {(C, 1), (E, 0)}, and {(C, 1), (F, 1)} are minimal. All minimal conjuncts found are added to the new rule set R. Thus, we have the extension [R] of the new rules as [R] = [{(C, 1), (B, 0)}] + [{(C, 1), (E, 0)}]

+ [{(C, 1), (F, 1)}] + [{(B, 0), (E, 0), (F, 1)}] = {3, 4, 5, 7, 8, 11, 13}. The target set is updated by the following TargetSet = {3, 4, 5, 7, 8, 11, 13} – [R] = empty set. Therefore, we have found the rule set. The last step in procedure RULE_GEN is to remove redundant rules from the rule set. The basic idea is similar to finding minimal conjuncts. Here, we try to remove one rule at a time and to test if the remaining rules cover all examples of the target set. More specifically, we try to remove the conjunct {(C, 1), (B, 0)} from the collection. Then, we have [R] = [{(C, 1), (E, 0)}] + [{(C, 1), (F, 1)}] + [{(B, 0), (E, 0), (F, 1)}]

= {3, 4, 5, 7, 8, 11, 13} = TargetSet = {3, 4, 5, 7, 8, 11, 13}. Therefore, the conjunct {(C, 1), (B, 0)} is redundant and is removed from the rule set. Next, we try to remove the conjunct {(C, 1), (E, 0)} from the rule set, we have


[R] = [{(C, 1), (F, 1)}] + [{(B, 0), (E, 0), (F, 1)}]

= {4, 5, 7, 8, 13} ≠ TargetSet. Therefore, the conjunct {(C, 1), (E, 0)} is not redundant, and it is kept in the rule set. Similarly, it can be verified that both conjuncts {(C, 1), (F, 1)} and {(B, 0), (E, 0), (F, 1)} are not redundant. The resulting rule set is shown in Table 24, where w1, w2, and w3 are column sums extracted from the table UPPER(D1) of Table 21.

Table 24. Rules generated for UPPER(D1).

A     B     C     E     F     D   w1  w2  w3
null  null  1     0     null  1    5   3   2
null  null  1     null  1     1    4   3   1
null  0     null  0     1     1    3   1   2
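The two pruning steps applied in the example — dropping conditions inside a conjunct (minimalConjunct) and linearly removing redundant rules (minimalCover) — can be sketched as follows. This is an illustrative stand-in, not the actual implementation; both helpers work only on extensions and assume that a rule may be discarded whenever the remaining rules still cover the target set.

```python
# Sketch: dropping-condition minimization of a conjunct and linear removal of redundant rules.

def extension_of(conjunct, ext):
    """Intersection of the extensions of the a-v pairs in the (nonempty) conjunct."""
    return set.intersection(*(ext[p] for p in conjunct))

def minimal_conjunct(conjunct, target, ext):
    """Drop a-v pairs one by one while the remaining conjunct still describes a subset of target."""
    pairs = list(conjunct)
    for p in list(pairs):
        rest = [q for q in pairs if q != p]
        if rest and extension_of(rest, ext) <= target:
            pairs = rest
    return pairs

def minimal_cover(rules, target, ext):
    """Linearly remove rules whose deletion leaves the target set fully covered."""
    kept = list(rules)
    for rule in list(kept):
        others = [r for r in kept if r is not rule]
        covered = set().union(*(extension_of(r, ext) for r in others)) if others else set()
        if covered == target:
            kept = others
    return kept

EXT = {("C", 1): {3, 4, 5, 8, 11, 12, 13}, ("B", 0): {1, 2, 3, 4, 5, 7, 8},
       ("E", 0): {1, 3, 4, 6, 7, 8, 9, 11}, ("F", 1): {2, 4, 5, 7, 8, 9, 10, 13}}
TARGET = {3, 4, 5, 7, 8, 11, 13}

print(minimal_conjunct([("B", 0), ("E", 0), ("F", 1)], TARGET, EXT))
# all three pairs survive: the conjunct is already minimal
rules = [[("C", 1), ("B", 0)], [("C", 1), ("E", 0)], [("C", 1), ("F", 1)],
         [("B", 0), ("E", 0), ("F", 1)]]
print(minimal_cover(rules, TARGET, EXT))
# the first rule {(C,1),(B,0)} is removed; the remaining three rules are kept, as in Table 24
```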

The LERS-M algorithm tries to find only one minimal set of rules; it does not try to find all minimal sets of rules.

5.3 Discussion

There are several advantages to developing LERS-M using relational database technology. Relational database systems are highly optimized and scale well to large amounts of data. They are very portable, and they provide smooth integration with OLAP and data warehousing systems. However, one typical disadvantage of an SQL implementation is extra computational overhead. Experiments are needed to identify the impact of this overhead on the performance of LERS-M.

When a database is very large, we can divide it into n smaller databases and run LERS-M on each of them. Similarly, this scheme can be applied to homogeneous distributed databases. To integrate the distributed answers provided by multiple LERS-M programs, we can take the sum over the numbers of occurrences (i.e., w1, w2, and w3 in the previous example) provided by the local LERS-M programs. When a single answer is desirable, the decision value Di with the maximum sum of wi can be returned, or the entire vector of numbers of occurrences can be returned as the answer. It is possible to develop other inference mechanisms that make use of the numbers of occurrences when performing classification.

Based on our discussion, there are two major parameters of LERS-M, namely, the grouping criterion and the generation of conjuncts. New criteria and heuristics based on numerical measures such as the Gini index and the entropy function may be used. In this paper, we have used the minimal length criterion for the generation of candidate conjuncts. The search strategy is not exhaustive, and it stops when at least one candidate conjunct is identified. There is room for developing more extensive and efficient strategies for generating candidate conjuncts. The proposed algorithm is under implementation on IBM's DB2 database system running on Red Hat Linux, with a web-based interface implemented using Java servlets and JSP. Performance evaluation and comparison to systems based on classical rough set methods will need further work.
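The integration scheme for distributed answers described above amounts to a component-wise sum of occurrence-count vectors. A minimal sketch, with hypothetical counts and function names of our choosing, is:

```python
# Sketch: integrating distributed answers by summing occurrence counts per decision value.
# Each local LERS-M instance is assumed to return a mapping decision value -> number of occurrences.

def integrate(local_answers):
    """Component-wise sum of the occurrence counts returned by local LERS-M programs."""
    total = {}
    for answer in local_answers:
        for decision_value, count in answer.items():
            total[decision_value] = total.get(decision_value, 0) + count
    return total

def single_answer(total):
    """Decision value Di with the maximum summed count wi."""
    return max(total, key=total.get)

# Hypothetical counts (w1, w2, w3) reported by three local instances for one new case:
local_answers = [{"D1": 5, "D2": 3, "D3": 2},
                 {"D1": 1, "D2": 4, "D3": 0},
                 {"D1": 2, "D2": 0, "D3": 3}]
total = integrate(local_answers)
print(total)                 # {'D1': 8, 'D2': 7, 'D3': 5} -- the full vector can be returned
print(single_answer(total))  # 'D1' -- or the single decision value with the maximum sum
```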


6 Conclusions

In this paper we have formulated the concept of multiset decision tables based on the concept of information multisystems. The concept is then used to develop an algorithm, LERS-M, for learning rules from databases. Based on the concept of the partition of boundary sets, we have shown that it is straightforward to compute basic probability assignment functions of the Dempster-Shafer theory from multiset decision tables. A nice feature of multiset decision tables is that we can use the sum over the numbers of occurrences of decision values as a simple mechanism for integrating distributed answers. Developing LERS-M on top of relational database technology will make the system scalable and portable. Our next step is to evaluate the time and space complexities of LERS-M over very large data sets. It would be interesting to compare the SQL-based implementation to classical rough set methods for learning rules from very large data sets. In addition, we have considered only homogeneous data tables, which may be very large or distributed. Generalization to multiple heterogeneous tables needs further work.

References 1. Sarawagi, S., S. Thomas, and R. Agrawal, “Integrating association rule mining with relational database systems: alternatives and implications,” Data Mining and Knowledge Discovery, 4, 89–125, (2000). 2. Agrawal, R. and K. Shim, “Developing tightly-coupled data mining applications on a relational database systems,” Proc. of the 2nd Int. Conference on Knowledge Discovery in Databases and Data Mining, Portland, Oregon, (1996). 3. Wang, M., B. Iyer, and J.S. Vitter, “Scalable mining for classification rules in relational databases,” IDEAS, 58-67, (1998). 4. Fernández-Baizán, M.C., Menasalvas Ruiz E., Peña Sánchez J.M., Pardo Pastrana B., “Integrating KDD Algorithms and RDBMS Code,” Rough Sets and Current Trends in Computing (1998), 210-213. 5. Stolfo, S., A. Prodromidis, S. Tselepis, W. Lee, W. Fan, and P. Chan, “JAM: Java agents for meta-learning over distributed databases,” Proc. Third Intl. Conf. Knowledge Discovery and Data Mining, 74-81, (1997). 6. Grzymala-Busse, J.W., “The LERS family of learning systems based on rough sets,” Proc. of the 3rd Midwest Artificial Intelligence and Cognitive Science Society Conference, Carbondale, IL, April 12-14, 103-107, (1991). 7. Pawlak, Z., “Rough sets: basic notion,” Int. J. of Computer and Information Science 11, 344-56, (1982). 8. Pawlak, Z., “Rough sets and decision tables,” Lecture Notes in Computer Science 208, 186196, Berlin, Heidelberg, Springer-Verlag, (1985). 9. Pawlak, Z., J. Grzymala-Busse, R. Slowinski, and W. Ziarko, “Rough sets,” Communication of ACM, Vol. 38, No. 11, November, (1995), 89-95. 10. Grzymala-Busse, J.W., “Learning from examples based on rough multisets,” Proc. of the 2nd Int. Symposium on Methodologies for Intelligent Systems, Charlotte, North Carolina, October 14-17, 325-332, (1987). 11. Chan, C.-C., “Distributed incremental data mining from very large databases: a rough multiset approach,” Proc. the 5th World Multi-Conference on Systemics, Cybernetics and Informatics, SCI 2001, Orlando, Florida, July 22-25, (2001), 517-522.


12. Shafer, G., A Mathematical Theory of Evidence. Princeton, NJ, Princeton University Press, (1976). 13. Skowron, A. and J. Grzymala-Busse, “From rough set theory to evidence theory.” in Advances in the Dempster-Shafer Theory of Evidence, edited by R. R. Yager, J. Kacprzyk, and M. Fedrizzi, 193-236, John Wiley & Sons, Inc, New York, (1994). 14. Grzymala-Busse, J.W., “Rough set and Dempster-Shafer approaches to knowledge acquisition under uncertainty - a comparison,” manuscript, (1987). 15. Knuth, D.E., The Art of Computer Programming. Vol. III, Sorting and Searching. AddisonWesley, (1973). 16. Grzymala-Busse, J.W., “Knowledge acquisition under uncertainty: a rough set approach,” J. of Intelligent and Robotic Systems, Vol. 1, 3-16, (1988). 17. Chan, C.-C., “Incremental learning of production rules from examples under uncertainty: a rough set approach,” Int. J. of Software Engineering and Knowledge Engineering, Vol. 1, No. 4, 439 - 461, (1991). 18. Grzymala-Busse, J.W., Managing Uncertainty in Expert Systems. Morgan Kaufmann Pub., San Mateo, CA, (1991). 19. Hu, X., T.Y. Lin, E. Louie, “Bitmap techniques for optimizing decision support queries and association rule algorithms,” IDEAS, (2003), pp. 34-43. 20. Kryszkiewicz, M., “Rough Set Approach to Rules Generation from Incomplete Information Systems,” In The Encyclopedia of Computer Science and Technology, Marcel Dekker, Inc., New York, Vol. 44, 319–346, (2001). 21. Ślęzak, D., “Various approaches to reasoning with frequency based decision reducts: a survey,” in Rough Set Methods and Applications, L. Polkowski, S. Tsumoto, T.Y. Lin (eds.), Physica-Verlag, Heidelberg, New York, (2000). 22. Chamberlin, D. A Complete Guide to DB2 Universal Database. Morgan Kaufmann Publishers. (1998).

Data with Missing Attribute Values: Generalization of Indiscernibility Relation and Rule Induction

Jerzy W. Grzymala-Busse

Department of Electrical Engineering and Computer Science, University of Kansas Lawrence, KS 66045, USA Institute of Computer Science, Polish Academy of Sciences, 01-237 Warsaw, Poland [email protected] http://lightning.eecs.ku.edu/index.html

Abstract. Data sets, described by decision tables, are incomplete when for some cases (examples, objects) the corresponding attribute values are missing, e.g., are lost or represent “do not care” conditions. This paper shows an extremely useful technique to work with incomplete decision tables using a block of an attribute-value pair. Incomplete decision tables are described by characteristic relations in the same way complete decision tables are described by indiscernibility relations. These characteristic relations are conveniently determined by blocks of attribute-value pairs. Three diﬀerent kinds of lower and upper approximations for incomplete decision tables may be easily computed from characteristic relations. All three deﬁnitions are reduced to the same deﬁnition of the indiscernibility relation when the decision table is complete. This paper shows how to induce certain and possible rules for incomplete decision tables using MLEM2, an outgrowth of the rule induction algorithm LEM2, again using blocks of attribute-value pairs. Additionally, MLEM2 may induce rules from incomplete decision tables with numerical attributes as well.

1 Introduction

We will assume that data sets are presented as decision tables. In such a table columns are labeled by variables and rows by case names. In the simplest case such case names, also called cases, are numbers. Variables are categorized as either independent, also called attributes, or dependent, called decisions. Usually only one decision is given in a decision table. The set of all cases that correspond to the same decision value is called a concept (or a class). In most articles on rough set theory it is assumed that for all variables and all cases the corresponding values are speciﬁed. For such tables the indiscernibility relation, one of the most fundamental ideas of rough set theory, describes cases that cannot be distinguished from other cases. However, in many real-life applications, data sets have missing attribute values, or, in other words, the corresponding decision tables are incompletely spec-


iﬁed. For simplicity, incompletely speciﬁed decision tables will be called incomplete decision tables. In this paper we will assume that there are two reasons for decision tables to be incomplete. The ﬁrst reason is that an attribute value, for a speciﬁc case, is lost. For example, originally the attribute value was known, however, due to a variety of reasons, currently the value is not recorded. Maybe it was recorded but is erased. The second possibility is that an attribute value was not relevant – the case was decided to be a member of some concept, i.e., was classiﬁed, or diagnosed, in spite of the fact that some attribute values were not known. For example, it was feasible to diagnose a patient regardless of the fact that some test results were not taken (here attributes correspond to tests, so attribute values are test results). Since such missing attribute values do not matter for the ﬁnal outcome, we will call them “do not care” conditions. The main objective of this paper is to study incomplete decision tables, i.e., incomplete data sets, or, yet in diﬀerent words, data sets with missing attribute values. We will assume that in the same decision table some attribute values may be lost and some may be “do not care” conditions. The ﬁrst paper dealing with such decision tables was [6]. For such incomplete decision tables there are two special cases: in the ﬁrst case, all missing attribute values are lost, in the second case, all missing attribute values are “do not care” conditions. Incomplete decision tables in which all attribute values are lost, from the viewpoint of rough set theory, were studied for the ﬁrst time in [8], where two algorithms for rule induction, modiﬁed to handle lost attribute values, were presented. This approach was studied later in [13–15], where the indiscernibility relation was generalized to describe such incomplete decision tables. On the other hand, incomplete decision tables in which all missing attribute values are “do not care” conditions, from the view point of rough set theory, were studied for the ﬁrst time in [3], where a method for rule induction was introduced in which each missing attribute value was replaced by all values from the domain of the attribute. Originally such values were replaced by all values from the entire domain of the attribute, later, by attribute values restricted to the same concept to which a case with a missing attribute value belongs. Such incomplete decision tables, with all missing attribute values being “do not care conditions”, were extensively studied in [9], [10], including extending the idea of the indiscernibility relation to describe such incomplete decision tables. In general, incomplete decision tables are described by characteristic relations, in a similar way as complete decision tables are described by indiscernibility relations [6]. In rough set theory, one of the basic notions is the idea of lower and upper approximations. For complete decision tables, once the indiscernibility relation is ﬁxed and the concept (a set of cases) is given, the lower and upper approximations are unique. For incomplete decision tables, for a given characteristic relation and concept, there are three diﬀerent possibilities to deﬁne lower and upper approximations,


called singleton, subset, and concept approximations [6]. Singleton lower and upper approximations were studied in [9], [10], [13–15]. Note that similar three deﬁnitions of lower and upper approximations, though not for incomplete decision tables, were studied in [16–18]. In this paper we further discuss applications to data mining of all three kinds of approximations: singleton, subset and concept. As it was observed in [6], singleton lower and upper approximations are not applicable in data mining. The next topic of this paper is demonstrating how certain and possible rules may be computed from incomplete decision tables. An extension of the well-known LEM2 algorithm [1], [4], MLEM2, was introduced in [5]. Originally, MLEM2 induced certain rules from incomplete decision tables with missing attribute values interpreted as lost and with numerical attributes. Using the idea of lower and upper approximations for incomplete decision tables, MLEM2 was further extended to induce both certain and possible rules from a decision table with some missing attribute values being lost and some missing attribute values being “do not care” conditions, while some attributes may be numerical.

2 Blocks of Attribute-Value Pairs and Characteristic Relations

Let us reiterate that our basic assumption is that the input data sets are presented in the form of a decision table. An example of a decision table is shown in Table 1.

Table 1. A complete decision table

        Attributes                          Decision
Case    Temperature    Headache    Nausea   Flu
 1      high           yes         no       yes
 2      very high      yes         yes      yes
 3      high           no          no       no
 4      high           yes         yes      yes
 5      high           yes         yes      no
 6      normal         yes         no       no
 7      normal         no          yes      no
 8      normal         yes         no       yes

Rows of the decision table represent cases, while columns are labeled by variables. The set of all cases will be denoted by U . In Table 1, U = {1, 2, ..., 8}. Independent variables are called attributes and a dependent variable is called a decision and is denoted by d. The set of all attributes will be denoted by A. In Table 1, A = {Temperature, Headache, Nausea}. Any decision table deﬁnes a function ρ that maps the direct product of U and A into the set of all values. For example, in Table 1, ρ(1, T emperature) = high. Function ρ describing Table 1 is completely speciﬁed (total). A decision table with completely speciﬁed function ρ will be called completely specified, or, for the sake of simplicity, complete.


Rough set theory [11], [12] is based on the idea of an indiscernibility relation, deﬁned for complete decision tables. Let B be a nonempty subset of the set A of all attributes. The indiscernibility relation IN D(B) is a relation on U deﬁned for x, y ∈ U as follows (x, y) ∈ IN D(B) if and only if ρ(x, a) = ρ(y, a) f or all a ∈ B. The indiscernibility relation IN D(B) is an equivalence relation. Equivalence classes of IN D(B) are called elementary sets of B and are denoted by [x]B . For example, for Table 1, elementary sets of IN D(A) are {1}, {2}, {3}, {4, 5}, {6, 8}, {7}. The indiscernibility relation IN D(B) may be computed using the idea of blocks of attribute-value pairs. Let a be an attribute, i.e., a ∈ A and let v be a value of a for some case. For complete decision tables if t = (a, v) is an attribute-value pair then a block of t, denoted [t], is a set of all cases from U that for attribute a have value v. For Table 1, [(Temperature, high)] = {1, 3, 4, 5}, [(Temperature, very high)] = {2}, [(Temperature, normal)] = {6, 7, 8}, [(Headache, yes)] = {1, 2, 4, 5, 6, 8}, [(Headache, no)] = {3, 7}, [(Nausea, no)] = {1, 3, 6}, [(Nausea, yes)] = {2, 4, 5, 7}. The indiscernibility relation IN D(B) is known when all elementary blocks of IND(B) are known. Such elementary blocks of B are intersections of the corresponding attribute-value pairs, i.e., for any case x ∈ U , [x]B = ∩{[(a, v)]|a ∈ B, ρ(x, a) = v}. We will illustrate the idea how to compute elementary sets of B for Table 1 and B = A. [1]A = [(T emperature, high)] ∩ [(Headache, yes)] ∩ [(N ausea, no)] = {1}, [2]A = [(T emperature, very high)] ∩ [(Headache, yes)] ∩ [(N ausea, yes)] = {2}, [3]A = [(T emperature, high)] ∩ [(Headache, no)] ∩ [(N ausea, no)] = {3}, [4]A = [5]A = [(T emperature, high)] ∩ [(Headache, yes)] ∩ [(N ausea, yes)] = {4, 5}, [6]A = [8]A = [(T emperature, normal)] ∩ [(Headache, yes)] ∩ [(N ausea, no] = {6, 8}, [7]A = [(T emperature, normal)] ∩ [(Headache, no] ∩ [(N ausea, yes)] = {7}. In practice, input data for data mining are frequently aﬀected by missing attribute values. In other words, the corresponding function ρ is incompletely speciﬁed (partial). A decision table with an incompletely speciﬁed function ρ will be called incompletely specified, or incomplete. For the rest of the paper we will assume that all decision values are speciﬁed, i.e., they are not missing. Also, we will assume that all missing attribute values are denoted either by “?” or by “*”, lost values will be denoted by “?”, “do not


Table 2. An incomplete decision table

        Attributes                          Decision
Case    Temperature    Headache    Nausea   Flu
 1      high           ?           no       yes
 2      very high      yes         yes      yes
 3      ?              no          no       no
 4      high           yes         yes      yes
 5      high           ?           yes      no
 6      normal         yes         no       no
 7      normal         no          yes      no
 8      *              yes         *        yes

care” conditions will be denoted by “*”. Additionally, we will assume that for each case at least one attribute value is speciﬁed. Incomplete decision tables are described by characteristic relations instead of indiscernibility relations. Also, elementary blocks are replaced by characteristic sets. An example of an incomplete table is presented in Table 2. For incomplete decision tables the deﬁnition of a block of an attribute-value pair must be modiﬁed. If for an attribute a there exists a case x such that ρ(x, a) =?, i.e., the corresponding value is lost, then the case x is not included in the block [(a, v)] for any value v of attribute a. If for an attribute a there exists a case x such that the corresponding value is a “do not care” condition, i.e., ρ(x, a) = ∗, then the corresponding case x should be included in blocks [(a, v)] for all values v of attribute a. This modiﬁcation of the deﬁnition of the block of attribute-value pair is consistent with the interpretation of missing attribute values, lost and “do not care” condition. Thus, for Table 2 [(Temperature, high)] = {1, 4, 5, 8}, [(Temperature, very high)] = {2, 8}, [(Temperature, normal)] = {6, 7, 8}, [(Headache, yes)] = {2, 4, 6, 8}, [(Headache, no)] = {3, 7}, [(Nausea, no)] = {1, 3, 6, 8}, [(Nausea, yes)] = {2, 4, 5, 7, 8}. The characteristic set KB (x) is the intersection of blocks of attribute-value pairs (a, v) for all attributes a from B for which ρ(x, a) is speciﬁed and ρ(x, a) = v. For Table 2 and B = A, KA (1) = {1, 4, 5, 8} ∩ {1, 3, 6, 8} = {1, 8}, KA (2) = {2, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {2, 8}, KA (3) = {3, 7} ∩ {1, 3, 6, 8} = {3}, KA (4) = {1, 4, 5, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {4, 8}, KA (5) = {1, 4, 5, 8} ∩ {2, 4, 5, 7, 8} = {4, 5, 8}, KA (6) = {6, 7, 8} ∩ {2, 4, 6, 8} ∩ {1, 3, 6, 8} = {6, 8}, KA (7) = {6, 7, 8} ∩ {3, 7} ∩ {2, 4, 5, 7, 8} = {7}, and KA (8) = {2, 4, 6, 8}.
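The modified blocks and the characteristic sets K_A(x) above can be computed mechanically from Table 2. The following sketch is illustrative only (the function names are ours); it implements the two rules for missing values — a lost value "?" excludes the case from every block of that attribute, while a "do not care" value "*" includes it in all of them — and reproduces the sets listed above.

```python
# Sketch: blocks of attribute-value pairs and characteristic sets K_A(x) for Table 2.

TABLE2 = {  # case -> (Temperature, Headache, Nausea); the decision Flu is not needed here
    1: ("high", "?", "no"),      2: ("very high", "yes", "yes"),
    3: ("?", "no", "no"),        4: ("high", "yes", "yes"),
    5: ("high", "?", "yes"),     6: ("normal", "yes", "no"),
    7: ("normal", "no", "yes"),  8: ("*", "yes", "*"),
}
ATTRS = ("Temperature", "Headache", "Nausea")

def blocks(table):
    """[(a, v)] for every specified value v of every attribute a."""
    values = {a: {row[i] for row in table.values() if row[i] not in ("?", "*")}
              for i, a in enumerate(ATTRS)}
    blk = {(a, v): set() for a in ATTRS for v in values[a]}
    for case, row in table.items():
        for i, a in enumerate(ATTRS):
            if row[i] == "?":
                continue                       # lost value: the case belongs to no block of a
            targets = values[a] if row[i] == "*" else {row[i]}
            for v in targets:                  # "do not care": the case belongs to all blocks of a
                blk[(a, v)].add(case)
    return blk

def characteristic_set(case, table, blk):
    """Intersection of the blocks [(a, v)] over attributes a with a specified value v for the case."""
    k = set(table)
    for i, a in enumerate(ATTRS):
        v = table[case][i]
        if v not in ("?", "*"):
            k &= blk[(a, v)]
    return k

blk = blocks(TABLE2)
print(sorted(blk[("Temperature", "high")]))        # [1, 4, 5, 8]
print({x: sorted(characteristic_set(x, TABLE2, blk)) for x in TABLE2})
# K_A(1) = [1, 8], K_A(2) = [2, 8], K_A(3) = [3], K_A(4) = [4, 8], K_A(5) = [4, 5, 8],
# K_A(6) = [6, 8], K_A(7) = [7], K_A(8) = [2, 4, 6, 8]
```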


Characteristic set KB (x) may be interpreted as the smallest set of cases that are indistinguishable from x using all attributes from B, under a given interpretation of missing attribute values. Thus, KA (x) is the set of all cases that cannot be distinguished from x using all attributes. The characteristic relation R(B) is a relation on U deﬁned for x, y ∈ U as follows: (x, y) ∈ R(B) if and only if y ∈ KB (x). The characteristic relation R(B) is reﬂexive but – in general – does not need to be symmetric or transitive. Also, the characteristic relation R(B) is known if we know the characteristic sets KB (x) for all x ∈ U . In our example, R(A) = {(1, 1), (1, 8), (2, 2), (2, 8), (3, 3), (4, 4), (4, 8), (5, 4), (5, 5), (5, 8), (6, 6), (6, 8), (7, 7), (8, 2), (8, 4), (8, 6), (8, 8)}. The most convenient way is to deﬁne the characteristic relation through the characteristic sets. Nevertheless, the characteristic relation R(B) may be deﬁned independently of characteristic sets in the following way:

(x, y) ∈ R(B) if and only if ρ(x, a) = ρ(y, a) or ρ(x, a) = ∗ or ρ(y, a) = ∗ for all a ∈ B such that ρ(x, a) ≠ ?.

For decision tables in which all missing attribute values are lost, a special characteristic relation was deﬁned by J. Stefanowski and A. Tsoukias in [14], see also, e.g., [13], [15]. In this paper that characteristic relation will be denoted by LV (B), where B is a nonempty subset of the set A of all attributes. For x, y ∈ U the characteristic relation LV (B) is deﬁned as follows:

(x, y) ∈ LV (B) if and only if ρ(x, a) = ρ(y, a) for all a ∈ B such that ρ(x, a) ≠ ?.

For any decision table in which all missing attribute values are lost, the characteristic relation LV (B) is reﬂexive, but – in general – does not need to be symmetric or transitive. For decision tables where all missing attribute values are “do not care” conditions, a special characteristic relation, in this paper denoted by DCC(B), was deﬁned by M. Kryszkiewicz in [9], see also, e.g., [10]. For x, y ∈ U , the characteristic relation DCC(B) is deﬁned as follows:

(x, y) ∈ DCC(B) if and only if ρ(x, a) = ρ(y, a) or ρ(x, a) = ∗ or ρ(y, a) = ∗ for all a ∈ B.

Relation DCC(B) is reﬂexive and symmetric but – in general – not transitive. Obviously, characteristic relations LV (B) and DCC(B) are special cases of the characteristic relation R(B). For a completely speciﬁed decision table, the characteristic relation R(B) is reduced to IND(B).
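Given the characteristic sets, materializing the characteristic relation is immediate, since R(B) collects exactly the pairs (x, y) with y ∈ K_B(x). A short illustration, reusing the sets K_A(x) computed above:

```python
# Sketch: the characteristic relation R(A) as the set of pairs (x, y) with y in K_A(x).
K = {1: {1, 8}, 2: {2, 8}, 3: {3}, 4: {4, 8}, 5: {4, 5, 8},
     6: {6, 8}, 7: {7}, 8: {2, 4, 6, 8}}
R = {(x, y) for x, members in K.items() for y in members}
print(sorted(R))
# [(1, 1), (1, 8), (2, 2), (2, 8), (3, 3), (4, 4), (4, 8), (5, 4), (5, 5), (5, 8),
#  (6, 6), (6, 8), (7, 7), (8, 2), (8, 4), (8, 6), (8, 8)]
```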


3 Lower and Upper Approximations

For completely speciﬁed decision tables lower and upper approximations are deﬁned on the basis of the indiscernibility relation. Any ﬁnite union of elementary sets, associated with B, will be called a B-definable set. Let X be any subset of the set U of all cases. The set X is called a concept and is usually deﬁned as the set of all cases deﬁned by a speciﬁc value of the decision. In general, X is not a B-deﬁnable set. However, set X may be approximated by two B-deﬁnable sets; the ﬁrst one is called a B-lower approximation of X, denoted by BX and deﬁned as follows:

{x ∈ U | [x]B ⊆ X}.

The second set is called a B-upper approximation of X, denoted by BX and deﬁned as follows:

{x ∈ U | [x]B ∩ X ≠ ∅}.

The above way of computing lower and upper approximations, by constructing these approximations from singletons x, will be called the first method. The B-lower approximation of X is the greatest B-deﬁnable set contained in X. The B-upper approximation of X is the smallest B-deﬁnable set containing X. As was observed in [12], for complete decision tables we may use a second method to deﬁne the B-lower approximation of X, by the following formula:

∪{[x]B | x ∈ U, [x]B ⊆ X},

and the B-upper approximation of X may be deﬁned, using the second method, by

∪{[x]B | x ∈ U, [x]B ∩ X ≠ ∅}.

For incompletely speciﬁed decision tables lower and upper approximations may be deﬁned in a few diﬀerent ways. First, the deﬁnition of deﬁnability should be modiﬁed. Any ﬁnite union of characteristic sets of B is called a B-definable set. In this paper we suggest three diﬀerent deﬁnitions of lower and upper approximations. Again, let X be a concept, let B be a subset of the set A of all attributes, and let R(B) be the characteristic relation of the incomplete decision table with characteristic sets KB (x), where x ∈ U . Our ﬁrst deﬁnition uses a similar idea as in the previous articles on incompletely speciﬁed decision tables [9], [10], [13], [14], [15], i.e., lower and upper approximations are sets of singletons from the universe U satisfying some properties. Thus, lower and upper approximations are deﬁned by analogy with the above ﬁrst method, by constructing both sets from singletons. We will call these approximations singleton. A singleton B-lower approximation of X is deﬁned as follows:

BX = {x ∈ U | KB (x) ⊆ X}.

A singleton B-upper approximation of X is

BX = {x ∈ U | KB (x) ∩ X ≠ ∅}.


In our example of the decision table presented in Table 2 let us say that B = A. Then the singleton A-lower and A-upper approximations of the two concepts: {1, 2, 4, 8} and {3, 5, 6, 7} are: A{1, 2, 4, 8} = {1, 2, 4}, A{3, 5, 6, 7} = {3, 7}, A{1, 2, 4, 8} = {1, 2, 4, 5, 6, 8}, A{3, 5, 6, 7} = {3, 5, 6, 7, 8}. The second method of deﬁning lower and upper approximations for complete decision tables uses another idea: lower and upper approximations are unions of elementary sets, subsets of U . Therefore we may deﬁne lower and upper approximations for incomplete decision tables by analogy with the second method, using characteristic sets instead of elementary sets. There are two ways to do this. Using the ﬁrst way, a subset B-lower approximation of X is deﬁned as follows: BX = ∪{KB (x)|x ∈ U, KB (x) ⊆ X}. A subset B-upper approximation of X is BX = ∪{KB (x)|x ∈ U, KB (x) ∩ X = ∅}. Since any characteristic relation R(B) is reﬂexive, for any concept X, singleton B-lower and B-upper approximations of X are subsets of the subset B-lower and B-upper approximations of X, respectively. For the same decision table, presented in Table 2, the subset A-lower and A-upper approximations are A{1, 2, 4, 8} = {1, 2, 4, 8}, A{3, 5, 6, 7} = {3, 7}, A{1, 2, 4, 8} = {1, 2, 4, 5, 6, 8}, A{3, 5, 6, 7} = {2, 3, 4, 5, 6, 7, 8}. The second possibility is to modify the subset deﬁnition of lower and upper approximation by replacing the universe U from the subset deﬁnition by a concept X. A concept B-lower approximation of the concept X is deﬁned as follows: BX = ∪{KB (x)|x ∈ X, KB (x) ⊆ X}. Obviously, the subset B-lower approximation of X is the same set as the concept B-lower approximation of X. A concept B-upper approximation of the concept X is deﬁned as follows: BX = ∪{KB (x)|x ∈ X, KB (x) ∩ X = ∅} = ∪{KB (x)|x ∈ X}. The concept B-upper approximation of X is a subset of the subset B-upper approximation of X. Besides, the concept B-upper approximations are truly the


smallest B-deﬁnable sets containing X. For the decision table presented in Table 2, the concept A-lower and A-upper approximations are A{1, 2, 4, 8} = {1, 2, 4, 8}, A{3, 5, 6, 7} = {3, 7}, A{1, 2, 4, 8} = {1, 2, 4, 6, 8}, A{3, 5, 6, 7} = {3, 4, 5, 6, 7, 8}. Note that for complete decision tables, all three deﬁnitions of lower approximations, singleton, subset and concept, coalesce to the same deﬁnition. Also, for complete decision tables, all three deﬁnitions of upper approximations coalesce to the same deﬁnition. This is not true for incomplete decision tables, as our example shows.
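All six approximations of this example follow directly from the characteristic sets. The sketch below is illustrative only; it hard-codes the sets K_A(x) of Table 2 and evaluates the singleton, subset, and concept definitions side by side.

```python
# Sketch: singleton, subset, and concept lower/upper approximations from characteristic sets.

K = {1: {1, 8}, 2: {2, 8}, 3: {3}, 4: {4, 8}, 5: {4, 5, 8},
     6: {6, 8}, 7: {7}, 8: {2, 4, 6, 8}}   # K_A(x) for Table 2
U = set(K)

def singleton(X):
    return ({x for x in U if K[x] <= X},
            {x for x in U if K[x] & X})

def subset(X):
    return (set().union(*(K[x] for x in U if K[x] <= X)),
            set().union(*(K[x] for x in U if K[x] & X)))

def concept(X):
    return (set().union(*(K[x] for x in X if K[x] <= X)),
            set().union(*(K[x] for x in X)))   # K_A(x) always meets X, since x ∈ K_A(x)

for X in ({1, 2, 4, 8}, {3, 5, 6, 7}):
    print("X =", X)
    print("  singleton:", singleton(X))   # e.g. ({1, 2, 4}, {1, 2, 4, 5, 6, 8}) for X = {1, 2, 4, 8}
    print("  subset:   ", subset(X))      # e.g. ({1, 2, 4, 8}, {1, 2, 4, 5, 6, 8})
    print("  concept:  ", concept(X))     # e.g. ({1, 2, 4, 8}, {1, 2, 4, 6, 8})
```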

4 Rule Induction

In the ﬁrst step of processing the input data ﬁle, the data mining system LERS (Learning from Examples based on Rough Sets) checks if the input data ﬁle is consistent (i.e., if the ﬁle does not contain conﬂicting examples). Table 1 is inconsistent because the fourth and the ﬁfth examples are conﬂicting. For these examples, the values of all three attributes are the same (high, yes, yes), but the decision values are diﬀerent, yes for the fourth example and no for the ﬁfth example. If the input data ﬁle is inconsistent, LERS computes lower and upper approximations of all concepts. Rules induced from the lower approximation of the concept certainly describe the concept, so they are called certain. On the other hand, rules induced from the upper approximation of the concept describe the concept only possibly (or plausibly), so they are called possible [2]. The same idea of blocks of attribute-value pairs is used in the rule induction algorithm LEM2 (Learning from Examples Module, version 2), a component of LERS. LEM2 learns a discriminant description, i.e., the smallest set of minimal rules describing the concept. The option LEM2 of LERS is most frequently used since – in most cases – it gives the best results. LEM2 explores the search space of attribute-value pairs. Its input data ﬁle is a lower or upper approximation of a concept, so its input data ﬁle is always consistent. In general, LEM2 computes a local covering and then converts it into a rule set. We will quote a few deﬁnitions to describe the LEM2 algorithm. Let B be a nonempty lower or upper approximation of a concept represented by a decision-value pair (d, w). Set B depends on a set T of attribute-value pairs t = (a, v) if and only if

∅ ≠ [T] = ⋂_{t∈T} [t] ⊆ B.

Set T is a minimal complex of B if and only if B depends on T and no proper subset T′ of T exists such that B depends on T′. Let 𝒯 be a nonempty collection


of nonempty sets of attribute-value pairs. Then 𝒯 is a local covering of B if and only if the following conditions are satisﬁed: (1) each member T of 𝒯 is a minimal complex of B, (2) ⋃_{T∈𝒯} [T] = B, and (3) 𝒯 is minimal, i.e., 𝒯 has the smallest possible number of members. The procedure LEM2 is presented below.

Procedure LEM2
(input: a set B, output: a single local covering 𝒯 of set B);
begin
  G := B;
  𝒯 := ∅;
  while G ≠ ∅
  begin
    T := ∅;
    T(G) := {t | [t] ∩ G ≠ ∅};
    while T = ∅ or [T] ⊄ B
    begin
      select a pair t ∈ T(G) such that |[t] ∩ G| is maximum;
      if a tie occurs, select a pair t ∈ T(G) with the smallest cardinality of [t];
      if another tie occurs, select the first pair;
      T := T ∪ {t};
      G := [t] ∩ G;
      T(G) := {t | [t] ∩ G ≠ ∅};
      T(G) := T(G) − T;
    end {while}
    for each t ∈ T do
      if [T − {t}] ⊆ B then T := T − {t};
    𝒯 := 𝒯 ∪ {T};
    G := B − ⋃_{T∈𝒯} [T];
  end {while};
  for each T ∈ 𝒯 do
    if ⋃_{S∈𝒯−{T}} [S] = B then 𝒯 := 𝒯 − {T};
end {procedure}.

MLEM2 is a modiﬁed version of the algorithm LEM2. The original algorithm LEM2 needs discretization, as preprocessing, to deal with numerical attributes. The MLEM2 algorithm can induce rules from incomplete decision tables with numerical attributes. Its previous version induced certain rules from incomplete decision tables with missing attribute values interpreted as lost and with numerical attributes. Recently, MLEM2 was further extended to induce both certain and possible rules from a decision table with some missing attribute values being lost and some missing attribute values being “do not care” conditions, while


some attributes may be numerical. Rule induction from decision tables with numerical attributes will be described in the next section. In this section we will describe a new way in which MLEM2 handles incomplete decision tables. Since all characteristic sets KB (x), where x ∈ U , are intersections of attributevalue pair blocks for attributes from B, and for subset and concept deﬁnitions of B–lower and B–upper approximations are unions of sets of the type KB (x), it is most natural to use an algorithm based on blocks of attribute-value pairs, such as LEM2 [1], [4] for rule induction. First of all let us examine rule induction usefulness for the three diﬀerent deﬁnition of lower and upper approximations: singleton, subset and concept. The ﬁrst observation is that singleton lower and upper approximations should not be used for rule induction. Let us explain that on the basis of our example of the decision table from Table 2. The singleton A-lower approximation of the concept {1, 2, 4, 8} is the set {1, 2, 4}. Our expectation is that we should be able to describe the set {1, 2, 4} using given interpretation of missing attribute values, while in the rules we are allowed to use conditions being attribute-value pairs. However, this is impossible, because, as follows from the list of all sets KA (x) there is no way to describe case 1 not describing at the same time case 8, but {1, 8} ⊆ {1, 2, 4}. Similarly, there is no way to describe the singleton A-upper approximation of the concept {3, 5, 6, 7}, i.e., the set {3, 5, 6, 7, 8}, since there is no way to describe case 5 not describing, at the same time, cases 4 and 8, however, {4, 5, 8} ⊆ {3, 5, 6, 7, 8}. On the other hand, both subset and concept A-lower and A-upper approximations are unions of the characteristic sets of the type KA (x), therefore, it is always possible to induce certain rules from subset and concept A-lower approximations and possible rules from concept and subset A-upper approximations. Subset A-lower approximations are identical with concept A-lower approximations so it does not matter which approximations we are going to use. Since concept A-upper approximations are subsets of the corresponding subset A-upper approximations, it is more feasible to use concept A-upper approximations, since they are closer to the concept X, and rules will more precisely describe the concept X. Moreover, it better ﬁts into the idea that the upper approximation should be the smallest set containing the concept. Therefore, we will use for rule induction only concept lower and upper approximations. In order to induce certain rules for our example of the decision table presented in Table 2, we have to compute concept A-lower approximations for both concepts, {1, 2, 4, 8} and {3, 5, 6, 7}. The concept lower approximation of {1, 2, 4, 8} is the same set {1, 2, 4, 8}, so we are going to pass to the procedure LEM2 as the set B. Initially G = B. The set T (G) is the following set {(Temperature, high), (Temperature, very high), (Temperature, normal), (Headache, yes), (Nausea, no), Nausea, yes)}. For three attribute-value pairs from T (G), namely, (Temperature, high), (Headache, yes) and (Nausea, yes), the following value [(attribute, value)] ∩ G


is maximum. The second criterion, the smallest cardinality of [(attribute, value)], indicates (Temperature, high) and (Headache, yes) (in both cases that cardinality is equal to four). The last criterion, “first pair”, selects (Temperature, high). Thus T = {(Temperature, high)}, G = {1, 4, 8}, and the new T(G) is equal to {(Temperature, very high), (Temperature, normal), (Headache, yes), (Nausea, no), (Nausea, yes)}. Since [(Temperature, high)] ⊄ B, we have to perform the next iteration of the inner WHILE loop. This time (Headache, yes) will be selected, the new T = {(Temperature, high), (Headache, yes)}, and the new G is equal to {4, 8}. Since [T] = [(Temperature, high)] ∩ [(Headache, yes)] = {4, 8} ⊆ B, the first minimal complex is computed. It is not difficult to see that we cannot drop either of these two attribute-value pairs, so 𝒯 = {T}, and the new G is equal to B − {4, 8} = {1, 2}. During the second iteration of the outer WHILE loop, the next minimal complex T is identified as {(Temperature, very high)}, so 𝒯 = {{(Temperature, high), (Headache, yes)}, {(Temperature, very high)}} and G = {1}. We need one additional iteration of the outer WHILE loop; the next minimal complex T is computed as {(Temperature, high), (Nausea, no)}, and 𝒯 = {{(Temperature, high), (Headache, yes)}, {(Temperature, very high)}, {(Temperature, high), (Nausea, no)}} becomes the first local covering, since we cannot drop any of the minimal complexes from 𝒯. The set of certain rules, corresponding to 𝒯 and describing the concept {1, 2, 4, 8}, is
(Temperature, high) & (Headache, yes) -> (Flu, yes),
(Temperature, very high) -> (Flu, yes),
(Temperature, high) & (Nausea, no) -> (Flu, yes).
The remaining rule sets, the certain rules for the second concept {3, 5, 6, 7} and both sets of possible rules, are computed in a similar manner. Eventually, the rules in the LERS format (every rule is equipped with three numbers: the total number of attribute-value pairs on the left-hand side of the rule, the total number of examples correctly classified by the rule during training, and the total number of training cases matching the left-hand side of the rule) are:

certain rule set:
2, 2, 2
(Temperature, high) & (Headache, yes) -> (Flu, yes)
1, 2, 2
(Temperature, very high) -> (Flu, yes)
2, 2, 2
(Temperature, high) & (Nausea, no) -> (Flu, yes)
1, 2, 2
(Headache, no) -> (Flu, no)

and possible rule set:


1, 3, 4
(Headache, yes) -> (Flu, yes)
2, 2, 2
(Temperature, high) & (Nausea, no) -> (Flu, yes)
2, 1, 3
(Nausea, yes) & (Temperature, high) -> (Flu, no)
1, 2, 2
(Headache, no) -> (Flu, no)
1, 2, 3
(Temperature, normal) -> (Flu, no)
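For readers who prefer executable code to pseudocode, the following is a compact Python rendering of the LEM2 loop quoted above, applied to the concept lower approximation {1, 2, 4, 8} with the modified blocks of Table 2. It is a sketch, not the LERS implementation: blocks are plain sets of case numbers, only symbolic attributes are handled, and the "first pair" tie-break is emulated by the listing order of the blocks.

```python
# Sketch: the LEM2 procedure over attribute-value pair blocks (symbolic attributes only).

def block_of(T, blocks):
    """[T]: intersection of the blocks of the a-v pairs in the (nonempty) complex T."""
    return set.intersection(*(blocks[t] for t in T))

def lem2(B, blocks):
    """B: approximation of a concept (set of cases); blocks: dict (a, v) -> set of cases."""
    covering = []                          # the local covering
    G = set(B)
    while G:
        T = []                             # current complex under construction
        candidates = [t for t in blocks if blocks[t] & G]
        while not T or not block_of(T, blocks) <= B:
            # max |[t] ∩ G|; ties: smallest |[t]|; further ties: first pair in listing order
            t = max(candidates, key=lambda s: (len(blocks[s] & G), -len(blocks[s])))
            T.append(t)
            G = blocks[t] & G
            candidates = [s for s in blocks if blocks[s] & G and s not in T]
        for t in list(T):                  # drop conditions that are not needed
            if len(T) > 1 and block_of([s for s in T if s != t], blocks) <= B:
                T.remove(t)
        covering.append(T)
        G = set(B) - set().union(*(block_of(C, blocks) for C in covering))
    for C in list(covering):               # drop minimal complexes that are not needed
        rest = [D for D in covering if D is not C]
        if rest and set().union(*(block_of(D, blocks) for D in rest)) == set(B):
            covering.remove(C)
    return covering

# Modified blocks for the incomplete Table 2 ("?" and "*" already resolved as in Section 2):
BLOCKS = {
    ("Temperature", "high"): {1, 4, 5, 8}, ("Temperature", "very high"): {2, 8},
    ("Temperature", "normal"): {6, 7, 8},  ("Headache", "yes"): {2, 4, 6, 8},
    ("Headache", "no"): {3, 7},            ("Nausea", "no"): {1, 3, 6, 8},
    ("Nausea", "yes"): {2, 4, 5, 7, 8},
}
print(lem2({1, 2, 4, 8}, BLOCKS))   # the concept A-lower approximation of {1, 2, 4, 8}
# [[('Temperature', 'high'), ('Headache', 'yes')], [('Temperature', 'very high')],
#  [('Temperature', 'high'), ('Nausea', 'no')]]
```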

5 Other Approaches to Missing Attribute Values

So far we have used two approaches to missing attribute values, in the ﬁrst one a missing attribute value was interpreted as lost, in the second as a “do not care” condition. There are many other possible approaches to missing attribute values, for some discussion on this topic see [7]. Our belief is that for any possible interpretation of a missing attribute vale, blocks of attribute-value pairs may be re-deﬁned, a new characteristic relation may be computed, corresponding lower and upper approximations computed as well, and eventually, corresponding certain and possible rules induced. As an example we may consider another interpretation for “do not care” conditions. So far, in computing the block for an attribute-value pair (a, v) we added all cases with value “*” to such block [(a, v)]. Following [7], we may consider another interpretation of “do not care conditions”: If for an attribute a there exists a case x such that the corresponding value is a “do not care” condition, i.e., ρ(x, a) = ∗, then the corresponding case x should be included in blocks [(a, v)] for all values v of attribute a with the same decision value as for x (i.e., we will add x only to members of the same concept to which x belongs). With this new interpretation of “*”s, blocks of attribute-value pairs for Table 2 are: [(Temperature, high)] = {1, 4, 5, 8}, [(Temperature, very high)] = {2, 8}, [(Temperature, normal)] = {6, 7}, [(Headache, yes)] = {2, 4, 6, 8}, [(Headache, no)] = {3, 7}, [(Nausea, no)] = {1, 3, 6}, [(Nausea, yes)] = {2, 4, 5, 7, 8}. The characteristic set KB (x) for Table 2, a new interpretation of “*”s, and B = A, are: KA (1) = {1, 4, 5, 8} ∩ {1, 3, 6} = {1, 8}, KA (2) = {2, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {2, 8}, KA (3) = {3, 7} ∩ {1, 3, 6} = {3}, KA (4) = {1, 4, 5, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {4, 8},


KA (5) = {1, 4, 5, 8} ∩ {2, 4, 5, 7, 8} = {4, 5, 8}, KA (6) = {6, 7} ∩ {2, 4, 6, 8} ∩ {1, 3, 6} = {6}, KA (7) = {6, 7} ∩ {3, 7} ∩ {2, 4, 5, 7, 8} = {7}, and KA (8) = {2, 4, 6, 8}. The characteristic relation R(B) is {(1, 1), (1, 8), (2, 2), (2, 8), (3, 3), (4, 4), (4, 8), (5, 4), (5, 5), (5, 8), (6, 6), (7, 7), (8, 2), (8, 4), (8, 6), (8, 8)}. Then we may deﬁne lower and upper approximations and induce rules using a similar technique as in the previous section.

6 Incomplete Decision Tables with Numerical Attributes

An example of an incomplete decision table with a numerical attribute is presented in Table 3.

Table 3. An incomplete decision table with a numerical attribute

        Attributes                          Decision
Case    Temperature    Headache    Nausea   Flu
 1      98             ?           no       yes
 2      101            yes         yes      yes
 3      ?              no          no       no
 4      99             yes         yes      yes
 5      99             ?           yes      no
 6      96             yes         no       no
 7      96             no          yes      no
 8      *              yes         *        yes

Numerical attributes should be treated somewhat differently from symbolic attributes. First, for computing characteristic sets, numerical attributes should be considered as symbolic. For example, for Table 3 the blocks of the numerical attribute Temperature are:

[(Temperature, 96)] = {6, 7, 8},
[(Temperature, 98)] = {1, 8},
[(Temperature, 99)] = {4, 5, 8},
[(Temperature, 101)] = {2, 8}.

Remaining blocks of attribute-value pairs, for attributes Headache and Nausea, are the same as for Table 2. The characteristic sets KB (x) for Table 3 and B = A are: KA (1) = {1, 8} ∩ {1, 3, 6, 8} = {1, 8}, KA (2) = {2, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {2, 8}, KA (3) = {3, 7} ∩ {1, 3, 6, 8} = {3}, KA (4) = {4, 5, 8} ∩ {2, 4, 6, 8} ∩ {2, 4, 5, 7, 8} = {4, 8}, KA (5) = {4, 5, 8} ∩ {2, 4, 5, 7, 8} = {4, 5, 8},


KA (6) = {6, 7, 8} ∩ {2, 4, 6, 8} ∩ {1, 3, 6, 8} = {6, 8}, KA (7) = {6, 7, 8} ∩ {3, 7} ∩ {2, 4, 5, 7, 8} = {7}, and KA (8) = {2, 4, 6, 8}. The characteristic relation R(B) is {(1, 1), (1, 8), (2, 2), (2, 8), (3, 3), (4, 4), (4, 8), (5, 4), (5, 5), (5, 8), (6, 6), (6, 8), (7, 7), (8, 2), (8, 4), (8, 6), (8, 8)}. For the decision table presented in Table 3, the concept A-lower approximations are A{1, 2, 4, 8} = {1, 2, 4, 8} and A{3, 5, 6, 7} = {3, 7}, and the concept A-upper approximations are A{1, 2, 4, 8} = {1, 2, 4, 6, 8} and A{3, 5, 6, 7} = {3, 4, 5, 6, 7, 8}.

For inducing rules, blocks of attribute-value pairs are deﬁned diﬀerently than in computing characteristic sets. MLEM2 has the ability to recognize integer and real numbers as values of attributes, and it labels such attributes as numerical. For numerical attributes MLEM2 computes blocks in a diﬀerent way than for symbolic attributes. First, it sorts all values of a numerical attribute, ignoring missing attribute values. Then it computes cutpoints as averages of any two consecutive values of the sorted list. For each cutpoint c MLEM2 creates two blocks: the ﬁrst block contains all cases for which the value of the numerical attribute is smaller than c, and the second block contains the remaining cases, i.e., all cases for which the value of the numerical attribute is larger than c. The search space of MLEM2 is the set of all blocks computed this way, together with the blocks deﬁned by symbolic attributes. Starting from that point, rule induction in MLEM2 is conducted the same way as in LEM2. Note that if in a rule there are two attribute-value pairs with overlapping intervals, a new condition is computed as the intersection of both intervals. Thus, the corresponding blocks for Temperature are:

[(Temperature, 96..97)] = {6, 7, 8},
[(Temperature, 97..101)] = {1, 2, 4, 5, 8},
[(Temperature, 96..98.5)] = {1, 6, 7, 8},
[(Temperature, 98.5..101)] = {2, 4, 5, 8},
[(Temperature, 96..100)] = {1, 4, 5, 6, 7, 8},
[(Temperature, 100..101)] = {2, 8}.
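The cutpoint construction just described is easy to reproduce. The sketch below (illustrative only; the function name is ours) sorts the known Temperature values of Table 3, forms cutpoints as averages of consecutive values, and builds the two blocks per cutpoint; adding the "do not care" case to every block then yields exactly the interval blocks listed above.

```python
# Sketch: MLEM2-style cutpoints and interval blocks for a numerical attribute.

def interval_blocks(values):
    """values: case -> numerical value; '?' and '*' are ignored when forming cutpoints."""
    known = {c: v for c, v in values.items() if v not in ("?", "*")}
    sorted_vals = sorted(set(known.values()))
    lo, hi = sorted_vals[0], sorted_vals[-1]
    cutpoints = [(a + b) / 2 for a, b in zip(sorted_vals, sorted_vals[1:])]
    blocks = {}
    for c in cutpoints:
        blocks[(lo, c)] = {case for case, v in known.items() if v < c}
        blocks[(c, hi)] = {case for case, v in known.items() if v > c}
    return blocks

TEMPERATURE = {1: 98, 2: 101, 3: "?", 4: 99, 5: 99, 6: 96, 7: 96, 8: "*"}
for interval, cases in interval_blocks(TEMPERATURE).items():
    print(interval, sorted(cases))
# (96, 97.0)  [6, 7]          (97.0, 101) [1, 2, 4, 5]
# (96, 98.5)  [1, 6, 7]       (98.5, 101) [2, 4, 5]
# (96, 100.0) [1, 4, 5, 6, 7] (100.0, 101) [2]
# Case 8 ("*") is then added to every block, and case 3 ("?") to none, giving the sets above.
```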

Remaining blocks of attribute-value pairs, for attributes Headache and Nausea, are the same as for Table 2. Using the MLEM2 algorithm, the following rules are induced from the concept approximations:

certain rule set:
2, 3, 3
(Temperature, 98.5..101) & (Headache, yes) -> (Flu, yes)
1, 2, 2
(Temperature, 97..98.5) -> (Flu, yes)
1, 2, 2
(Headache, no) -> (Flu, no)


possible rule set:
1, 3, 4
(Headache, yes) -> (Flu, yes)
2, 2, 3
(Temperature, 96..98.5) & (Nausea, no) -> (Flu, yes)
2, 2, 4
(Temperature, 96..100) & (Nausea, yes) -> (Flu, no)
1, 2, 3
(Temperature, 96..97) -> (Flu, no)
1, 2, 2
(Headache, no) -> (Flu, no)

7 Conclusions

It was shown in the paper that the idea of attribute-value pair blocks is an extremely useful tool. That idea may be used for computing characteristic relations for incomplete decision tables; in turn, characteristic sets are used for determining lower and upper approximations. Furthermore, the same idea of

Fig. 1. Using attribute-value pair blocks for rule induction from incomplete decision tables: computing attribute-value pair blocks → computing characteristic sets → computing characteristic relations → computing lower and upper approximations → computing additional blocks of attribute-value pairs for numerical attributes → inducing certain and possible rules


attribute-value pair blocks may be used for rule induction, for example, using the MLEM2 algorithm. The process is depicted in Figure 1. Note that it is much more convenient to deﬁne the characteristic relations through the two-stage process of determining blocks of attribute-value pairs and then computing characteristic sets than to deﬁne characteristic relations, for every interpretation of missing attribute values, separately. For completely speciﬁed decision tables any characteristic relation is reduced to an indiscernibility relation. Also, it is shown that the most useful way of deﬁning lower and upper approximations for incomplete decision tables is a new idea of concept lower and upper approximations. Two new ways to deﬁne lower and upper approximations for incomplete decision tables, called subset and concept, and the third way, deﬁned previously in a number of papers [9], [10], [13], [14], [15] and called here singleton lower and upper approximations, are all reduced to respective well-known deﬁnitions of lower and upper approximations for complete decision tables.

References 1. Chan, C.C. and Grzymala-Busse, J.W.: On the attribute redundancy and the learning programs ID3, PRISM, and LEM2. Department of Computer Science, University of Kansas, TR-91-14, December 1991, 20 pp. 2. Grzymala-Busse, J.W.: Knowledge acquisition under uncertainty – A rough set approach. Journal of Intelligent & Robotic Systems 1 (1988), 3–16. 3. Grzymala-Busse, J.W.: On the unknown attribute values in learning from examples. Proc. of the ISMIS-91, 6th International Symposium on Methodologies for Intelligent Systems, Charlotte, North Carolina, October 16–19, 1991. Lecture Notes in Artiﬁcial Intelligence, vol. 542, Springer-Verlag, Berlin, Heidelberg, New York (1991) 368–377. 4. Grzymala-Busse, J.W.: LERS – A system for learning from examples based on rough sets. In Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets Theory, ed. by R. Slowinski, Kluwer Academic Publishers, Dordrecht, Boston, London (1992) 3–18. 5. Grzymala-Busse., J.W.: MLEM2: A new algorithm for rule induction from imperfect data. Proceedings of the 9th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, IPMU 2002, July 1–5, Annecy, France, 243–250. 6. Grzymala-Busse, J.W.: Rough set strategies to data with missing attribute values. Workshop Notes, Foundations and New Directions of Data Mining, the 3-rd International Conference on Data Mining, Melbourne, FL, USA, November 19–22, 2003, 56–63. 7. Grzymala-Busse, J.W. and Hu, M.: A comparison of several approaches to missing attribute values in data mining. Proceedings of the Second International Conference on Rough Sets and Current Trends in Computing RSCTC’2000, Banﬀ, Canada, October 16–19, 2000, 340–347. 8. Grzymala-Busse, J.W. and A. Y. Wang A.Y.: Modiﬁed algorithms LEM1 and LEM2 for rule induction from data with missing attribute values. Proc. of the Fifth International Workshop on Rough Sets and Soft Computing (RSSC’97) at the Third Joint Conference on Information Sciences (JCIS’97), Research Triangle Park, NC, March 2–5, 1997, 69–72.


9. Kryszkiewicz, M.: Rough set approach to incomplete information systems. Proceedings of the Second Annual Joint Conference on Information Sciences, Wrightsville Beach, NC, September 28–October 1, 1995, 194–197. 10. Kryszkiewicz, M.: Rules in incomplete information systems. Information Sciences 113 (1999) 271–292. 11. Pawlak, Z.: Rough Sets. International Journal of Computer and Information Sciences 11 (1982) 341–356. 12. Pawlak, Z.: Rough Sets. Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht, Boston, London (1991). 13. Stefanowski, J.: Algorithms of Decision Rule Induction in Data Mining. Poznan University of Technology Press, Poznan, Poland (2001). 14. Stefanowski, J. and Tsoukias, A.: On the extension of rough sets under incomplete information. Proceedings of the 7th International Workshop on New Directions in Rough Sets, Data Mining, and Granular-Soft Computing, RSFDGrC’1999, Ube, Yamaguchi, Japan, November 8–10, 1999, 73–81. 15. Stefanowski, J. and Tsoukias, A.: Incomplete information tables and rough classiﬁcation. Computational Intelligence 17 (2001) 545–566. 16. Yao, Y.Y.: Two views of the theory of rough sets in ﬁnite universes. International J. of Approximate Reasoning 15 (1996) 291–317. 17. Yao, Y.Y.: Relational interpretations of neighborhood operators and rough set approximation operators. Information Sciences 111 (1998) 239–259. 18. Yao, Y.Y.: On the generalizing rough set theory. Proc. of the 9th Int. Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing (RSFDGrC’2003), Chongqing, China, October 19–22, 2003, 44–51.

Generalizations of Rough Sets and Rule Extraction Masahiro Inuiguchi Division of Mathematical Science for Social Systems Department of Systems Innovation Graduate School of Engineering Science, Osaka University 1-3, Machikaneyama, Toyonaka, Osaka 560-8531, Japan [email protected] http://www-inulab.sys.es.osaka-u.ac.jp/~inuiguti/

Abstract. In this paper, two kinds of generalizations of rough sets are proposed based on two diﬀerent interpretations of rough sets: one is an interpretation of rough sets as approximation of a set by means of elementary sets and the other is an interpretation of rough sets as classiﬁcation of objects into three diﬀerent classes, i.e., positive objects, negative objects and boundary objects. Under each interpretation, two diﬀerent definitions of rough sets are given depending on the problem setting. The fundamental properties are shown. The relations between generalized rough sets are given. Moreover, rule extraction underlying each rough set is discussed. It is shown that rules are extracted based on modiﬁed decision matrices. A simple example is given to show the diﬀerences in the extracted rules by underlying rough sets.

1   Introduction

Rough sets [7] are useful in applications to data mining, knowledge discovery, decision making, conflict analysis, and so on. Rough set approaches [7] have been developed under equivalence relations. The equivalence relation implies that all attributes are nominal. Because of this assumption, results that conflict with human intuition have been exemplified when some attributes are ordinal [3]. To overcome such unreasonableness, the dominance-based rough set approach has been proposed by Greco et al. [3]. On the other hand, the generalization of rough sets is an interesting topic not only from a mathematical point of view but also from a practical one. Along this direction, rough sets have been generalized under similarity relations [5, 10], covers [1, 5] and general relations [6, 11-13]. Those results demonstrate a diversity of generalizations. Moreover, the recent introduction of fuzziness into rough set approaches has attracted researchers seeking more realistic and useful tools (see, for example, [2]). Considering applications of rough sets in the generalized setting, the interpretation of rough sets plays an important role, because no mathematical model can be properly applied without its interpretation. In other words, the interpretation should suit the aim of the application. The importance of the



interpretation increases as the problem setting becomes more general, for example a fuzzy setting. This is because the generalization increases the diversity of definitions and treatments that coincide in the original setting. Two major interpretations have traditionally been given to rough sets. One is an interpretation of rough sets as approximation of a set by means of elementary sets. The other is an interpretation of rough sets as classification of objects into three different classes, i.e., positive objects, negative objects and boundary objects. Those interpretations can be found in the terminology of classical rough sets: 'lower approximation' (resp. 'upper approximation') and 'positive region' (resp. 'possible region'). The lower approximation of a set equals the positive region of the set in the classical rough sets, i.e., rough sets under equivalence relations. However, they can differ in a general setting. For example, Inuiguchi and Tanino [5] showed the difference under a similarity relation, and described the difference under a more general setting as well (see [6]). However, the fundamental properties have not yet been investigated in depth. Moreover, from the definitions of rough sets in previous papers, we may obtain further definitions of rough sets under generalized settings.

When generalized rough sets are given, the question arises of how to extract decision rules based on them. The type of extracted decision rule depends on the underlying generalized rough set. For this question, Inuiguchi and Tanino [6] demonstrated the difference in rule extraction based on generalized rough sets.

In this paper, extending a previous paper [6], we discuss the generalized rough sets under the two different interpretations, restricting ourselves to the crisp setting. Such investigations are also necessary and important for proper definitions and applications of fuzzy rough sets. We introduce some new definitions of generalized rough sets. The fundamental properties of those generalized rough sets are newly given. The relations between rough sets under the two different interpretations are discussed. In order to see the differences of those generalized rough sets in applications, we discuss rule extraction based on the generalized rough sets. We demonstrate the difference in the types of decision rules depending on the underlying generalized rough sets. Moreover, we show that decision rules with minimal conditions can be extracted by modifying the decision matrix.

This paper is organized as follows. The classical rough sets are briefly reviewed in the next section. In Section 3, interpreting rough sets as classification of objects, we define rough sets under general relations and investigate their fundamental properties. In Section 4, using the interpretation of rough sets as approximation by means of elementary sets, we define rough sets under a family of sets; the fundamental properties of these generalized rough sets are also investigated. Section 5 is devoted to the relations between those two kinds of rough sets. In Section 6, we discuss decision rule extraction based on the generalized rough sets, and propose extraction methods using modified decision matrices. In Section 7, a few numerical examples are given to demonstrate the differences among the decision rules extracted based on different generalized rough sets. Some concluding remarks are given in Section 8.


2   Classical Rough Sets

2.1   Definitions, Interpretations and Fundamental Properties

Let R be an equivalence relation in the finite universe U, i.e., R ⊆ U × U. In the rough set literature, R is referred to as an indiscernibility relation and a pair (U, R) is called an approximation space. By the equivalence relation R, U can be partitioned into a collection of equivalence classes or elementary sets, U|R = {E1, E2, . . . , Ep}. Define R(x) = {y ∈ U | (y, x) ∈ R}. Then we have x ∈ Ei if and only if Ei = R(x). Note that U|R = {R(x) | x ∈ U}. Let X be a subset of U. Using R(x), a rough set of X is defined by a pair of the following lower and upper approximations:

R_*(X) = {x ∈ X | R(x) ⊆ X} = U − ⋃{R(y) | y ∈ U − X}
       = ⋃{Ei | Ei ⊆ X, i = 1, 2, . . . , p}
       = ⋃{ ⋂_{i∈I}(U − Ei) | ⋂_{i∈I}(U − Ei) ⊆ X, I ⊆ {1, 2, . . . , p} },    (1)

R^*(X) = ⋃{R(x) | x ∈ X} = U − {y ∈ U − X | R(y) ⊆ U − X}
       = ⋂{ ⋃_{i∈I} Ei | ⋃_{i∈I} Ei ⊇ X, I ⊆ {1, 2, . . . , p} }
       = ⋂{U − Ei | U − Ei ⊇ X}.    (2)
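
To make (1) and (2) concrete, here is a small Python sketch (not part of the original paper; the universe, relation and set are hypothetical) that computes the lower and upper approximations directly from the classes R(x):

def eq_class(R, x, U):
    # R(x) = {y in U | (y, x) in R}
    return {y for y in U if (y, x) in R}

def lower(R, X, U):
    # R_*(X) = {x in X | R(x) subset of X}
    return {x for x in X if eq_class(R, x, U) <= X}

def upper(R, X, U):
    # R^*(X) = union of R(x) over x in X
    return {y for x in X for y in eq_class(R, x, U)}

U = {1, 2, 3, 4}
R = {(a, b) for a in U for b in U if (a <= 2) == (b <= 2)}   # classes {1,2} and {3,4}
X = {1, 2, 3}
print(lower(R, X, U))   # {1, 2}
print(upper(R, X, U))   # {1, 2, 3, 4}

The boundary region is then the upper approximation minus the lower one, and the negative region is U minus the upper approximation, which matches the three-class reading discussed next.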

Let us interpret R(x) as a set of objects we intuitively identify as members of X from the fact x ∈ X. Then, from the first expression of R_*(X) in (1), R_*(X) is interpreted as a set of objects which are consistent with the intuition that R(x) ⊆ X if x ∈ X. Under the same interpretation of R(x), R^*(X) is interpreted as a set of objects which can be intuitively inferred as members of X, from the first expression of R^*(X) in (2). In other words, R_*(X) and R^*(X) show positive (consistent) and possible members of X. Moreover, R^*(X) − R_*(X) and U − R^*(X) show ambiguous (boundary) and negative members of X. In this way, a rough set classifies objects of U into three classes, i.e., positive, negative and boundary regions. On the contrary, let us interpret R(x) as a set of objects we intuitively identify as members of U − X from the fact x ∈ U − X. In the same way as in the previous discussion, ⋃{R(y) | y ∈ U − X} and {y ∈ U − X | R(y) ⊆ U − X} show possible and positive members of U − X, respectively. From the second expression of R_*(X) in (1), R_*(X) can be regarded as a set of impossible members of U − X. In other words, R_*(X) shows certain members of X. Similarly, from the second expression of R^*(X) in (2), R^*(X) can be regarded as a set of non-positive members of U − X. Namely, R^*(X) shows conceivable members of X. R^*(X) − R_*(X) and U − R^*(X) show border and inconceivable members of X. In this case, a rough set again classifies objects of U into three classes, i.e., certain, border and inconceivable regions.



Table 1. Fundamental properties of rough sets

(i) R_*(X) ⊆ X ⊆ R^*(X).
(ii) R_*(∅) = R^*(∅) = ∅, R_*(U) = R^*(U) = U.
(iii) R_*(X ∩ Y) = R_*(X) ∩ R_*(Y), R^*(X ∪ Y) = R^*(X) ∪ R^*(Y).
(iv) X ⊆ Y implies R_*(X) ⊆ R_*(Y), X ⊆ Y implies R^*(X) ⊆ R^*(Y).
(v) R_*(X ∪ Y) ⊇ R_*(X) ∪ R_*(Y), R^*(X ∩ Y) ⊆ R^*(X) ∩ R^*(Y).
(vi) R_*(U − X) = U − R^*(X), R^*(U − X) = U − R_*(X).
(vii) R_*(R_*(X)) = R^*(R_*(X)) = R_*(X), R^*(R^*(X)) = R_*(R^*(X)) = R^*(X).

From the third expression of R_*(X) in (1), R_*(X) is the best approximation of X by means of the union of elementary sets Ei such that Ei ⊆ X. On the other hand, from the third expression of R^*(X) in (2), R^*(X) is the minimal superset of X by means of the union of elementary sets Ei. Finally, from the fourth expression of R_*(X) in (1), R_*(X) is the maximal subset of X by means of the intersection of complements of elementary sets U − Ei. From the fourth expression of R^*(X) in (2), R^*(X) is the best approximation of X by means of the intersection of complements of elementary sets U − Ei such that U − Ei ⊇ X. We have introduced only four kinds of expressions of the lower and upper approximations, but there are many other expressions [5, 10-13]. The interpretation of rough sets depends on the expression of the lower and upper approximations. Thus we may obtain more interpretations by adopting the other expressions. However, the interpretations described above seem appropriate for applications of rough sets. Those interpretations can be divided into two categories: interpretation of rough sets as classification of objects and interpretation of rough sets as approximation of a set. The fundamental properties listed in Table 1 are satisfied by the lower and upper approximations of classical rough sets.
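
As a quick sanity check (again a toy Python example with hypothetical data, not from the paper), the duality (vi) and the idempotence (vii) of Table 1 can be verified on the same kind of equivalence relation:

U = {1, 2, 3, 4}
R = {(a, b) for a in U for b in U if (a <= 2) == (b <= 2)}   # classes {1,2} and {3,4}
X = {1, 2, 3}

def R_of(x): return {y for y in U if (y, x) in R}
def lower(S): return {x for x in S if R_of(x) <= S}
def upper(S): return {y for x in S for y in R_of(x)}

assert lower(U - X) == U - upper(X) and upper(U - X) == U - lower(X)   # (vi)
assert lower(lower(X)) == lower(X) == upper(lower(X))                  # (vii)
assert upper(upper(X)) == upper(X) == lower(upper(X))                  # (vii)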

3   Classification-Oriented Generalization

3.1   Proposed Definitions

We generalize classical rough sets under the interpretation of rough sets as classification of objects. As described in the previous section, there are two expressions in this interpretation, i.e., the first and second expressions of (1) and (2). First we describe the generalization based on the first expressions of (1) and (2). In this case, we assume that there exists a relation P ⊆ U × U such that P(x) = {y ∈ U | (y, x) ∈ P} means a set of objects we intuitively identify as members of X from the fact x ∈ X. Then, if P(x) ⊆ X for an object x ∈ X, there is no objection against x ∈ X. In this case, x ∈ X is consistent with the intuitive knowledge based on the relation P. Such an object x ∈ X can be considered as a positive member of X. Hence the positive region of X can be defined as

P_*(X) = {x ∈ X | P(x) ⊆ X}.    (3)

On the other hand, by the intuition from the relation P, an object y ∈ P(x) for x ∈ X can be a member of X. Such an object y ∈ U is a possible member of X. Moreover, every object x ∈ X is evidently a possible member of X. Hence the possible region of X can be defined as

P^*(X) = X ∪ ⋃{P(x) | x ∈ X}.    (4)

Using the positive region P_*(X) and the possible region P^*(X), we can define a rough set of X as a pair (P_*(X), P^*(X)). We call such rough sets classification-oriented rough sets under a positively extensive relation P of X (CP-rough sets, for short). The relation P depends on the meaning of X whose positive and possible regions we are interested in. Thus, we cannot always define the CP-rough set of U − X by using the same relation P. To define a CP-rough set of U − X, we should introduce another relation Q ⊆ U × U such that Q(x) = {y ∈ U | (y, x) ∈ Q} means a set of objects we intuitively identify as members of U − X from the fact x ∈ U − X. Using Q we obtain the positive and possible regions of U − X:

Q_*(U − X) = {x ∈ U − X | Q(x) ⊆ U − X},    (5)
Q^*(U − X) = (U − X) ∪ ⋃{Q(x) | x ∈ U − X}.    (6)

Using those, we can define the certain and conceivable regions of X by

Q̄_*(X) = U − Q^*(U − X) = X ∩ (U − ⋃{Q(x) | x ∈ U − X}),    (7)
Q̄^*(X) = U − Q_*(U − X) = U − {x ∈ U − X | Q(x) ⊆ U − X}.    (8)

Those definitions correspond to the second expressions of (1) and (2). We can define another rough set of X as a pair (Q̄_*(X), Q̄^*(X)) with the certain region Q̄_*(X) and the conceivable region Q̄^*(X). We call this type of rough sets classification-oriented rough sets under a negatively extensive relation Q of X (CN-rough sets, for short). Let Q^{−1}(x) = {y ∈ U | (x, y) ∈ Q}. As is shown in [10], we have

⋃{Q(x) | x ∈ U − X} = {x ∈ U | Q^{−1}(x) ∩ (U − X) ≠ ∅}.    (9)

Therefore, we have

Q̄_*(X) = X ∩ {x ∈ U | Q^{−1}(x) ∩ (U − X) = ∅} = {x ∈ X | Q^{−1}(x) ⊆ X} = (Q^T)_*(X),    (10)
Q̄^*(X) = U − {x ∈ U − X | Q(x) ∩ X = ∅} = X ∪ {x ∈ U | Q(x) ∩ X ≠ ∅}
        = X ∪ ⋃{Q^{−1}(x) | x ∈ X} = (Q^T)^*(X),    (11)

where Q^T is the converse relation of Q, i.e., Q^T = {(x, y) | (y, x) ∈ Q}. Note that we have Q^T(x) = {y ∈ U | (y, x) ∈ Q^T} = {y ∈ U | (x, y) ∈ Q} = Q^{−1}(x).
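
The following Python sketch (a toy illustration under hypothetical data, not the paper's notation) implements (3), (4), (7) and (8) for a general relation and checks the identities (10) and (11): the CN-rough set under Q coincides with the CP-rough set under the converse relation Q^T.

def rel_class(R, x, U):
    return {y for y in U if (y, x) in R}                         # R(x)

def cp_positive(P, X, U):                                        # P_*(X), eq. (3)
    return {x for x in X if rel_class(P, x, U) <= X}

def cp_possible(P, X, U):                                        # P^*(X), eq. (4)
    return set(X) | {y for x in X for y in rel_class(P, x, U)}

def cn_certain(Q, X, U):                                         # eq. (7)
    return set(U) - cp_possible(Q, set(U) - set(X), U)

def cn_conceivable(Q, X, U):                                     # eq. (8)
    return set(U) - cp_positive(Q, set(U) - set(X), U)

def converse(Q):
    return {(y, x) for (x, y) in Q}

U = {1, 2, 3}
Q = {(1, 2), (2, 3)}           # neither reflexive nor symmetric
X = {1, 2}
print(cn_certain(Q, X, U) == cp_positive(converse(Q), X, U))       # True, eq. (10)
print(cn_conceivable(Q, X, U) == cp_possible(converse(Q), X, U))   # True, eq. (11)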



From (10) and (11), the classification-oriented rough sets under a negatively extensive relation Q can be seen as the classification-oriented rough sets under a positively extensive relation Q^T. By the same discussion, the classification-oriented rough sets under a positively extensive relation P can also be seen as the classification-oriented rough sets under a negatively extensive relation P^T. Moreover, when P = Q^T, the classification-oriented rough sets under a positively extensive relation P coincide with the classification-oriented rough sets under a negatively extensive relation Q.

3.2   Relationships to Previous Definitions

Rough sets were previously deﬁned under a general relation. We discuss the relationships of the proposed generalized rough sets with previous ones. First of all, let us review the previous generalized rough sets brieﬂy. In analogy to Kripke model in modal logic, Yao and Lin [13] and Yao [11, 12] proposed a generalized rough set with the following lower and upper approximations: T∗ (X) = {x ∈ U | T −1 (x) ⊆ X}, T ∗ (X) = {x | T −1 (x) ∩ X = ∅},

(12) (13)

where T is a general binary relation and T −1 (x) = {y ∈ U | (x, y) ∈ T }. In Yao [12], T −1 (x) is replaced with a neighborhood n(x) of x ∈ U . Slowi´ nski and Vanderpooten [10] proposed rough sets under a similarity relation S. They assume the reﬂexivity of S ((x, x) ∈ S, for each x ∈ U ). They classify all objects in U into the following four categories under the intuition that y ∈ U to which x ∈ U is similar must be in the same set containing x: (i) positive objects, i.e., objects x ∈ U such that x ∈ X and S −1 (x) ⊆ X, (ii) ambiguous objects of type I, i.e., objects x ∈ U such that x ∈ X but S −1 (x) ∩ (U − X) = ∅, (iii) ambiguous objects of type 2, i.e., objects x ∈ U such that x ∈ U − x but S −1 (x) ∩ X = ∅, and (iv) negative objects, i.e., x ∈ U − X and S −1 (x) ⊆ U − X. Based on this classiﬁcation, lower and upper approximations are deﬁned by (12) and (13) with substitution of S for T . Namely, the lower approximation is a collection of positive objects and the upper approximation is a collection of positive and ambiguous objects. Note that they expressed the upper approximation as S ∗ (X) = {S(x) | x ∈ X} which is equivalent to (13) with the substitution of S for T (see Slowi´ nski and Vanderpooten [10]), where S(x) = {y ∈ U | (y, x) ∈ S}. Greco, Matarazzo and Slowi´ nski [3] proposed rough sets under a dominance relation D. They assume the reﬂexivity of D ((x, x) ∈ D, for each x ∈ U ). Let X be a set of objects better than x. Under the intuition that y ∈ U by which x ∈ U is dominated must be better than x, i.e., y must be at least in X, they deﬁned lower and upper approximations as D∗ (X) = {x ∈ U | D(x) ⊆ X}, D∗ (X) = {D(x) | x ∈ X},

(14) (15)

where D(x) = {y ∈ U | (y, x) ∈ D}. It can be shown that D∗ (X) = {x | D−1 (x) ∩ X = ∅}, where D−1 (x) = {y ∈ U | (x, y) ∈ D}.



Finally, Inuiguchi and Tanino [5] assume that a set X corresponds to an ambiguous concept so that we may have a set X composed of objects that everyone agrees their membership and a set X composed of objects that only someone agrees their membership. A given X can be considered a set of objects whose memberships are evaluated by a certain person. Thus, we assume that X ⊆ X ⊆ X. Let S be a reﬂexive similarity relation. Assume that only objects which are similar to a member of X are possible candidates for members of X for any set X. Then we have X ⊆ {S(x) | x ∈ X} = {x ∈ U | S −1 (x) ∩ X = ∅}. (16) From the deﬁnitions of X and X, we can have U − X = U − X and U − X = U − X. Hence we also have X ⊆ {x ∈ U | S −1 (x) ⊆ X}.

(17)

We do not know X and X but X. With substitution of S for T , we obtain the lower approximation of X by (12) and the upper approximation of X by (13). Now, let us discuss relationships between the previous deﬁnitions and the proposed deﬁnitions. The previous deﬁnitions are formally agreed in the deﬁnition of upper approximation by (13) with substitution of a certain relation for T . However, the proposed deﬁnition (4) is similar but diﬀerent since they take a union with X. By this union, X ⊆ P ∗ (X) is guaranteed. In order to have this property of the upper approximation, Slowi´ nski and Vanderpooten [10], Greco, Matarazzo and Slowi´ nski [3] and Inuiguchi and Tanino [5] assumed the reﬂexivity of the binary relations S and D. The idea of the proposed CP-rough set follows that of rough sets under a dominance relation proposed by Greco, Matarazzo and Slowi´ nski [3]. On the other hand the idea of the proposed CN-rough set is similar to those of Slowi´ nski and Vanderpooten [10] and Inuiguchi and Tanino [5] since we may regard S as a negatively extensive relation, i.e., S(x) means a set of objects we intuitively identify as members of U − X from the fact x ∈ U − X. However, the diﬀerences are found in the restrictions, i.e., x ∈ U in (14) versus x ∈ X in (3). In other words, we take an intersection with X, i.e., P∗ (X) = X∩{x ∈ U | P (x) ⊆ X} and ¯ ∗ (X) = X ∩ {x ∈ U | Q−1 (x) ⊆ X}. This intersection guarantees X ⊆ P∗ (X) Q ¯ ∗ (X). In order to guarantee those relations, the reﬂexivity of the and X ⊆ Q relation is assumed in Slowi´ nski and Vanderpooten [10], Greco, Matarazzo and Slowi´ nski [3] and Inuiguchi and Tanino [5]. Finally, we remark that, in deﬁnitions by Slowi´ nski and Vanderpooten [10] and Inuiguchi and Tanino [5], S acts as a positively extensive relation P and a negatively extensive relation Q at the same time. 3.3

Fundamental Properties

The fundamental properties of the CP- and CN-rough sets can be obtained as in Table 2. In property (vii), we assume that P can be regarded as positively



Table 2. Fundamental properties of CP- and CN-rough sets ¯ ∗ (X) ⊆ X ⊆ Q ¯ ∗ (X). (i) P∗ (X) ⊆ X ⊆ P ∗ (X), Q ∗ ∗ (ii) P∗ (∅) = P (∅) = ∅, P∗ (U ) = P (U ) = U , ¯ ∗ (∅) = ∅, Q ¯ ∗ (U ) = Q ¯ ∗ (U ) = U . ¯ ∗ (∅) = Q Q (iii) P∗ (X ∩ Y ) = P∗ (X) ∩ P∗ (Y ), P ∗ (X ∪ Y ) = P ∗ (X) ∪ P ∗ (Y ), ¯ ∗ (X) ∩ Q ¯ ∗ (Y ), Q ¯ ∗ (X ∪ Y ) = Q ¯ ∗ (X) ∪ Q ¯ ∗ (Y ). ¯ ∗ (X ∩ Y ) = Q Q ∗ ∗ (iv) X ⊆ Y implies P∗ (X) ⊆ P∗ (Y ), P (X) ⊆ P (Y ), ¯ ∗ (Y ), Q ¯ ∗ (X) ⊆ Q ¯ ∗ (Y ). ¯ ∗ (X) ⊆ Q X ⊆ Y implies Q ∗ (v) P∗ (X ∪ Y ) ⊇ P∗ (X) ∪ P∗ (Y ), P (X ∩ Y ) ⊆ P ∗ (X) ∩ P ∗ (Y ), ¯ ∗ (X) ∪ Q ¯ ∗ (Y ), Q ¯ ∗ (X ∩ Y ) ⊆ Q ¯ ∗ (X) ∩ Q ¯ ∗ (Y ). ¯ ∗ (X ∪ Y ) ⊇ Q Q (vi) When Q is the converse of P , i.e., (x, y) ∈ P if and only if (y, x) ∈ Q, ¯ ∗ (X), P∗ (X) = U − Q∗ (U − X) = Q ¯ ∗ (X). P ∗ (X) = U − Q∗ (U − X) = Q ∗ (vii) X ⊇ P (P∗ (X)) ⊇ P∗ (X) ⊇ P∗ (P∗ (X)), X ⊆ P∗ (P ∗ (X)) ⊆ P ∗ (X) ⊆ P ∗ (P ∗ (X)), ¯ ∗ (X)) ⊇ Q ¯ ∗ (X) ⊇ Q ¯ ∗ (Q ¯ ∗ (X)), ¯ ∗ (Q X⊇Q ¯ ∗ (X)) ⊆ Q ¯ ∗ (X) ⊆ Q ¯ ∗ (Q ¯ ∗ (X)). ¯ ∗ (Q X⊆Q When P is transitive, P∗ (P∗ (X)) = P∗ (X), P ∗ (P ∗ (X)) = P ∗ (X). ¯ ∗ (X)) = Q ¯ ∗ (X), Q ¯ ∗ (Q ¯ ∗ (X)) = Q ¯ ∗ (X). ¯ ∗ (Q When Q is transitive, Q When P is reﬂexive and transitive, P ∗ (P∗ (X)) = P∗ (X) = P∗ (P∗ (X)), P∗ (P ∗ (X)) = P ∗ (X) = P ∗ (P ∗ (X)). When Q is reﬂexive and transitive, ¯ ∗ (X)) = Q ¯ ∗ (X) = Q ¯ ∗ (Q ¯ ∗ (X)), Q ¯ ∗ (Q ¯ ∗ (X)) = Q ¯ ∗ (X) = Q ¯ ∗ (Q ¯ ∗ (X)). ¯ ∗ (Q Q

extensive relations of P∗ (X), P ∗ (X), P∗ (P∗ (X)), P ∗ (P∗ (X)), P∗ (P ∗ (X)) and P ∗ (P ∗ (X)). Similarly, we assume also that Q can be regarded as negatively extensive relations of Q∗ (X), Q∗ (X), Q∗ (Q∗ (X)), Q∗ (Q∗ (X)), Q∗ (Q∗ (X)) and Q∗ (Q∗ (X)). Properties (i)–(v) are obvious. The proofs of (vi) and (vii) are given in Appendix. As shown in Table 2, (i)–(v) in Table 1 are preserved by classiﬁcation-oriented generalization. However (vi) and (vii) in Table 1 are conditionally preserved. A part of (vii) in Table 1 is unconditionally preserved. However the other part is satisﬁed totally when P is reﬂexive and transitive. When P is transitive, we have P ∗ (· · · (P ∗ (P∗ (X))) · · ·) = P ∗ (P∗ (X)) ⊆ X and P∗ (· · · (P∗ (P ∗ (X))) · · ·) = ¯ ∗ (· · · (Q ¯ ∗ (Q ¯ ∗ (X))) · · ·) = Q ¯ ∗ (Q ¯ ∗ (X)) ⊆ X P∗ (P ∗ (X)) ⊇ X. Similarly, we have Q ∗ ∗ ¯ ¯ ¯ ¯ ¯ and Q∗ (· · · (Q∗ (Q (X))) · · ·) = Q∗ (Q (X)) ⊇ X when Q is transitive. Those facts mean that the ﬁrst operation governs the relations with the original set when the relation is transitive. When relations P and Q represent the similarity between objects, P and Q can be equal each other. In such case, the condition for (vi) implies that P , or equivalently, Q is symmetric.

4   Approximation-Oriented Generalization

4.1   Proposed Definitions

In order to generalize classical rough sets under the interpretation of rough sets as approximation of a set by means of elementary sets, we introduce a family with a finite number of elementary sets on U, F = {F1, F2, . . . , Fp}, as a generalization of a partition U|R = {E1, E2, . . . , Ep}. Each Fi is a group of objects collected according to some specific meaning. There are two ways to define lower and upper approximations of a set X under a family F: one is approximation by means of the union of elementary sets Fi and the other is approximation by means of the intersection of complements of elementary sets U − Fi. Namely, from the third and fourth expressions of the lower and upper approximations in (1) and (2), the lower and upper approximations of a set X under F are defined straightforwardly in the following two ways:

F_*^∪(X) = ⋃{Fi | Fi ⊆ X, i = 0, 1, . . . , p},    (18)
F_*^∩(X) = ⋃{ ⋂_{i∈I}(U − Fi) | ⋂_{i∈I}(U − Fi) ⊆ X, I ⊆ {1, 2, . . . , p + 1} },    (19)
F^*_∪(X) = ⋂{ ⋃_{i∈I} Fi | ⋃_{i∈I} Fi ⊇ X, I ⊆ {1, 2, . . . , p + 1} },    (20)
F^*_∩(X) = ⋂{U − Fi | U − Fi ⊇ X, i = 0, 1, . . . , p},    (21)

where, for convenience, we define F0 = ∅ and Fp+1 = U. Because Fi ∩ Fj = ∅ for i ≠ j does not always hold, F_*^∩(X) (resp. F^*_∪(X)) is not always an intersection of complements of elementary sets U − Fi (resp. a union of elementary sets Fi) but a union of several maximal intersections Fj^∩(X), j = 1, 2, . . . , t1, of complements of elementary sets U − Fi (resp. an intersection of several minimal unions Fj^∪(X), j = 1, 2, . . . , t2, of elementary sets Fi), provided that ⋂_{i=1,...,p}(U − Fi) ⊆ X (resp. ⋃_{i=1,...,p} Fi ⊇ X) is satisfied. Namely, we have F_*^∩(X) = ⋃_{j=1,...,t1} Fj^∩(X) and F^*_∪(X) = ⋂_{j=1,...,t2} Fj^∪(X). We call a pair (F_*^∪(X), F^*_∪(X)) an approximation-oriented rough set by means of the union of elementary sets Fi under a family F (an AU-rough set, for short) and a pair (F_*^∩(X), F^*_∩(X)) an approximation-oriented rough set by means of the intersection of complements of elementary sets U − Fi under a family F (an AI-rough set, for short).
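
A brute-force Python sketch of (18)-(21) follows; the toy family and set are hypothetical, the enumeration over index sets is exponential and only meant to mirror the definitions, and F_{p+1} = U is added as in the text.

from itertools import combinations

def au_lower(F, X):                                          # eq. (18)
    return set().union(*[Fi for Fi in F if Fi <= X])

def ai_upper(F, X, U):                                       # eq. (21); U - F_0 = U
    out = set(U)
    for Fi in F:
        if set(U) - Fi >= X:
            out &= set(U) - Fi
    return out

def ai_lower(F, X, U):                                       # eq. (19)
    Fext = list(F) + [set(U)]                                 # F_{p+1} = U
    out = set()
    for k in range(len(Fext) + 1):
        for combo in combinations(Fext, k):
            inter = set(U)
            for Fi in combo:
                inter &= set(U) - Fi
            if inter <= X:
                out |= inter
    return out

def au_upper(F, X, U):                                       # eq. (20)
    Fext = list(F) + [set(U)]                                 # F_{p+1} = U
    out = set(U)
    for k in range(len(Fext) + 1):
        for combo in combinations(Fext, k):
            union = set().union(*combo) if combo else set()
            if union >= X:
                out &= union
    return out

U = {1, 2, 3, 4}
F = [{1, 2}, {2, 3}, {4}]
X = {1, 2}
print(au_lower(F, X), ai_lower(F, X, U))      # {1, 2} {1}
print(au_upper(F, X, U), ai_upper(F, X, U))   # {1, 2} {1, 2, 3}

Note that F_*^∩(X) and F^*_∪(X) need the enumeration over index sets precisely because the elementary sets Fi may overlap, as explained above.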

4.2   Relationships to Previous Definitions

So far, rough sets have been generalized also under a ﬁnite cover and neighborhoods. In this subsection, we discuss relationships of AU- and AI-rough sets with the previous rough sets. First, we describe previous deﬁnitions. Bonikowski, Bryniarski and Wybraniec-Skardowska [1] proposed rough sets under a ﬁnite cover C = {C1 , C2 . . . , Cp } such that i=1,2,...,p Ci = U . They deﬁned the lower approximation of X ⊆ U by C∗ (X) = {Ci ∈ C | Ci ⊆ X}. (22) In order to deﬁne the upper approximation, we should deﬁne the minimal description of an object x ∈ U and the boundary of X. The minimal description of an object x ∈ U is a family deﬁned by



M d(x) = {Ci ∈ C | x ∈ Ci , ∀Cj ∈ C(x ∈ Cj ∧ Cj ⊆ Ci → Ci = Cj )}. (23) Then the boundary of X is a family deﬁned by Bn(X) = {M d(x) | x ∈ X, x ∈ C∗ (X)}. The upper approximation of X is deﬁned by Bn(X) ∪ C∗ (X). (24) C ∗ (X) = Owing to M d(x), we have C ∗ (X) ⊆ C∗ (X) ∪ {Ci | Ci ∩ (X − C∗ (X)) = ∅} ⊆ {Ci | Ci ∩ X = ∅}.(25) Yao [12] proposed rough sets under neighborhoods {n(x) | x ∈ U } where n : U → 2U and n(x) is interpreted as the neighbothood of x ∈ U . Three kinds of rough sets were proposed. One of them has been described in subsection 3.2. The lower and upper approximations in the second kind of rough sets are ν∗ (X) = {n(x) | x ∈ U, n(x) ⊆ X} = {x ∈ U | ∃y(x ∈ n(y) ∧ n(y) ⊆ X)},

(26)

ν ∗ (X) = U − ν∗ (U − X) = {x ∈ U | ∀y(x ∈ n(y) → n(y) ∩ X = ∅)}. (27) As shown above, those lower and upper approximations are closely related with interior and closure operations in topology. The upper and lower approximations in the third kind of rough sets are deﬁned as follows: ν ∗ (X) = {n(x) | x ∈ U, n(x) ∩ X = ∅} = {x ∈ U | ∃y(x ∈ n(y) ∧ n(y) ∩ X = ∅)}, ν∗ (X)

∗

= U − ν (U − X) = {x ∈ U | ∀y(x ∈ n(y) → n(y) ⊆ X)}.

(28) (29)

Inuiguchi and Tanino [5] also proposed rough sets under a cover C. They deﬁned upper and lower approximations as (30) C∗ (X) = {Ci | Ci ⊆ X, i = 1, 2, . . . , p}, C ∗ (X) = U − C∗ (U − X) = {U − Ci | U − Ci ⊇ X, i = 1, 2, . . . , p}. (31) Now let us discuss the relationships with AU- and AI-rough sets. When F = ∗ ∗ ∗ ∗ C, we have F∗∪ (X) = C∗ (X) = C∗ (X),∪ F∪ (X) ⊆ C (X) and F∩ (X) = C (X). ∗ We also have C (X) ⊇ j=1,2,...,t2 Fj (X). The equality does not hold always because we have the possibility of Fj∪ (X) ⊂ C∗ (X) ∪ x∈X−C∗ (X) C(x), where C(x) is an arbitrary Ci ∈ M d(x). When F = {n(x) | x ∈ U }, we have F∗∪ (X) = ν∗ (X) ⊇ ν∗ (X) and F∩∗ (X) = ν ∗ (X) ⊆ ν ∗ (X). Generally, F∗∩ (X) (resp. F∪∗ (X)) has no relation with ν∗ (X) and ν∗ (X) (resp. ν ∗ (X) and ν ∗ (X)). From those relations, we know that F∗∪ (X) and F∗∩ (X) are maximal approximations among six lower approximations while F∪∗ (X) and F∩∗ (X) are minimal approximations among six upper approximations. This implies that the proposed lower and upper


Table 3. Fundamental properties of AU- and AI-rough sets

(i) F∗∪ (X) ⊆ X ⊆ F∪∗ (X), F∗∩ (X) ⊆ X ⊆ F∩∗ (X). ∗ ∗ = F∗∩ (∅) (ii) F∗∪ (∅) = ∅, F∪ (U ) = F∩ (U ) = U . When F = i=1,...,p Fi = U , F∩∗ (∅) = ∅, F∗∪ (U ) = U . When F = i=1,...,p Fi = ∅, F∪∗ (∅) = ∅, F∗∩ (U ) = U . (iii) F∗∪ (X ∩ Y ) ⊆ F∗∪ (X) ∩ F∗∪ (Y ), F∗∩ (X ∩ Y ) = F∗∩ (X) ∩ F∗∩ (Y ), F∪∗ (X ∪ Y ) = F∪∗ (X) ∪ F∪∗ (Y ), F∩∗ (X ∪ Y ) ⊇ F∩∗ (X) ∪ F∩∗ (Y ). When Fi ∩ Fj = ∅, for any i = j, F∗∪ (X ∩ Y ) = F∗∪ (X) ∩ F∗∪ (Y ), F∩∗ (X ∪ Y ) = F∩∗ (X) ∪ F∩∗ (Y ). (iv) X ⊆ Y implies F∗∪ (X) ⊆ F∗∪ (Y ), F∗∩ (X) ⊆ F∗∩ (Y ), X ⊆ Y implies F∪∗ (X) ⊆ F∪∗ (Y ), F∩∗ (X) ⊆ F∩∗ (Y ). (v) F∗∪ (X ∪ Y ) ⊇ F∗∪ (X) ∪ F∗∪ (Y ), F∗∩ (X ∪ Y ) ⊇ F∗∩ (X) ∪ F∗∩ (Y ), F∪∗ (X ∩ Y ) ⊆ F∪∗ (X) ∩ F∪∗ (Y ), F∩∗ (X ∩ Y ) ⊆ F∩∗ (X) ∩ F∩∗ (Y ). (vi) F∗∪ (U − X) = U − F∩∗ (X), F∗∩ (U − X) = U − F∪∗ (X), F∪∗ (U − X) = U − F∗∩ (X), F∩∗ (U − X) = U − F∗∪ (X). (vii) F∗∪ (F∗∪ (X)) = F∗∪ (X), F∗∩ (F∗∩ (X)) = F∗∩ (X), F∪∗ (F∪∗ (X)) = F∪∗ (X), F∩∗ (F∩∗ (X)) = F∩∗ (X), F∪∗ (F∗∪ (X)) = F∗∪ (X), F∗∩ (F∩∗ (X)) = F∩∗ (X), F∩∗ (F∗∩ (X)) ⊇ F∗∩ (X), F∩∗ (Fj∩ (X)) = Fj∩ (X), j = 1, 2, . . . , t1 , F∗∪ (F∪∗ (X)) ⊆ F∪∗ (X), F∗∪ (Fj∪ (X)) = Fj∪ (X), j = 1, 2, . . . , t2 , When Fi ∩ Fj = ∅, for any i = j, F∩∗ (F∗∩ (X)) = F∗∩ (X), F∗∪ (F∪∗ (X)) = F∪∗ (X).

approximations are better approximations of X so that they are suitable for our interpretation of rough sets. Moreover, the proposed deﬁnitions are applicable under a more general setting since we neither assume that F is a cover nor that p = Card(F ) ≤ n = Card(U ), i.e., the number of elementary sets Fi is not less than the number of objects. 4.3

Fundamental Properties

The fundamental properties of AU- and AI-rough sets are shown in Table 3. Properties (i), (iv) and (v) in Table 1 are preserved for both of AU- and AI-rough sets. Parts of (ii) and (iii) in Table 1 are preserved, however, some conditions are necessary for full preservation. The duality, i.e., property (vi) in Table 1 is preserved between upper (resp. lower) approximations of AU-rough sets (resp. AI-rough sets) and lower (resp. upper) approximations of AI-rough sets (resp. AU-rough sets). Property (vii) in Table 1 is almost preserved. F∩∗ (F∗∩ (X)) = F∗∩ (X) (resp. F∗∪ (F∪∗ (X)) = F∪∗ (X)) is not always preserved because F∗∩ (X) (resp. F∪∗ (X)) is not always a union of elementary sets Fi (resp. an intersection of complements of elementary sets U − Fi ). However, for the minimal union Fj∪ (resp. Fj∩ ), the property corresponding to (vii) holds. The proof of property (iii) is given in Appendix. The other properties can be proved easily.
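
Property (vi) of Table 3, for instance, can be checked numerically with the operators sketched above; the following compact Python fragment (hypothetical toy data) re-states (18) and (21) and tests one of the duality equations.

U = {1, 2, 3, 4}
F = [{1, 2}, {2, 3}, {4}]
X = {1, 2}

def au_lower(S):                      # eq. (18)
    return set().union(*[Fi for Fi in F if Fi <= S])

def ai_upper(S):                      # eq. (21)
    out = set(U)
    for Fi in F:
        if set(U) - Fi >= S:
            out &= set(U) - Fi
    return out

print(au_lower(U - X) == U - ai_upper(X))   # True: F_*^u(U - X) = U - F^*_cap(X)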



Table 4. Relationships between two kinds of rough sets (a) When P is reﬂexive, P∗ (X) ⊆ P∗∪ (X) = P ∗ (P∗ (X)) ⊆ X ⊆ P∪∗ (X) ⊆ P ∗ (X). ¯ ∗ (Q ¯ ∗ (X)) ⊇ X ⊇ Q∩ ¯ ¯ ∗ (X) ⊇ Q∗∩ (X) = Q When Q is reﬂexive, Q ∗ (X) ⊇ Q∗ (X). (b) When P is transitive, P∪∗ (X) ⊇ P ∗ (X) ⊇ X ⊇ P∗ (X) ⊇ P∗∪ (X). ∗ ¯ ¯∗ When Q is transitive, Q∩ ∗ (X) ⊆ Q∗ (X) ⊆ X ⊆ Q (X) ⊆ Q∩ (X). (c) When P is reﬂexive and transitive, P∗ (X) = P∗∪ (X) = P ∗ (P∗ (X)) ⊆ X ⊆ P∪∗ (X) = P ∗ (X). When Q is reﬂexive and transitive, ¯ ∗ (Q ¯ ∗ (X)) ⊇ X ⊇ Q∩ ¯ ¯ ∗ (X) = Q∗∩ (X) = Q Q ∗ (X) = Q∗ (X).

5   Relationships between Two Kinds of Rough Sets

Given a relation P , we may deﬁne a family by P = {P (x) | x ∈ U }.

(32)

Therefore, when a positively extensive relation P is given, we obtain not only CP-rough sets but also AU- and AI-rough sets. The same holds for a negatively extensive relation Q: by a family Q = {Q(x) | x ∈ U}, we obtain AU- and AI-rough sets. The relationships between CP-/CN-rough sets and AU-/AI-rough sets are listed in Table 4. In Table 4 we recognize a strong relation between CP- and AU-rough sets as well as a strong relation between CN- and AI-rough sets. The proofs of (a) and (b) in Table 4 are given in the Appendix.
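
The inclusions in Table 4 (a) can likewise be checked numerically. The Python sketch below (a toy reflexive, non-transitive relation; all data hypothetical) builds the family P of (32) and compares the CP approximations (3)-(4) with the AU approximations (18) and (20).

from itertools import combinations

U = {1, 2, 3}
P = {(x, x) for x in U} | {(2, 1), (3, 2)}            # reflexive, not transitive
X = {1, 2}

def P_of(x):
    return {y for y in U if (y, x) in P}

family = [P_of(x) for x in U]                          # the family P of (32)
cp_lower = {x for x in X if P_of(x) <= X}              # P_*(X), eq. (3)
cp_upper = set(X) | {y for x in X for y in P_of(x)}    # P^*(X), eq. (4)
au_lower = set().union(*[F for F in family if F <= X]) # eq. (18)

au_upper = set(U)                                      # eq. (20); F_{p+1} = U
for k in range(1, len(family) + 1):
    for combo in combinations(family, k):
        union = set().union(*combo)
        if union >= X:
            au_upper &= union

print(cp_lower <= au_lower <= X <= au_upper <= cp_upper)   # True
print(cp_lower, au_lower, au_upper, cp_upper)              # {1} {1, 2} {1, 2} {1, 2, 3}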

6   Rule Extraction

6.1   Decision Table and Problem Setting

In this section, we discuss rule extraction from decision tables based on generalized rough sets. Consider a decision table I = ⟨U, C ∪ {d}, V, ρ⟩, where U = {x1, x2, . . . , xn} is a universe of objects, C is the set of all condition attributes, d is a unique decision attribute, V = ⋃_{a∈C∪{d}} Va, Va is a finite set of attribute values of attribute a, and ρ : U × (C ∪ {d}) → V is the information function such that ρ(x, a) ∈ Va for all a ∈ C ∪ {d}. By the decision attribute value ρ(x, d), we assume that we can group objects into several classes Dk, k = 1, 2, . . . , m. The classes Dk, k = 1, 2, . . . , m do not necessarily form a partition but a cover. Namely, Dk ∩ Dj = ∅ does not always hold, but ⋃_{k=1,...,m} Dk = U. Corresponding to Dk, k = 1, 2, . . . , m, we assume that a relation Pa ⊆ Va × Va is given for each condition attribute a ∈ C so that if x ∈ Dk and (y, x) ∈ Pa then we intuitively conclude y ∈ Dk from the viewpoint of attribute a. For each A ⊆ C, we define a positively extensive relation by PA = {(x, y) | (ρ(x, a), ρ(y, a)) ∈ Pa, ∀a ∈ A}.

(33)

Moreover, we also assume that a relation Qa ⊆ Va × Va is given for each condition attribute a ∈ C so that if x ∈ U − Dk and (y, x) ∈ Qa then we intuitively conclude y ∈ U − Dk from the viewpoint of attribute a. For each A ⊆ C, we define a negatively extensive relation by QA = {(x, y) | (ρ(x, a), ρ(y, a)) ∈ Qa, ∀a ∈ A}.

(34)

For the purpose of comparison, we may build finite families based on the relations Pa and Qa as described below. We can build families using PA and QA as P = {PA(x) | x ∈ U, A ⊆ C},

(35)

Q = {QA (x) | x ∈ U, A ⊆ C},

(36)

where PA(x) = {y ∈ U | (y, x) ∈ PA} and QA(x) = {y ∈ U | (y, x) ∈ QA}. For A = {a1, a2, . . . , as} and v = (v1, v2, . . . , vs) ∈ Va1 × Va2 × · · · × Vas, let us define

ZA(v) = {x ∈ U | (ρ(x, ai), vi) ∈ Pai, i = 1, 2, . . . , s},    (37)
WA(v) = {x ∈ U | (ρ(x, ai), vi) ∈ Qai, i = 1, 2, . . . , s}.    (38)

Using those sets, we may build the following families:

Z = {ZA(v) | v ∈ Va1 × Va2 × · · · × Vas, A = {a1, a2, . . . , as} ⊆ C},    (39)
W = {WA(v) | v ∈ Va1 × Va2 × · · · × Vas, A = {a1, a2, . . . , as} ⊆ C}.    (40)
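
As a concrete reading of (33) and (37), here is a hedged Python sketch: a hypothetical decision table is encoded as a dict of dicts, and each Pa is given as a predicate on attribute values; the helper names are mine, not the paper's.

def build_PA(U, A, rho, Pa):
    # P_A of (33): (x, y) in P_A iff (rho(x, a), rho(y, a)) in P_a for every a in A
    return {(x, y) for x in U for y in U
            if all(Pa[a](rho[x][a], rho[y][a]) for a in A)}

def Z_A(U, A, v, rho, Pa):
    # Z_A(v) of (37): objects x with (rho(x, a_i), v_i) in P_{a_i} for every i
    return {x for x in U
            if all(Pa[a](rho[x][a], vi) for a, vi in zip(A, v))}

rho = {"x1": {"a": 1, "b": 2}, "x2": {"a": 2, "b": 2}, "x3": {"a": 3, "b": 1}}
U = set(rho)
Pa = {"a": lambda u, w: u <= w, "b": lambda u, w: u <= w}   # ordinal reading of both attributes
A = ["a", "b"]
print(Z_A(U, A, (2, 2), rho, Pa))                  # {'x1', 'x2'}
print(("x1", "x2") in build_PA(U, A, rho, Pa))     # True: x1 is weakly below x2 on both attributes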

6.2   Rule Extraction Based on Positive and Certain Regions

As shown in (10), a positive region P_*(X) and a certain region Q̄_*(X) have the same representation. The difference is the adopted relation, i.e., P versus Q^T. Therefore the rule extraction method is the same. In this subsection, we describe the rule extraction method based on a positive region P_*(X). The rule extraction method based on a certain region Q̄_*(X) is obtained by replacing the relation P with the relation Q^T. We discuss the extraction of decision rules from the decision table I = ⟨U, C ∪ {d}, V, ρ⟩ described in the previous subsection. First, let us discuss the type of decision rule corresponding to the positive region (3). For any object y ∈ U satisfying the condition of the decision rules, we require y ∈ Dk and PC(y) ⊆ Dk. Dk should not appear in the condition part since we would like to infer the members of Dk. Considering those requirements, we should explore suitable conditions of the decision rules. When we confirm y = x for an object x ∈ PC∗(Dk), we may obviously conclude y ∈ PC∗(Dk). Since each object is characterized by the condition attributes a ∈ C, y = x can be conjectured from ρ(y, a) = ρ(x, a), ∀a ∈ C. However, it is possible that there exists z ∈ U such that ρ(z, a) = ρ(x, a), ∀a ∈ C but z ∉ Dk. When PC is reflexive, we always have x ∉ PC∗(Dk) if such an object z ∈ U exists. Since we do not assume reflexivity, x ∈ PC∗(Dk) is possible even in the case such an object z ∈ U exists. From these observations, we obtain the following type of decision rule based on x ∈ PC∗(Dk) only when there is no object z ∈ U such that ρ(z, a) = ρ(x, a), ∀a ∈ C but z ∉ Dk:

if ρ(y, a1) = v1 and · · · and ρ(y, al) = vl then y ∈ Dk,



where vj = ρ(x, ai ), i = 1, 2, . . . , l and we assume C = {a1 , a2 , . . . , al }. Let us call this type of the decision rule, an identity if-then rule (for short, id-rule). When PC is transitive, we may conclude y ∈ PC ∗ (Dk ) from the fact that (y, x) ∈ PC and x ∈ PC ∗ (Dk ). This is because we have PC (y) ⊆ PC (x) ⊆ Dk and y ∈ PC (x) ⊆ X from transitivity and the fact x ∈ PC ∗ (Dk ). In this case, we may have the following type of decision rule, if (ρ(y, a1 ), v1 ) ∈ Pa1 and · · · and (ρ(y, al ), vl ) ∈ Pal then y ∈ Dk . This type of if-then rule is called a relational if-then rule (for short, R-rule). When the relation PC is reﬂexive and transitive, an R-rule includes the corresponding id-rule. As discussed above, based on an object x ∈ PC ∗ (Dk ), we can extract id-rules, and R-rules if PC is transitive. We prefer to obtain decision rules with minimum length conditions. To this end, we should calculate the minimal condition attribute set A ⊆ C such that x ∈ PA ∗ (Dk ). Let A = {a1 , a2 , . . . , aq } be such a minimal condition attribute set. Then we obtain the following id-rule when there is no object z ∈ U − Dk such that ρ(z, ai ) = vi , i = 1, 2, . . . , q, if ρ(y, a1 ) = v1 and · · · and ρ(y, aq ) = vq then y ∈ Dk , where vi = ρ(x, ai ), i = 1, 2, . . . , q. When PC is transitive, we obtain an R-rule, if (ρ(y, a1 ), v1 ) ∈ Pa1 and · · · and (ρ(y, aq ), vq ) ∈ Paq then y ∈ Dk . Note that PC is transitive if and only if PA is transitive for each A ⊆ C. Moreover, the minimal condition attribute set is not always unique and, for each minimal condition attribute set, we obtain id- and R-rules. Through the above procedure, we will obtain many decision rules. The decision rules are not always independent. Namely, we may have two decision rules ‘if Cond1 then Dec’ and ‘if Cond2 then Dec’ such that Cond1 implies Cond2 . Eventually, the decision rule ‘if Cond1 then Dec’ is superﬂuous and then omitted. This is diﬀerent from the rule extraction based on the classical rough set. For extracting all decision rules with minimal length conditions, we can utilize a decision matrix [8] with modiﬁcations. Consider the extraction of decision rules concluding y ∈ Dk . We begin with the calculation of PC ∗ (Dk ). Based on the obtained PC ∗ (Dk ), we deﬁne two disjoint index sets K + = {i | xi ∈ PC ∗ (Dk )} and K − = {i | xi ∈ Dk }. The decision matrix M id (I) = (Mijid ) is deﬁned by Mijid = {(a, v˜i ) | v˜i = ρ(xi , a), (ρ(xj , a), ρ(xi , a)) ∈ Pa , ρ(xj , a) = ρ(xi , a), a ∈ C}, i ∈ K + , j ∈ K − .

(41)

Note that the size of the decision matrix M id (I) is Card(K + ) × Card(K − ). An element (a, v˜i ) of Mijid corresponds to a condition ‘ρ(y, a) = v˜i ’ which is not satisﬁed with y = xj but with y = xi . Moreover Mijid (I) can be empty and in this case, we cannot obtain any id-rule from xi ∈ PC ∗ (Dk ). When PC is transitive, we should consider another decision matrix for Rrules. The decision matrix M R (I) = (MijR ) is deﬁned by MijR = {(a, v˜i ) | v˜i = ρ(xi , a), (ρ(xj , a), ρ(xi , a)) ∈ Pa , a ∈ C}, i ∈ K +, j ∈ K −.

(42)



Note that the size of the decision matrix M R (I) is Card(K + ) × Card(K − ). An element (a, v˜i ) of MijR shows a condition ‘(ρ(y, a), v˜i ) ∈ Pa ’ which is not satisﬁed with y = xj but with y = xi . Let Id((a, v)) be a statement ‘ρ(x, a) = v’ and P˜ ((a, v)) a statement ‘(ρ(x, a), v) ∈ Pa ’. Then all minimal conditions in all possible decision rules with respect to Dk are obtained as conjunctive terms in the disjunctive normal form of the following logical function: Id(Mijid ), if PC is not transitive, + − j∈K i∈K Bk = (43) P˜ (MijR ) , Id(Mijid ) ∨ i∈K + j∈K − i∈K + j∈K − if PC is transitive. By the construction, it is obvious that z ∈ Dk does not satisfy the conditions of decision rules and that x ∈ PC ∗ (Dk ) satisﬁes them. Moreover we can prove that z ∈ Dk − PC ∗ (Dk ) does not satisfy the conditions. The proof is as follows. Let z ∈ Dk − PC ∗ (Dk ) and let y ∈ Dk such that (ρ(y, a), ρ(z, a)) ∈ Pa for all a ∈ C. The existence of y is guaranteed by the deﬁnition of z. First consider the condition of an arbitrary id-rule, ‘ρ(w, a1 ) = v1 , ρ(w, a2 ) = v2 and · · · and ρ(w, aq ) = vq ’, where A = {a1 , a2 , . . . , aq } ⊆ C. Suppose z satisﬁes this condition, i.e., ‘ρ(z, a1 ) = v1 , ρ(z, a2 ) = v2 and · · · and ρ(z, aq ) = vq ’. Since vi = ρ(x, ai ), i = 1, 2, . . . , q, the fact z ∈ Dk − PC ∗ (Dk ) implies that (ρ(y, a), ρ(x, a)) ∈ Pa for all a ∈ A, i.e., (y, x) ∈ PA (y ∈ PA (x)). From y ∈ Dk , we have PA (x) ⊆ Dk . On the other hand, by the construction of Mijid , for each y ∈ Dk , there exists a ∈ A such that (ρ(y, a), ρ(x, a)) ∈ Pa . This implies PA (x) ⊆ Dk . A contradiction. Thus, for each id-rule, there is no z ∈ Dk − PC ∗ (Dk ) satisfying the condition. Next, assuming that PC is transitive, we consider the condition of an arbitrary R-rule, ‘(ρ(w, a1 ), v1 ) ∈ Pa1 , and · · · and (ρ(w, aq ), vq ) ∈ Paq ’, where {a1 , a2 , . . . , aq } ⊆ C. Suppose z satisﬁes this condition, i.e., ‘(ρ(z, a1 ), v1 ) ∈ Pa1 , and · · · and (ρ(z, aq ), vq ) ∈ Paq ’. From the transitivity and the fact (ρ(y, a), ρ(z, a)) ∈ Pa for all a ∈ C, we have ‘(ρ(y, a1 ), v1 ) ∈ Pa1 , and · · · and (ρ(y, aq ), vq ) ∈ Paq ’. This contradicts the construction of the condition of R-rule. Therefore, for each R-rule, there is no z ∈ Dk − PC ∗ (Dk ) satisfying the condition. The rule extraction method based on certain region is obtained by replacing T T PC , PA and Pa of the above discussion with QT C , QA and Qa , respectively. 6.3

Rule Extraction Based on Lower Approximations of AU-Rough Sets

As in the previous subsection, we discuss the extraction of decision rules from the decision table I = U, C ∪ {d}, V, ρ. First, let us discuss the type of decision rule corresponding to the lower approximation of AU-rough set (18). For any object y ∈ U satisfying the condition of the decision rules, we should have y ∈ Fi and



Fi ⊆ Dk . When we conﬁrm y ∈ Fi for an elementary set Fi ∈ F such that Fi ⊆ Dk , we may obviously conclude y ∈ Dk . From this fact, when Fi ⊆ Dk we have the following type of decision rule; if y ∈ Fi then y ∈ Dk . For the decision table I = U, C ∪ {d}, V, ρ, we consider two cases; (a) a case when F = P and (b) a case when F = Z. In those cases the corresponding decision rules from the facts PA (x) ⊆ X and ZA (v) ⊆ X become Case (a): if (ρ(y, a1 ), v¯1 ) ∈ Pa1 and · · · and (ρ(y, as ), v¯s ) ∈ Pas then y ∈ Dk , Case (b): if (ρ(y, a1 ), v1 ) ∈ Pa1 and · · · and (ρ(y, as ), vs ) ∈ Pas then y ∈ Dk , where A = {a1 , a2 , . . . , as }, v¯i = ρ(x, ai ), i = 1, 2, . . . , s and v = (v1 , v2 , . . . , vs ). By the construction of P and Z, we have PA (x) ⊇ PA (x) and ZA (v ) ⊇ ZA (v) for A ⊆ A, where A = {ak1 , ak2 , . . . , akt } ⊆ A and v = (vk1 , vk2 , . . . , vkt ) is a sub-vector of v. Therefore, the decision rules with respect to minimal attribute sets A are suﬃcient since they cover all decision rules with larger attribute sets A ⊇ A . By this observation, we enumerate all decision rules with respect to minimal attribute sets. The enumeration can be done by a modiﬁcation of the decision matrix [8]. We describe the method in Case (a). Consider an enumeration of all decision rules with respect to a decision class Dk . To apply the decision matrix method, we ﬁrst obtain a CP-rough set P∗ (Dk ) = {x ∈ U | PC (x) ⊆ Dk }. Using K + = {i | xi ∈ P∗ (Dk )} and K − = {i | xi ∈ Dk }, we deﬁne the decision matrix ˜ (I) = (M ˜ ij ) by M ˜ ij = {(a, v) | v = ρ(xi , a), (ρ(xj , a), ρ(xi , a)) ∈ Pa , a ∈ C}, M i ∈ K +, j ∈ K −.

(44)

Then all minimal conditions in all possible decision rules with respect to Dk are obtained as conjunctive terms in the disjunctive normal form of the following logical function:

B̃k = ⋁_{i∈K+} ⋀_{j∈K−} P̃(M̃ij).    (45)
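
The minimal conditions can be enumerated mechanically. The Python sketch below is an illustrative brute-force stand-in (not the paper's procedure, and the exact nesting of the operators in (43) and (45) is as reconstructed above): for one object of K+, each matrix entry is the set of conditions separating it from one object of K−, and a minimal rule condition is a minimal set of conditions that hits every entry.

from itertools import combinations

def minimal_conditions(row):
    # row: one matrix row, i.e. a list with one set of candidate conditions per j in K-;
    # an empty entry means no rule can be obtained from this object.
    if any(not entry for entry in row):
        return []
    universe = sorted(set().union(*row), key=repr)
    minimal = []
    for r in range(1, len(universe) + 1):
        for combo in combinations(universe, r):
            s = set(combo)
            if all(s & entry for entry in row) and not any(m <= s for m in minimal):
                minimal.append(s)
    return minimal

# toy entries: (attribute, value) pairs, purely hypothetical
row = [{("Pr", "low")}, {("Pr", "low"), ("Fu", "low")}]
print(minimal_conditions(row))        # [{('Pr', 'low')}]

Taking the union of these minimal condition sets over all objects of K+ corresponds to the disjunction over i ∈ K+ in (45).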

In Case (b), we calculate K(Dk ) = {v ∈ V1 ×V2 ×· · · Vl | Z(v) ⊆ Dk } instead of P∗ (Dk ). Number elements of K(Dk ) such that K(Dk ) = {v 1 , v 2 , . . . , v r }, ˜ (I) = (M ˜ ) by where r = Card(K(Dk )). Then we deﬁne the decision matrix M ij ˜ = {(ad , v i ) | (ρ(xj , ad ), v i ) ∈ Pa , ad ∈ C}, i ∈ {1, . . . , r}, j ∈ K − , (46) M d ij d d where vdi is the d-th component of v i , i.e., v i = (v1i , v2i , . . . , vli ). All minimal conditions in all possible decision rules with respect to Dk are obtained as conjunctive terms in the disjunctive normal form of the following logical function: ˜ ). ˜ = P˜ (M (47) B k

ij

i∈{1,2,...,r} j∈K −


6.4   Rule Extraction Based on Lower Approximations of AI-Rough Sets

Let us discuss the type of decision rule corresponding to the lower approximation of AI-rough set (19). For any object y ∈ U satisfying the condition of the decision rules, y ∈ j=1,2,...,t1 Fj∩ . Therefore, for each Fj∩ , we have the following type of decision rule: if y ∈ Fj∩ then y ∈ Dk . By the deﬁnition, Fj∩ is represented by an intersection of a number of complementary sets of elementary sets, i.e., i∈I (U − Fi ) for a certain I ⊆ {1, 2, . . . , p}. Therefore the condition part of the decision rule can be represented by ‘y ∈ Fi1 , y ∈ Fi2 ,..., and y ∈ FiCard(I) ’, where I = {i1 , i2 , . . . , iCard(I) }. Since each Fiz is a conjunction of sentences (ρ(y, ac ), vc ) ∈ Qac , ac ∈ C in our problem setting, at ﬁrst glance, the condition of the above decision rule seems to be very long. Note that we should use the relation Qa , a ∈ C. Otherwise, it is not suitable for the meaning of the relation Pa because we approximate Dk by monotonone set operations of U − Pa (va ), a ∈ C, va ∈ Va . Accordingly, we consider two cases; (c) F = Q and (d) F = W. By the construction of Q and W, the condition part of the decision rule becomes simpler. This relies on the following fact. Suppose Fi∩ = (U − (Qa1 (x) ∩ Qa2 (x))) ∩ (U − (Qa3 (y) ∩ Qa4 (y))), where x, y ∈ U , {a1 , a2 }, {a3 , a4 } ⊆ C and it is possible that x = y and {a1 , a2 } ∩ {a3 , a4 } = ∅. Then we have Fi∩ = ((U −Qa1 (x))∩(U −Qa3 (y)))∪((U −Qa1 (x))∩(U −Qa4 (y)))∪((U −Qa2 (x))∩(U − sub Qa3 (y)))∪((U −Qa2 (x))∩(U −Qa4 (y))). Let Fi1 = (U −Qa1 (x))∩(U −Qa3 (y)), sub sub Fi2 = (U − Qa1 (x)) ∩ (U − Qa4 (y)), Fi3 = (U − Qa2 (x)) ∩ (U − Qa3 (y)) and sub sub sub sub sub = (U − Qa2 (x)) ∩ (U − Qa4 (y)). We have Fi∩ = Fi1 ∪ Fi2 ∪ Fi3 ∪ Fi4 . Fi4 ∩ This implies that the decision rule ‘if y ∈ Fi then y ∈ Dk ’ can be decomposed sub to ‘if y ∈ Fij then y ∈ Dk ’, j = 1, 2, 3, 4. From this observation, for Fj∩ , j = 1, 2, . . . , t1 , we have the following body of if-then rules: sub if y ∈ Fji then y ∈ Dk , i = 1, 2, . . . , i(j), j = 1, 2, . . . , t1 , ∩ sub sub where Fj = i=1,2,...,i(j) Fji . It can be seen that Fji , i = 1, 2, . . . , i(j), j = 1, 2, . . . , t include all maximal sets of the form 1 a⊆C, x∈U (U − Qa (x)) such that (U − Q (x)) ⊆ D . This can be proved as follows. Suppose that a k a∈A⊆C, x∈I⊆U sub sub is one of the maximal sets which does not included in Fji , i = 1, 2, . . . , i(j), G sub j = 1, 2, . . . , t1 . By the construction of Q, G is a member of Q. This implies that there is a set A⊆C, x∈I⊆U (U −QA (x)) such that G sub ⊆ A⊆C, x∈I⊆U (U − QA (x)) ⊆ Dk . This contradicts to the fact that Fj∩ , j = 1, 2, . . . , t1 are maximal. sub Hence, Fji , i = 1, 2, . . . , i(j), j = 1, 2, . . . , t1 include all maximal sets of the form a⊆C, x∈U (U − Qa (x)) such that a∈A⊆C, x∈I⊆U (U − Qa (x)) ⊆ Dk . The same discussion is valid in Case (d), i.e., F = W. Therefore we consider the type of decision rule,

Case (c): if (ρ(y, a1 ), v¯1 ) ∈ Qa1 and · · · (ρ(y, as ), v¯s ) ∈ Qas then y ∈ Dk , Case (d): if (ρ(y, a1 ), v1 ) ∈ Qa1 and · · · (ρ(y, as ), vs ) ∈ Qas then y ∈ Dk ,



where v¯i = ρ(xi , ai ), xi ∈ U , ai ∈ C, i = 1, 2, . . . , s and vi ∈ Vai , i = 1, 2, . . . , s. We should enumerate all minimal conditions of the decision rules above. This can be done also by a decision matrix method with modiﬁcations described below. − In Case (c), let K + = {i | xi ∈ Q∩ = {i | xi ∈ Dk }. We deﬁne a ∗ (Dk )} and K Q Q decision matrix M = (Mij ) by MijQ = {(a, v) | (ρ(xj , a), v) ∈ Qa , (ρ(xi , a), v) ∈ Qa , (48) v = ρ(a, x), x ∈ U, a ∈ C}, i ∈ K + , j ∈ K − . ˜ Let ¬Q((a, v)) be a statement ‘(ρ(y, a), v) ∈ Q’. Then the all minimal conditions are obtained as conjunctive terms in the disjunctive normal form of the following logical function: Q ˜ BkQ = ¬Q(M ij ). i∈K + j∈K −

In Case (d), let K = {i | xi ∈ W∗∩ (Dk )} and K − = {i | xi ∈ Dk }. We deﬁne a decision matrix MW = (MijW ) by +

MijW = {(a, v) | (ρ(xj , a), v) ∈ Qa , (ρ(xi , a), v) ∈ Qa , v ∈ Va , a ∈ C}, (49) i ∈ K +, j ∈ K −. All minimal conditions in all possible decision rules with respect to Dk are obtained as conjunctive terms in the disjunctive normal form of the following logical function: W ˜ BkW = (50) ¬Q(M ij ). i∈K + j∈K −

6.5   Comparison and Correspondence between Definitions and Rules

As shown in the previous sections, the extracted decision rules differ depending on the underlying generalized rough set. The correspondences between underlying generalized rough sets and types of decision rules are arranged in Table 5.

Table 5. Correspondence between generalized rough sets and types of decision rules

CP-rough set {x ∈ X | P(x) ⊆ X}:
   if ρ(x, a1) = v1 and · · · and ρ(x, ap) = vp then x ∈ Dk;
   if (ρ(x, a1), v1) ∈ Pa1 and · · · and (ρ(x, ap), vp) ∈ Pap then x ∈ Dk (when Pa is transitive).

CN-rough set X ∩ (U − ⋃{Q(x) | x ∈ U − X}):
   if ρ(x, a1) = v1 and · · · and ρ(x, ap) = vp then x ∈ Dk;
   if (v1, ρ(x, a1)) ∈ Qa1 and · · · and (vp, ρ(x, ap)) ∈ Qap then x ∈ Dk (when Qa is transitive).

AU-rough set ⋃{Fi ∈ F | Fi ⊆ X}:
   if (ρ(x, a1), v1) ∈ Pa1 and · · · and (ρ(x, ap), vp) ∈ Pap then x ∈ Dk (when F = P of (35) or F = Z of (39)).

AI-rough set ⋃{⋂_{i∈I}(U − Fi) | ⋂_{i∈I}(U − Fi) ⊆ X, I ⊆ {1, . . . , p + 1}}:
   if (ρ(y, a1), v1) ∉ Qa1 and · · · and (ρ(y, ap), vp) ∉ Qap then y ∈ Dk (when F = Q of (36) or F = W of (40)).



When Pa, a ∈ C are reflexive and transitive, the types of decision rules are the same between CP- and AU-rough sets. However, the extracted decision rules are not always the same. More specifically, condition parts of extracted decision rules based on CP-rough sets are the same as those based on AU-rough sets when F = P of (35), but are usually stronger than those based on AU-rough sets when F = Z of (39). This is because we have MijR = M̃ij ⊆ M̃′ij. When Pa, a ∈ C are only transitive, the extracted R-rules based on CP-rough sets are the same as the extracted decision rules based on AU-rough sets with F = P. In this case, the extracted decision rules include id-rules. Namely, the decision rules extracted based on CP-rough sets are more numerous than those based on AU-rough sets. While the converse relations Qa^T, a ∈ C appear in extracted R-rules based on CN-rough sets when Q is transitive, the complementary relations (Va × Va) − Qa, a ∈ C appear in extracted decision rules based on AI-rough sets.

Table 6. Car evaluation

Car    fuel consumption (Fu)   selling price (Pr)   size (Si)          marketability (Ma)
Car1   medium                  medium               medium             poor
Car2   high                    medium               [medium,large]     poor
Car3   [medium,high]           low                  [medium,large]     poor
Car4   low                     [low,medium]         large              good
Car5   high                    [low,high]           [small,medium]     poor
Car6   [low,medium]            low                  [medium,large]     good

7   Simple Examples

Example 1. Let us consider a decision table with interval attribute values about car evaluation, Table 6. An interval attribute value in this table shows that we do not know the exact value but the possible range within which the exact value exists. Among attribute values, we have orderings, low ≤ medium ≤ high and small ≤ medium ≤ large. Let us consider the decision class of good marketability, i.e., D1 = {Car4, Car6}, and extract conditions of good marketability. A car with low fuel consumption, low selling price and large size is preferable. Therefore, we can deﬁne PF =≤st , PP =≤st and PS =≥st , QF =≥st , QP =≥st L R and QS =≤st , where for intervals E1 = [ρL1 , ρR 1 ] and E2 = [ρ2 , ρ2 ], we deﬁne st R L st L R E1 ≤ E2 ⇔ ρ1 ≤ ρ2 and E1 ≥ E2 ⇔ ρ1 ≥ ρ2 . We consider P of (35), Z of (39), Q of (36) and W of (40). P and Q are not reﬂexive but transitive. ∩ We obtain PC∗ (D1 ) = P∗∪ (D1 ) = Z∗∪ (D1 ) = Q∩ ∗ (D1 ) = W∗ (D1 ) = {Car4, ¯ Car6}, QC∗ (D1) = {Car4}, where C = {F u, P r, Si}. Applying the proposed methods, we obtain the following decision rules:



PC∗ (D1 ): if P r=[low,medium] then M a=good, if F u=[low,medium] then M a=good, if F uR ≤ low then M a=good, if SiL ≥ large then M a=good, if F uR ≤ medium and P rR ≤ low then M a=good, ¯ C∗ (D1 ): if F u=low then M a=good, Q if P r=[low,medium] then M a=good, if F uL ≤ low then M a=good, P∗∪ (D1 ): if F uR ≤ low then M a=good, if SiL ≥ large then M a=good, if F uR ≤ medium and P rR ≤ low then M a=good, L Q∩ ∗ (D1 ): if F u < medium then M a=good, where we use F u = [F uL , F uR ] = ρ(y, F u), P r = [P rL , P rR ] = ρ(y, P r), Si = [SiL , SiR ] = ρ(y, Si) and M a = ρ(y, M a) for convenience. Extracted decision rules based on Z∗∪ (D1 ) and W∗∩ (D1 ) are same as those based on P∗∪ (D1 ) and Q∩ ∗ (D1 ). We can observe the similarity between rules based on PC∗ (D1 ) and ¯ C∗ (D1 ) and Q∩ P∗∪ (D1 ) and between rules based on Q ∗ (D1 ), respectively. Table 7. Survivability of alpinists with respect to foods and tools

        foods (Fo)   tools (To)   survivability (Sur)
Alp1    {a}          {A, B}       low
Alp2    {a, b, c}    {A, B}       high
Alp3    {a, b}       {A}          low
Alp4    {b}          {A}          low
Alp5    {a, b}       {A, B}       high

Example 2. Consider an alpinist problem. There are three packages a, b and c of foods and two packages A and B of tools. When an alpinist climbs a mountain, he/she should carry foods and tools in order to be back safely. Assume the survivability Sur is determined by foods F o and tools T o packed in his/her knapsack and a set of data is given as in Table 7. Discarding the weight, we think that the more foods and tools, the higher the survivability. In this sense, we consider an inclusion relation ⊇ for both attributes F o and T o. Namely, we adopt ⊇ for the positively extensive relation P and ⊆ for the negatively extensive relation Q. Since ⊇ satisﬁes the reﬂexivity and transitivity and ⊆ is the converse of ⊇, all generalized rough sets described in this paper, i.e., CP-rough sets, CNrough sets, AU-rough sets and AI-rough sets coincide one another. Indeed, for ¯ C∗ (D1 ) = P ∪ (D1 ) = Z ∪ (D1 ) = a class D1 of Sur =high, we have PC∗ (D1 ) = Q ∗ ∗ ∩ ∩ Q∗ (D1 ) = W∗ (D1 ) = D1 = {Alp2, Alp3}, where C = {F o, T o, Sur} and P, Q, Z and W are deﬁned by (35), (36), (39) and (40). ¯ C∗ (D1 ), P ∪ (D1 ), Extracting decision rules based on rough sets PC∗ (D1 ), Q ∗ ∪ ∩ ∩ Z∗ (D1 ), Q∗ (D1 ) and W∗ (D1 ), we have the following decision rules;



PC∗(D1):  if Fo ⊇ {a, b, c} then Sur = high,
          if Fo ⊇ {a, b} and To ⊇ {A, B} then Sur = high,
Z∗∪(D1):  if Fo ⊇ {c} then Sur = high,
          if Fo ⊇ {b} and To ⊇ {B} then Sur = high,
Q∩∗(D1):  if Fo ⊄ {a, b} then Sur = high,
          if Fo ⊄ {a} and To ⊄ {A} then Sur = high,

where we use F o = ρ(y, F o), T o = ρ(y, T o) and Sur = ρ(y, Sur) for convenience. ¯ C∗ (D1 ), P∗∪ (D1 ) and W∗∩ (D1 ) are same as Extracted decision rules based on Q those based on P∗∪ (D1 ), P∗∪ (D1 ) and Q∩ ∗ (D1 ), respectively. Unlike the previous example, the extracted decision rules based on Q∩ ∗ (D1 ) ¯ C∗ (D1 ), i.e., those based on PC∗ (D1 ). are not very similar to those based on Q This is because an inclusion relation ⊆ is a partial order so that the negation of an inclusion relation is very diﬀerent from the converse of the inclusion relation. As shown in this example, even if positive region, certain region and lower approximations coincide each other, the extracted if-then rules are diﬀerent by underlying generalized rough sets.

8   Concluding Remarks

We have proposed four kinds of generalized rough sets based on two different interpretations of rough sets: rough sets as classification of objects into positive, negative and boundary regions, and rough sets as approximation by means of elementary sets in a given family. We have described relationships of the proposed rough sets to the previously proposed rough sets in general settings. Fundamental properties of the generalized rough sets have been investigated. Moreover, relations among the four generalized rough sets have also been discussed. Rule extraction based on the generalized rough sets has been proposed. We have shown the differences in the types of extracted decision rules depending on the underlying rough sets. Rule extraction methods based on modified decision matrices have been proposed. A few numerical examples have been given to illustrate the differences among extracted decision rules. One of the examples has demonstrated that extracted decision rules can differ depending on the underlying generalized rough sets even when the positive region, certain region and lower approximations coincide with one another. For rule extraction, we did not utilize possible regions, conceivable regions and upper approximations. It would be possible to extract decision rules corresponding to those sets. The proposed rule extraction methods are all based on decision matrices and require a lot of computational effort. Other extraction methods like LERS [4] should be investigated. In that case, we should give up extracting all decision rules and instead extract only useful decision rules or a minimal body of decision rules which covers all objects. In all methods proposed in this paper, we extracted all minimal conditions. This may increase the risk of giving wrong conclusions when we apply the obtained decision rules to infer conclusions for new objects. Risk and minimal description of conditions are in a trade-off relation. We should investigate an extraction method for decision rules with moderate risk and sufficiently weak conditions. Those topics and applications to real-world problems would be our future work.



References

1. Bonikowski, Z., Bryniarski, E., Wybraniec-Skardowska, U.: Extensions and intensions in the rough set theory. Information Sciences 107 (1998) 149-167
2. Dubois, D., Grzymala-Busse, J., Inuiguchi, M., Polkowski, L. (eds.): Fuzzy Rough Sets: Fuzzy and Rough and Fuzzy along Rough. Springer-Verlag, Berlin (to appear)
3. Greco, S., Matarazzo, B., Slowiński, R.: The use of rough sets and fuzzy sets in MCDM. In: Gal, T., Stewart, T.J., Hanne, T. (eds.): Multicriteria Decision Making: Advances in MCDM Models, Algorithms, Theory, and Applications. Kluwer Academic Publishers, Boston, MA (1999) 14-1-14-59
4. Grzymala-Busse, J.W.: LERS: A system for learning from examples based on rough sets. In: Slowiński, R. (ed.): Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers, Dordrecht (1992) 3-18
5. Inuiguchi, M., Tanino, T.: On rough sets under generalized equivalence relations. Bulletin of International Rough Set Society 5(1/2) (2001) 167-171
6. Inuiguchi, M., Tanino, T.: Generalized rough sets and rule extraction. In: Alpigini, J.J., Peters, J.F., Skowron, A., Zhong, N. (eds.): Rough Sets and Current Trends in Computing. Springer-Verlag, Berlin (2002) 105-112
7. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning About Data. Kluwer Academic Publishers, Boston, MA (1991)
8. Shan, N., Ziarko, W.: Data-based acquisition and incremental modification of classification rules. Computational Intelligence 11 (1995) 357-370
9. Skowron, A., Rauszer, C.M.: The discernibility matrix and functions in information systems. In: Slowiński, R. (ed.): Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers, Dordrecht (1992) 331-362
10. Slowiński, R., Vanderpooten, D.: A generalized definition of rough approximations based on similarity. IEEE Transactions on Knowledge and Data Engineering 12(2) (2000) 331-336
11. Yao, Y.Y.: Two views of the theory of rough sets in finite universes. International Journal of Approximate Reasoning 15 (1996) 291-317
12. Yao, Y.Y.: Relational interpretations of neighborhood operators and rough set approximation operators. Information Sciences 111 (1998) 239-259
13. Yao, Y.Y., Lin, T.Y.: Generalization of rough sets using modal logics. Intelligent Automation and Soft Computing 2(2) (1996) 103-120

Appendix: Proofs of Fundamental Properties (a) The proof of (vi) in Table 2 When Q is the converse of P , we have y ∈ P (x) if and only if x ∈ Q(y). Then we obtain Q∗ (U − X) = (U − X) ∪ {Q(x) | x ∈ U − X} = (U − X) ∪ {x ∈ U | P (x) ∩ (U − X) = ∅}. Hence we have ¯ ∗ (X) = U − Q∗ (U − X) = X ∩ {x ∈ U | P (x) ∩ (U − X) = ∅} Q = X ∩ {x ∈ U | P (x) ⊆ X} = {x ∈ X | P (x) ⊆ X} = P∗ (X). The other equation can be obtained similarly.

118

Masahiro Inuiguchi

(b) The proof of (vii) in Table 2 P ∗ (P∗ (X)) = P∗ (X) ∪ {P (x) | x ∈ P∗ (X)} = P∗ (X) ∪ {P (x) | P (x) ⊆ X, x ∈ X} ⊆ X, P∗ (P ∗ (X)) = {x ∈ P ∗ (X) | P (x) ⊆ P ∗ (X)} = P ∗ (X) ∩ x ∈ U P (x) ⊆ X ∪ {P (x) | x ∈ X} ⊇ X are valid. Thus we have X ⊇ P ∗ (P∗ (X)) and X ⊆ P∗ (P ∗ (X)). This implies also ¯ ∗ (Q ¯ ∗ (X)) and X ⊆ Q ¯ ∗ (Q ¯ ∗ (X)) because we obtain U − X ⊇ Q∗ (Q∗ (U − X⊇Q ∗ X)) and U − X ⊆ Q∗ (Q (U − X)). Hence, ﬁrst four relations are obvious. When P is transitive, x ∈ P (y) implies P (x) ⊆ P (y). Let z ∈ P∗ (X), i.e., z ∈ X and P (z) ⊆ X. Suppose z ∈ P∗ (P∗ (X)). Then we obtain P (z) ⊆ P∗ (X). Namely, there exists y ∈ P (z) such that y ∈ P∗ (X). Since P (z) ⊆ X, y ∈ X. Combining this with y ∈ P∗ (X), we have P (y) ⊆ X. From the transitivity of P , y ∈ P (z) ⊆ X implies P (y) ⊆ X. Contradiction. Therefore, we proved P∗ (X) ⊆ P∗ (P∗ (X)). The opposite inclusion is obvious. Hence P∗ (P∗ (X)) = P∗ (X). Now, let us prove P ∗ (P ∗ (X)) = P ∗ (X) when P is transitive. It suﬃces to prove P ∗ (P ∗ (X)) ⊆ P ∗ (X) since the opposite inclusion is obvious. Let z ∈ P ∗ (P ∗ (X)), i.e, (i) z ∈ P ∗ (X) or (ii) there exists y ∈ P ∗ (X) such that z ∈ P (y). We prove z ∈ P ∗ (X). Thus, in case of (i), it is straightforward. Consider case (ii). Since y ∈ P ∗ (X), (iia) y ∈ X or (iib) there exists w ∈ X such that y ∈ P (w). In case of (iia), we obtain z ∈ P ∗ (X) from z ∈ P (y). In case of (iib), from the transitivity of P , we have P (y) ⊆ P (w). Combining this fact with z ∈ P (y), z ∈ P (w). Since w ∈ X, we obtain z ∈ P ∗ (X). Therefore, in any case, we obtain z ∈ P ∗ (X). Hence, P ∗ (P ∗ (X)) = P ∗ (X). The same properties with respect to a relation Q can be proved similarly. When P is reﬂexive and transitive, we can prove {x ∈ U | P (x) ⊆ X} = {P(x) | P (x) ⊆ X}. This equation can be proved in the following way. Let y ∈ {P (x) | P (x) ⊆ X}. There exists z ∈ U such that y ∈ P (z) ⊆ X. Because of the transitivity, P (y) ⊆ P (z) ⊆ X. This implies that y ∈ {x ∈ U | P (x) ⊆ X}. Hence, {x ∈ U | P (x) ⊆ X} ⊇ {P (x) | P (x) ⊆ X}. The opposite inclusion is obvious from the reﬂexivity. ∗ From the reﬂexivity, we have P∗ (X) = {x ∈ X | P (x) ⊆ X}, P (X) = {P (x) | x ∈ X}. Using these equations, we obtain P ∗ (P∗ (X)) = {P (x) | x ∈ P∗ (x)} = {P (x) | P (x) ⊆ X} = P∗ (X), P∗ (P ∗ (X)) = {x ∈ X | P (x) ⊆ P ∗ (X)} = {P (x) | P (x) ⊆ P ∗ (X)} = {P (x) | x ∈ X} = P ∗ (X). The properties with respect to the relation Q can be proved in the same way. (c) The proof of (iii) in Table 3 The ﬁrst and fourth inclusion relations are obvious. We prove F∪∗ (X ∪ Y ) = F∪∗ (X) ∪ F∪∗ (Y ) only. The second equality can be proved by the duality (vi).

Generalizations of Rough Sets and Rule Extraction

119

F∪∗ (X ∪ Y ) ⊇ F∪∗ (X) ∪ F∪∗ (Y ) is straightforward. We prove the opposite inclusion. Let x ∈ F∪∗ (X ∪ Y ). Suppose x ∈ F∪∗ (X) and x ∈ F∪∗ (Y). Then there exist J, K ⊆ {1, 2, . . . , p} such that x ∈ j∈J Fj ⊇X and x ∈ j∈K Fj ⊇ Y . This fact implies that x ∈ j∈J Fj ∪ j∈K Fj = j∈J∪K Fj ⊇ X ∪ Y . This contracts with x ∈ F∪∗ (X ∪ Y ). Hence, we have x ∈ F∪∗ (X) ∪ F∪∗ (Y ). (d) The proof of (a) in Table 4 We only prove P∗ (X) ⊆ P∗∪ (X) = P ∗ (P∗ (X)) ⊆ X ⊆ P∪∗ (X) ⊆ P ∗ (X) when P is reﬂexive. The other assertion can be proved similarly. First inclusion and the inequality are obvious from the reﬂexivity. Relations with a set X is obtained from (i) in Table 3. From the reﬂexivity, we have ∗ P (x) ∈ P (x) X ⊆ P (x), Y ⊆ U P (X) = x∈X

x∈Y

x∈Y

Then the last inclusion is proved as follows: P (x) X ⊆ P (x), Y ⊆ U = P∪∗ (X). P ∗ (X) ⊇ x∈Y

x∈Y

(e) The proof of (b) in Table 4 We only prove the ﬁrst part. The second part can be obtained similarly. Let x ∈ P∗∪ (X). There exists y such that x ∈ P (y) ⊆ X. Because of the transitivity, P (x) ⊆ P (y). Therefore, P (x) ⊆ X. This fact together with x ∈ X implies x ∈ P∗ (X). Hence, P∗∪ (X) ⊆ P∗ (X). The relation P∗ (X) ⊆ X ⊆ P ∗ (X) has been given as (i) in Table 2. ∗ ∗ ∗ Finally, we prove P (X) ⊆ P∪ (X). Let z ∈ X ⊆ P∪ (X). Then, for all Wi such that X ⊆ w∈Wi P (w), there exists wi ∈ Wi such that z ∈ P (wi ). By transitivity, P (z) ⊆ P (wi ). Therefore, P (w) ⊆ P (w) X ⊆ P (w) P (z) ⊆ P (wi ) X ⊆ w∈Wi

w∈Wi

w∈Wi

Hence, we have ∗

P (X) =

z∈X

P (z) ⊆

w∈Wi

P (w) X ⊆ P (w) = P∪∗ (X). w∈Wi

Towards Scalable Algorithms for Discovering Rough Set Reducts Marzena Kryszkiewicz1 and Katarzyna Cichoń1,2 1

Institute of Computer Science, Warsaw University of Technology Nowowiejska 15/19, 00-665 Warsaw, Poland [email protected] 2 Institute of Electrical Apparatus, Technical University of Lodz Stefanowskiego 18/22, 90-924 Lodz, Poland [email protected]

Abstract. Rough set theory allows one to find reducts from a decision table, which are minimal sets of attributes preserving the required quality of classification. In this article, we propose a number of algorithms for discovering all generalized reducts (preserving generalized decisions), all possible reducts (preserving upper approximations) and certain reducts (preserving lower approximations). The new RAD and CoreRAD algorithms, we propose, discover exact reducts. They require, however, the determination of all maximal attribute sets that are not supersets of reducts. In the case, when their determination is infeasible, we propose GRA and CoreGRA algorithms, which search approximate reducts. These two algorithms are well suited to the discovery of supersets of reducts from very large decision tables.

1 Introduction Rough set theory has been conceived as a non-statistical tool for analysis of imperfect data [17]. Rough set methodology allows one to discover interesting data dependencies, decision rules, repetitive data patterns and to analyse conflict situations [24]. The reasoning in the rough set approach is based solely on available information. Objects are perceived as indiscernible if they have the same description in the system. This may be a reason for uncertainty. Two or more objects identically described in the system may belong to different classes (concepts). Such concepts, though vague, can be defined roughly by means of a pair of crisp sets: lower approximation and upper approximation. Lower approximation of a concept is a set of objects that surely belong to that concept, whereas upper approximation is a set of objects that possibly belong to that concept. Rough set theory allows one to find reducts from a decision table, which are minimal sets of attributes preserving the required quality of classification. For example, a reduct may preserve lower approximations of decision classes, or upper approximations of decision classes, or both. A number of methods for discovering reducts have already been proposed in the literature [2-8, 11, 15-17, 20-31]. The most popular J.F. Peters et al. (Eds.): Transactions on Rough Sets I, LNCS 3100, pp. 120–143, 2004. © Springer-Verlag Berlin Heidelberg 2004

Towards Scalable Algorithms for Discovering Rough Set Reducts

121

methods are based on discernibility matrices [20]. Other methods are based, e.g., on the theory of cones and fences [7, 19]. Unfortunately, the existing methods are not capable to discover all reducts from very large decision tables, although research on discovering rough set decision rules in large data sets started a few years ago (see e.g., [9-10, 14]). One may try to overcome this problem either by applying heuristics or data sampling or both, or by restricting search to looking for some reducts instead of all of them. Recently, we have proposed the GRA-like (GeneralizedReductsApriori) algorithms for discovering approximate generalized, possible and certain reducts from very large decision tables [13]. This article extends the results obtained in [13]. Here, we propose new algorithms - RAD and CoreRAD - for discovering exact generalized, possible and certain reducts. CoreRAD is a variation of RAD, which uses information on the so-called core in order to restrict the number of candidates for reducts and the number of scans of the decision table. The new algorithms require the determination of all maximal sets that are not supersets of reducts (MNSR). The knowledge of MNSR is sufficient to evaluate candidates for reducts correctly. The method of creating and pruning candidates is very similar to the one proposed in GRA [13]. In the case, when the calculation of MNSR is infeasible, we advocate to search approximate reducts. In the article, we first introduce the theory behind approximate reducts and then present in detail respective algorithms (GRA and CoreGRA). The layout of the article is as follows: In Section 2, we remind basic rough set notions and prove some of their properties that will be applied in the proposed algorithms. In Section 3, we propose the RAD algorithm for discovering generalized and possible reducts. A number of optimizations of the basic algorithm are discussed as well. The CoreRAD algorithm, which calculates both the core and the reducts, is offered in Section 4. In Section 5, we discuss briefly how to adapt RAD and CoreRAD for the discovery of certain reducts. The notions of approximate reducts are introduced in Section 6. We prove that approximate reducts are supersets of exact reducts. The properties of approximate generalized reducts are used in the construction of the GRA algorithm, which is presented in Section 7. In Section 8, we discuss the CoreGRA algorithm, which calculates both the approximate generalized reducts and the approximate core. In Section 9, we propose simple modifications of GRA and CoreGRA that enable the usage of these algorithms for discovering approximate certain reducts. Section 10 concludes the results indicating that the proposed solutions can be applied in the case of incomplete decision tables as well.

2 Basic Notions 2.1 Information Systems An information system (IS) is a pair S = (O, AT), where O is a non-empty finite set of objects and AT is a non-empty finite set of attributes, such that a: O → Va for any a∈AT, where Va is called domain of the attribute a.

122

Marzena Kryszkiewicz and Katarzyna Cichoń

An attribute-value pair (a,v), where a∈AT and v∈Va, is called an atomic descriptor. An atomic descriptor or its conjunction is called a descriptor [20]. A conjunction of atomic descriptors for attributes A⊆AT is called A-descriptor. Let S = (O, AT). Each subset of attributes A⊆AT determines a binary indiscernibility relation IND(A), IND(A) = {(x,y)∈O×O| ∀a∈A, a(x) = a(y)}. The relation IND(A), A⊆AT, is an equivalence relation and constitutes a partition of O. Objects indiscernible with regard to their description on attribute set A in the system will be denoted by IA(x); that is, IA(x) = {y∈O| (x,y)∈IND(A)}. Property 1 [9]. Let A, B ⊆ AT. a) If A ⊆ B, then IB(x) ⊆ IA(x). b) IA∪B(x) = IA(x) ∩ IB(x). c)

IA(x) = ∩a∈A Ia(x).

Let X⊆O and A⊆AT. AX is defined as a lower approximation of X iff AX = {x∈O| IA(x) ⊆ X} = {x ∈ X | IA(x) ⊆ X}. A X is defined as an upper approximation of X iff A X = {x∈O| IA(x) ∩ X ≠ ∅} =

∪{IA(x)| x ∈ X}. AX is the set of objects that

belong to X with certainty, while A X is the set of objects that possibly belong to X. 2.2 Decision Tables A decision table is an information system DT = (O, AT∪{d}), where d∉AT is a distinguished attribute called the decision, and the elements of AT are called conditions. The set of all objects whose decision value equals k, k∈Vd, will be denoted by Xk. Let us define the function ∂A: O → P(Vd), A⊆AT, as follows [18]:

∂A(x) = {d(y)| y∈IA(x)}. ∂A will be called A-generalized decision in DT. For A = AT, an A-generalized decision will be also called briefly a generalized decision. Table 1. DT = (O, AT∪{f}) extended by generalized decision ∂AT. x∈O 1 2 3 4 5 6 7 8 9

a 1 1 0 0 0 1 1 1 1

b 0 1 1 1 1 1 1 1 0

c 0 1

D 1 1 1 0 1 0 1 2 0 2 0 2 0 3 0 3

e 1 2 3 3 2 2 2 2 2

f 1 1 1 2 2 2 3 3 3

∂AT {1} {1} {1,2} {1,2} {2} {2,3} {2,3} {3} {3}

Table 2. DT’ = (O, AT∪{∂AT}) – sorted and reduced version of DT from Table 1. x∈O in DT’ (x∈O in DT) 1 (3,4) 2 (5) 3 (1) 4 (9) 5 (6,7) 6 (8) 7 (2)

a 0 0 1 1 1 1 1

b 1 1 0 0 1 1 1

c 1 1 0 0 0 0 1

d 0 2 1 3 2 3 1

e 3 2 1 2 2 2 2

∂AT {1,2} {2} {1} {3} {2,3} {3} {1}

Example 1. Table 1 describes a sample decision table DT. The conditional attributes are as follows: AT = {a, b, c, d, e}. The decision attribute is f. One may note that objects 3 and 4 are indiscernible with respect to the conditional attributes in AT.

Towards Scalable Algorithms for Discovering Rough Set Reducts

123

Hence, ∂AT for object 3 contains both the decision 1 for object 3, as well as the decision 2 for object 4. Analogously, ∂AT for object 4 contains both its own decision (2), as well as the decision of object 3 (1). Please see the last column in Table 1 for generalized decision ∂AT for all objects in DT. Let X1 be the class of objects determined by decision 1; that is, X1 = {1,2,3}. The lower and upper approximations of X1 are as follows: ATX1 = {1,2} and AT X1 = {1,2,3,4}. Property 2 shows that the approximations of decision classes can be expressed by means of an A-generalized decision. Property 2 [9-11]. Let Xi ⊆ O and A⊆AT. a) IA(x) ⊆ Xi iff ∂A(x) = {i}. b) IA(x) ∩ Xi ≠ ∅ iff i ∈ ∂A(x). c) AXi = {x∈O| ∂A(x) = {i}}. d) A Xi = {x∈O| i ∈ ∂A(x)}. e) ∂A(x) = ∂A(y) for any (x,y)∈IND(A). By Property 2e, objects having the same A-descriptor have also the same A-generalized decision value; that is, the A-descriptor uniquely determines the A-generalized decision value for all objects satisfying this descriptor. In the sequel, the A-generalized decision value determined by A-descriptor t, such that t is satisfied by at least one object in the system, will be denoted by ∂t. Table 2 shows the generalized decision values determined by atomic descriptors that occur in Table 1. Table 3. Generalized decision values ∂(a,v) determined by atomic descriptors (a,v), where a∈AT, v∈Va, supported by DT from Table 1. (a,v)

∂(a,v)

(a,0) (a,1) (b,0) (b,1) (c,0) (c,1) (d,0) (d,1) (d,2) (d,3) {1,2} {1,2,3} {1,3} {1,2,3} {1,2,3} {1,2} {1,2} {1} {2,3} {3}

(e,1) (e,2) (e,3) {1} {1,2,3} {1,2}

We note that the A- and B-generalized decision values for object x provide an upper bound on the A∪B-generalized decision value for x. Property 3 [13]. Let A,B⊆AT, x∈DT. ∂A∪B(x) ⊆ ∂A(x) ∩ ∂B(x). Proof: ∂A∪B(x) = {d(y)| y∈IA∪B(x)} = /* by Property 1b */ = {d(y)| y∈(IA(x) ∩ IB(x))} ! ⊆ {d(y)| y∈IA(x)} ∩ {d(y)| y∈IB(x)} = ∂A(x) ∩ ∂B(x). We conclude further that the elementary a-generalized decision values for x, a∈A, can be used for calculating an upper bound on the A-generalized decision value for x. Corollary 1. Let A⊆AT and x∈DT. ∂A(x) ⊆ ∩a∈A ∂a(x) = ∩a∈A ∂(a, a(x)). Example 2. The {ce}-generalized decision value calculated from DT in Table 1 for object 5 (∂{ce}(5) = {1,2}) equals its upper bound ∂c(5) ∩ ∂e(5) = ∂(c,1) ∩ ∂(e,2) = {1,2} ∩ {1,2,3} = {1,2}. On the other hand, the {ce}-generalized decision value for object 6 (∂{ce}(6) = {2,3}) is a proper subset of its upper bound ∂c(6) ∩ ∂e(6) = ∂(c,0) ∩ ∂(e,2) = {1,2,3} ∩ {1,2,3} = {1,2,3}. !

124

Marzena Kryszkiewicz and Katarzyna Cichoń

Corollary 2. Let A⊆B⊆AT, x∈DT. ∂B(x) ⊆ ∂A(x). Proof: By Property 3, ∂B(x) ⊆ ∂A(x) ∩ ∂B\A(x). Hence, ∂B(x) ⊆ ∂A(x).

!

Finally, we observe that A- and B-generalized decision values for object x, where A⊆B⊆AT, are identical when their cardinalities are identical. Proposition 1. Let A⊆B⊆AT and x∈DT. ∂A(x) = ∂B(x) iff |∂A(x)| = |∂B(x)|. Proof: (⇒) Straightforward. (⇐) Let |∂A(x)| = |∂B(x)| (*). Since, A⊆B, then by Corollary 2, ∂A(x) ⊇ ∂B(x). Taking ! into account (*), we conclude ∂A(x) = ∂B(x). 2.3 Reducts for Decision Tables Reducts for decision tables are minimal sets of conditional attributes that preserve the required properties of classification. In what follows, we provide definitions of reducts preserving lower and upper approximations of decision classes and objects’ generalized decisions, respectively. Let ∅≠A⊆AT. A is a certain reduct (c-reduct) of DT iff A is a minimal attribute set such that (c) ∀x∈O, x∈ATXd(x) ⇒ IA(x) ⊆ Xd(x) A certain reduct is a set of attributes that allows us to distinguish each object x belonging to the lower approximation of its decision class in DT from the objects that do not belong to this approximation. A is a possible reduct (p-reduct) of DT iff A is a minimal attribute set such that ∀x∈O, IA(x) ⊆ AT Xd(x)

(p)

A possible reduct is a set of attributes that allows us to distinguish each object x in DT from objects that do not belong to the upper approximation of its decision class. A is a generalized decision reduct (g-reduct) of DT iff A is a minimal set such that ∀x∈O, ∂A(x) = ∂AT(x)

(g)

A generalized decision reduct is a set of attributes that preserves the generalized decision value for each object x in DT. In the sequel, a superset of a t-reduct, where t ∈ {c, p, g}, will be called a t-super-reduct. Corollary 3. AT is a superset of all c-reducts, p-reducts, and g-reducts for any DT. Proposition 2. Let A ⊆ AT. a) If A satisfies property (c), then all of its supersets satisfy property (c). b) If A does not satisfy property (c), then all of its subsets do not satisfy (c). c) If A satisfies property (p), then all of its supersets satisfy property (p). d) If A does not satisfy property (p), then all of its subsets do not satisfy (p). e) If A satisfies property (g), then all of its supersets satisfy property (g). f) If A does not satisfy property (g), then all of its subsets do not satisfy (g). Proof: Let A⊆B⊆AT and x∈O.

Towards Scalable Algorithms for Discovering Rough Set Reducts

125

Ad a) Let A satisfy property (c) and x∈ATXd(x). We are to prove that IB(x) ⊆ Xd(x). Since A satisfies property (c), then IA(x) ⊆ Xd(x) (*). By Property 1a, IB(x) ⊆ IA(x) (**). By (*) and (**), IB(x) ⊆ Xd(x). Ad b) Analogous to a). Ad c) Let A satisfy property (g). We are to prove that ∂B(x) = ∂AT(x). Since A satisfies property (g), then ∂A(x) = ∂AT(x) (*). By Corollary 2, ∂AT(x) ⊆ ∂B(x) ⊆ ∂A(x) (**). By (*) and (**), ∂B(x) = ∂AT(x). Ad b, d, f) Follow immediately from Proposition 2a, b, c, respectively. ! Corollary 4. a) c-super-reducts are all and the only attribute sets that satisfy property (c). b) p-super-reducts are all and the only attribute sets that satisfy property (p). c) g-super-reducts are all and the only attribute sets that satisfy property (g). Proof: By definition of reducts and Proposition 2.

!

Interestingly, not only g-reducts, but also p-reducts and c-reducts, can be determined by examining generalized decisions. Theorem 1 [11]. The set of all generalized decision reducts of DT equals the set of all possible reducts of DT. Lemma 1 [13]. A⊆AT is a c-reduct of DT iff A is a minimal set such that ∀x∈O, ∂AT(x) = {d(x)} ⇒ ∂A(x) = {d(x)}. !

Proof: By Property 2a,c.

Corollary 5 [13]. A⊆AT is a c-reduct of DT iff A is a minimal set such that∀x∈O, ∂AT(x) = {d(x)} ⇒ ∂A(x) = ∂AT(x). 2.4 Core The notion of a core is meant to be the greatest set of attributes without which an attribute set does not satisfy the required classification property (i.e. is not a superreduct). The generic notion of a t-core, t ∈ {c, p, g}, corresponding to c-reducts, preducts and g-reducts, respectively, is defined as follows: t-core = {a∈AT| AT\{a} is not a t-super-reduct}. Clearly, the p-core and g-core are the same. Proposition 3. Let R be all reducts of the same type t, where t ∈ {c, p, g}. t-core = ∩R.

Proof: Let us consider the case when R is the set of all c-reducts. Let b ∈ c-core. Hence b is an attribute in AT such that AT\{b} is not a superset of c-reduct. By Corollary 4a and Proposition 2b, no attribute set without b satisfies property (c). Hence, no

attribute set without b is a c-reduct. Thus, all c-reducts contain b; that is, ∩R ⊇ {b}. Generalizing this observation, ∩R ⊇ c-core.

126

Marzena Kryszkiewicz and Katarzyna Cichoń

Now, we will prove by contradiction that

∩R

\ c-core is an empty set. Let

d ∈ ∩R and d ∉ c-core. Since d ∉ c-core, then, by definition of a core, AT\{d} is a superset of some c-reduct, say B. Since B is a subset of AT\{d}, then B does not contain d either. This means that among c-reducts, there is an attribute set (B), which

does not contain d. Therefore, d ∉ ∩R, which contradicts the assumption. The cases when R is the set of all p-reducts or g-reducts can be proved analogously from Corollary 4b,c and Proposition 2d,f, respectively. !

3 Discovering Generalized Reducts 3.1 Main Algorithm Notation for RAD • Rk – candidate k attribute sets (potential g-reducts); • Ak – k attribute sets that are not g-super-reducts; • MNSR – all maximal conditional attribute sets that are not g-super-reducts; • MNSRk – k attribute sets in MNSR; • DT’ – reduced DT; • x.a – the value of an attribute a for object x; • x.∂AT – the generalized decision value for object x. Algorithm. RAD; DT’ = GenDecRepresentation-of-DT(DT); MNSR = MaximalNonSuperReducts(DT’); /* search g-reducts - note: g-reducts are all attribute sets that are not subsets of any set in MNSR */ if |MNSR|AT|-1| = |AT| then return AT; // optional optimizing step 1 R1 = {{a}| a∈AT}; A1 = {}; // initialize 1 attribute candidates for g-reducts forall B ∈ MNSR do move subsets of B from R1 to A1; // subsets of non-super-reducts are not reducts for (k = 1; Ak ≠ {}; k++) do begin if |MNSR| = 1 then return ∪k Rk; // optional optimizing step 2 MNSR = MNSR \ MNSRk; // MNSRk is not useful any more – optional optimizing step 3 /* create k+1 attribute g-reducts Rk+1 and non-g-super-reducts Ak+1 from Ak and MNSR */ RADGen(Rk+1, Ak+1, Ak, MNSR); endfor; return ∪k Rk;

The RAD (ReductsAprioriDiscovery) algorithm we propose starts by determining the reduced decision table DT’ that stores only conditional attributes AT and the AT-generalized decision for each object in DT instead of the original decision (see Section 3.2 for the description of the GenDecRepresentation-of-DT function). Each class of objects indiscernible w.r.t. AT ∪ {∂AT} in DT (see Table 1) is represented by one object in DT’ (see Table 2). Next, DT’ is examined in order to find all maximal attribute sets MNSR that are not g-super-reducts (see Section 3.3 for the description of the MaximalNonSuperReducts function). The information on MNSR is sufficient to derive all g-reducts; namely, g-reducts are these sets each of which has no superset in MNSR (i.e., is a g-super-reduct), but all proper subsets of which have supersets in MNSR (i.e., are not g-reducts).

Towards Scalable Algorithms for Discovering Rough Set Reducts

127

Now, RAD creates initial candidates for g-reducts that are singleton sets and are stored in R1. The candidates in R1 that are subsets of MNSR are moved to 1 attribute non-g-super-reducts A1. The main loop starts. In each k-th iteration, k ≥ 1, k+1 attribute candidates Rk+1 are created from k attribute sets in Ak, which are not gsuper-reducts (see Section 3.4 for the description of the RADGen procedure). The information on non-g-super-reducts MNSR is used to prune candidates in Rk+1. Namely, each candidate in Rk+1 that has a superset in MNSR is not a g-superreduct. Therefore it is moved from Rk+1 to Ak+1. The algorithm stops when Ak = {}. Optional optimizing steps in RAD are discussed in Section 3.5. 3.2 Determining Generalized Decision Representation of Decision Table The GenDecRepresentation-of-DT function starts with sorting the given decision table DT w.r.t. the set of all conditional attributes and (optionally) the decision attribute. The sorting enables fast determination of the generalized decision values for all classes of objects indiscernible w.r.t. AT. Each such class will be represented by one object in the new decision table DT’ = (AT, {∂AT}), where the decision attribute is replaced by the generalized decision. function GenDecRepresentation-of-DT(decision table DT); DT’ = {}; sort DT with respect to AT and d; // apply any ordering of attributes in AT, e.g. lexicographical x = first object in DT; // or null if DT is empty while x is not null do begin forall a∈AT do x’.a = x.a; x’.∂AT = {d(y)| y∈IAT(x)}; add x’ to DT’; x = the first object located just after IAT(x) in DT; endwhile; return DT’;

3.3 Calculating Maximal Non-super-reducts The purpose of the MaximalNonSuperReducts function is to determine all maximal conditional attribute sets that are not g-super-reducts. To this end, each object in the reduced decision table DT’ is compared with all other objects from different generalized decision classes. The result of the comparison of two objects, say x and y, belonging to different classes is the set of all attributes on which x and y are indiscernible. Clearly, such a resulting set is not a g-super-reduct, since it does not discern at least one pair of objects belonging to different generalized decision classes. The comparison results, which are non-g-super-reducts, are stored in the NSR variable. After the comparison of objects is accomplished, NSR contains a superset of all maximal non-g-super-reducts. The function returns MAX(NSR), which can be calculated as the final step or on the fly. For DT’ from Table 2, MaximalNonSuperReducts will find NSR = {abc, b, bc, e, bde, be, bce, ac, ace, ae, abce, abe}, and eventually will return MAX(NSR) = {abce, bde}.

128

Marzena Kryszkiewicz and Katarzyna Cichoń

function MaximalNonSuperReducts(reduced decision table DT’); NSR = {}; forall objects x in DT’ do forall objects y following x in DT’ do if x.∂AT ≠ y.∂AT then /* objects x and y should be distinguishable as they belong to different generalized decision classes; */ /* the set {a∈AT| x.a = y.a} is not a g-super-reduct since it does not distinguish between x and y */ insert in {a∈AT| x.a = y.a}, if non-empty, to NSR; return MAX(NSR); // note: MAX(NSR) contains all maximal non-g-super-reducts

3.4 Generating Candidates for Reducts The RADGen procedure has 4 arguments. Two of them are input ones: k attribute non-g-super-reducts Ak and the maximal non-g-super-reducts MNSR. The two remaining candidates Rk+1 and Ak+1 are output ones. After the completion of the function, Rk+1 contains k+1 attribute g-reducts and Ak+1 contains k+1 attribute nong-super-reducts. During the first phase of the procedure, new k+1 attribute candidates are created by merging k attribute non-g-super-reducts in Ak that differ only in their final attributes. The characteristic feature of such a method of creating candidates is that no candidate that is likely to be a solution (here: g-reduct) is missed and that no candidate is generated twice (please, see the detailed description of the Apriori algorithm [1] for justification). In the second phase, it is checked for each newly obtained k+1 attribute candidate whether all its proper k attribute subsets are contained in nong-super-reducts Ak. If yes, then a candidate remains in Rk+1; otherwise it is pruned as a proper superset of some g-super-reduct. Finally, all candidates in Rk+1 that are subsets of maximal non-g-super-reducts MNSR are found non-g-super-reducts too, and thus are moved to Ak+1. procedure RADGen(var Rk+1, var Ak+1, in Ak, in MNSR); forall B, C ∈Ak do /* Merging */ if B[1] = C[1] ∧ ... ∧ B[k-1] = C[k-1] ∧ B[k] < C[k] then begin A = B[1]•B[2]•...•B[k]•C[k]; add A to Rk+1; endif; forall A∈Rk+1 do /* Pruning */ forall k attribute sets B ⊂ A do if B ∉ Ak then delete A from Rk+1; // A is a proper superset of g-super-reduct B forall B∈MNSR do move subsets of B from Rk+1 to Ak+1; /* Removing subsets of non-g-super-reducts */ return;

3.5 Optimizing Steps in RAD In the main algorithm, we offer an optimization that may speed up checking which candidates are not g-reducts (optimizing step 3) and two optimizations for reducing the number of useless iterations (optimizing steps 1 and 2). In step 3, k attribute sets are deleted from MNSR since they are useless for identifying non-g-superset-reducts among l attribute candidates, where l > k.

Towards Scalable Algorithms for Discovering Rough Set Reducts

129

Optimizing step 1 is based on the following observation: the condition |MNSR|AT|-1| = |AT| implies that all AT\{a} sets are not g-super-reducts. Hence, AT is the only greduct for DT and thus the algorithm can be stopped. Optimizing step 2 can be applied when |MNSR| = 1. This condition implies that all sets in Ak, which are not g-super-reducts, have exactly one - the same superset, say B, in maximal non-g-super-reducts MNSR. If one continues the creation of k+1 attribute candidates Rk+1 by merging sets in Ak, then the new k+1 attribute candidates would be still subsets of B. Hence, they would be pruned by the RADGen procedure from Rk+1 to Ak+1. As a result, one would obtain Rk+1 = {} and |MNSR| = 1. Such a scenario would continue when creating longer candidates until Al = {B}, l > k. Then, RADGen will produce empty Rl+1 and empty Al+1; that is, the condition, which stops the RAD algorithm. In conclusion, the condition |MNSR| = 1 implies that no more g-reducts will be discovered, so the algorithm can be stopped. 3.6 Illustration of RAD Let us illustrate now the discovery of g-reducts of DT from Table 1. We assume that maximal non-g-super-reducts MNSR are already found and are equal to {{abce}, {bde}}. Table 4 shows how candidates for g-reducts change in each iteration. Table 4. Rk and Ak after verification w.r.t. MNSR in subsequent iterations of New. k 1 2 3 4

Ak (each X in Ak has a superset in MNSR) {a}, {b}, {c}, {d}, {e} {ab}, {ac}, {ae}, {bc}, {bd}, {be}, {ce}, {de} {abc}, {abe}, {ace}, {bce}, {bde} {abce}

Rk (each X in Rk has no superset in MNSR) {ad}, {cd}

4 Core-Oriented Discovery of Generalized Reducts 4.1 Main Algorithm In this section, we offer the CoreRAD procedure, which finds not only g-reducts, but also their core. The layout of CoreRAD reminds that of RAD. CoreRAD, however, differs from RAD in that it first checks if the set of all maximal non-g-super-reducts MNSR is empty. If yes, then each single conditional attribute is a g-reduct, so

CoreRAD returns {{a}| a∈AT} as the set of all g-reducts and ∩a∈AT {a} = ∅ as the g-core (by Proposition 3). Otherwise, CoreRAD determines the g-core by definition from all maximal |AT|-1 non-g-super-reducts in MNSR. All sets in MNSR that are not supersets of the g-core are deleted, since the only candidates considered in CoreRAD will be the g-core and its supersets. If the reduced MNSR is an empty set, then the g-core does not have subsets in MNSR and thus it is the only g-reduct. Otherwise, the g-core is not a g-reduct, and the new candidates R|core|+1 are created by merging the g-core with the remaining attributes in AT. Clearly, the new candidates

130

Marzena Kryszkiewicz and Katarzyna Cichoń

which have supersets in maximal non-g-super-reducts MNSR are not g-reducts either, and hence are moved from R|core|+1 to A|core|+1. From now on, CoreRAD is performed in the same way as RAD. Algorithm. CoreRAD; DT’ = GenDecRepresentation-of-DT(DT); MNSR = MaximalNonSuperReducts(DT’); if MNSR = {} then return (∅,{{a}| a∈AT}); // each conditional attribute is a g-reduct core = ∅; forall A∈MNSR|AT|-1 do begin {a} = AT\A; core = core ∪ {a} endfor; if |MNSR|AT|-1| = |AT| then return (AT, AT); // or if core = AT then - optional optimizing step 1 MNSR = {B ∈ MNSR| B ⊇ core}; // g-reducts are supersets of the g-core if MNSR = {} then return (core, {core}); // g-core is a g-reduct as there is no its superset in MNSR MNSR = MNSR \ MNSR|core|; // or equivalently MNSR = MNSR \ {core}; /* initialize candidate for reducts as g-core’s supersets */ startLevel = |core| + 1; RstartLevel = {}; AstartLevel = {}; forall a∈AT \ core do begin A = core ∪ {a}; RstartLevel = RstartLevel ∪ {A} endfor; forall B ∈ MNSR do move subsets of B from RstartLevel to AstartLevel; for (k = startLevel; Ak ≠ {}; k++) do begin if |MNSR| = 1 then return (core, ∪k Rk); // optional optimizing step 2 MNSR = MNSR \ MNSRk; // MNSRk is not useful any more – optional optimizing step 3 /* create k+1 attribute g-reducts Rk+1 and non-g-super-reducts Ak+1 from Ak and MNSR */ GRAGen(Rk+1, Ak+1, Ak, MNSR); endfor; return (core, ∪k Rk);

4.2 Illustration of CoreRAD We will illustrate now the core-oriented discovery of g-reducts of DT from Table 1. We assume that MNSR has already been calculated and equals {{abce}, {bde}}. Hence, core = AT / {abce} = {d}. Now, we leave only the supersets of the core in MNSR; thus MNSR becomes equal to {{bde}}. Table 5 shows how candidates for g-reducts change in each iteration (here: only 1 iteration was sufficient). Table 5. Rk and Ak after verification w.r.t. MNSR in subsequent iterations of CoreRAD. K 2

Ak (each X in Ak has a superset in MNSR) {bd}, {de}

Rk (each X in Rk has no superset in MNSR) {ad}, {cd}

5 Discovering Certain Reducts RAD and CoreRAD can easily be adapted for the discovery of certain reducts. It suffices to modify line 4 of the MaximalNonSuperReducts function as follows: if (x.∂AT ≠ y.∂AT) and (| x.∂AT | = 1 or | y.∂AT | = 1) then

This modification guarantees that all objects from lower approximations of all decision classes, which have singleton generalized decisions, will be compared with all objects not belonging to the lower approximations of their decision classes.

Towards Scalable Algorithms for Discovering Rough Set Reducts

131

6 Approximate Attribute Reduction 6.1 Approximate Reducts for Decision Table The discovery of reducts may be very time consuming. Therefore, one may resign from calculating strict reducts and search more efficiently for approximate reducts, which however, should be supersets of exact reducts and subsets of AT. In this section, we introduce the notion of such approximate reducts based on the observation that for any object x in O: ∩a∈A ∂a(x) ⊇ ∂A(x) (by Corollary 1). Let ∅≠A⊆AT. AT is defined an approximate generalized decision reduct (ag-

reduct) of DT iff ∃x∈O, ∩a∈AT ∂a(x) ⊃ ∂AT(x). Otherwise, A is an approximate generalized decision reduct (g-reduct) of DT iff A is a minimal set such that ∀x∈O, ∩a∈A ∂a(x) = ∂AT(x)

(ag)

Corollary 5 specifies properties of certain decision reducts in terms of generalized decisions. By analogy to this corollary, we define an approximate certain decision reduct as follows: AT is defined an approximate certain decision reduct (ac-reduct) of DT iff ∃x∈O,

∂AT(x) = {d(x)} ⇒ ∩a∈AT ∂a(x) ⊃ ∂AT(x). Otherwise, A is defined an approximate certain reduct (ac-reduct) of DT iff A is a minimal attribute set such that ∀x∈O, ∂AT(x) = {d(x)} ⇒ ∩a∈A ∂a(x) = ∂AT(x)

(ac)

In the sequel, a superset of a t-reduct, t ∈ {ac, ag}, will be called a t-super-reduct. Corollary 6. AT is a superset of all ac-reducts and ag-reducts for any DT. Proposition 4. Let x∈O and A ⊆ AT. If ∩a∈A ∂a(x) = ∂AT(x), then:

∩a∈A ∂a(x) = ∂A(x) = ∂AT(x). b) ∀B ⊆ AT, B⊃A ⇒ ∩a∈B ∂a(x) = ∂B(x) = ∂AT(x). Proof: Let ∩a∈A ∂a(x) = ∂AT(x) (*). Ad a) By Corollaries 1-2, ∩a∈A ∂a(x) ⊇ ∂A(x) ⊇ ∂AT(x). Taking into account (*), ∩a∈A ∂a(x) = ∂A(x) = ∂AT(x). a)

Ad b) Let B ⊆ AT, B⊃A. By Corollary 2, ∂A(x) ⊇ ∂B(x) ⊇ ∂AT(x). Taking into account

∩a∈A ∂a(x) = ∂A(x) = ∂B(x) = ∂AT(x) (**). Clearly, ∩a∈A ∂a(x) ⊇ ∩a∈B ∂a(x) ⊇ ∩a∈AT ∂a(x). Taking into account (**), ∂B(x) = ∂AT(x) = ∩a∈A ∂a(x) ⊇ ∩a∈B ∂a(x) ⊇ ∩a∈AT ∂a(x) ⊇ ∂AT(x). Hence, ∩a∈B ∂a(x) = ∂B(x) = ∂AT(x). !

Proposition 4a,

Corollary 7. a) An ag-reduct is a g-super-reduct. b) An ag-reduct is a p-super-reduct. c) An ac-reduct is a c-super-reduct.

132

Marzena Kryszkiewicz and Katarzyna Cichoń

Proof: Ad a) Let A be an ag-reduct. If ∃x∈O, ∩a∈AT ∂a(x) ⊃ ∂AT(x), then A = AT, which by Corollary 3 is a g-super-reduct. Otherwise, by definition of an ag-reduct and Proposition 4a, ∀x∈O, ∩a∈A ∂a(x) = ∂A(x) = ∂AT(x). Thus A satisfies property (g). Hence, by Corollary 4c, A is a g-super-reduct. Ad b) Follows from Theorem 1 and Corollary 7a. Ad c) Analogous, to the proof of Corollary 7a. Follows from the definition of an ac-reduct, Corollary 3, Corollary 5, Corollary 4a and Proposition 4a. Proposition 5. Let A ⊆ AT. a) If A satisfies property (ag), then all of its supersets satisfy property (ag). b) If A does not satisfy property (ag), then all of its subsets do not satisfy (ag). c) If A satisfies property (ac), then all of its supersets satisfy property (ac). d) If A does not satisfy property (ac), then all of its subsets do not satisfy (ac). Proof: Ad a,c) Follow from Proposition 4. Ad b, d) Follow immediately from Proposition 5a, c, respectively.

!

Corollary 8. a) ag-super-reducts are all and the only attribute sets that satisfy property (ag). b) ac-super-reducts are all and the only attribute sets that satisfy property (ac). Proof: By definition of respective approximate reducts and Proposition 5.

!

6.2 Approximate Core An approximate core will be defined in usual way; that is, t-core = {a∈AT| AT\{a} is not a t-super-reduct}, where t ∈ {ac, ag}. Proposition 6. Let R be all approximate reducts of the same type t, t ∈ {ac, ag}.

t-core = ∩R. Proof: Follows from Corollary 8 and Proposition 5, and is analogous to the proof of Proposition 3. !

7 Discovering Approximate Generalized Reducts 7.1 Main Algorithm The GRA (GeneralizedReductsApriori) algorithm, we have recently introduced in [13], finds all ag-reducts from the decision table DT. Unlike in RAD, GRA, does not need to store all maximal non-g-super-reducts MNSR. On the other hand, GRA requires the candidates for reducts to be evaluated against the decision table. The validation of the candidate solution against the decision table DT in our algorithm consists in checking if the candidate satisfies property (ag); that is, if the intersection of the elementary generalized decisions of the attributes in the candidate set determines the same generalized decision value as the set of all conditional attributes AT does for each object in DT. We will use the following properties in the process of searching reducts in order to prune the search space efficiently:

Towards Scalable Algorithms for Discovering Rough Set Reducts

133

• Proper supersets of ag-reducts are not ag-reducts, and hence such sets shall not be evaluated against the decision table. • Subsets of attribute sets that are not ag-super-reducts are not ag-reducts, and thus such sets shall not be evaluated against the decision table. • An attribute set whose all proper subsets are not ag-super-reducts may or may not be an ag-reduct, and hence should be evaluated against the decision table. Since our algorithm is to work with very large decision tables, we propose to restrict the number of decision table objects against which a candidate should be evaluated. Our proposal is based on the following observation: • If an attribute set A satisfies property (ag) for the first n objects in DT (or reduced DT’) and does not satisfy it for object n+1, then A is certainly not an ag-reduct and thus evaluating it against the remaining objects in DT (DT’) is useless. • If an attribute set A satisfies property (ag) for the first n objects in DT (or DT’), then property (ag) will be satisfied for these objects for all supersets of A. Hence, the evaluation of the first n objects should be skipped for a candidate that is a proper superset of A. The GRA algorithm starts with building the reduced version DT’ of decision table DT (see Section 3.2 for the description of the GenDecRepresentation-of-DT function). DT’ stores only the AT-generalized decisions instead of the original decisions. Next, the a-generalized decision value for each atomic descriptor (a,v) occurring in DT (or in DT’) is calculated as the set of the decisions (or the union of the ATgeneralized decisions) of the objects supporting (a,v) in DT (or in DT’). Each pair: (atomic descriptor, its generalized decision) is stored in Γ. Now GRA creates initial candidates for ag-reducts. The initial candidates are singleton sets and are stored in R1. The set of 1 attribute non-ag-super-reducts A1, as well as known maximal nonag-super-reducts NSR, are initialized to an empty set. The main loop starts. In each k-th iteration, k ≥ 1, the k attribute candidates Rk are evaluated during one pass over DT’ (see Section 7.2 for the description of the EvaluateCandidates procedure). As a side effect of evaluating of Rk, all k attribute non-ag-super-reducts Ak are found and known maximal non-ag-super-reducts NSR are updated. The case when NSR|AT| = AT indicates that AT does not satisfy property (ag) for some object. Hence, by definition AT is the only ag-reduct and the algorithms stops. Otherwise, k+1 attribute candidates Rk+1 are created from k attribute sets in Ak, which turned out not to be agsuper-reducts (see Section 7.4 for the description of the GRAGen procedure). The information on non-ag-super-reducts NSR is used to prune the candidates in Rk+1. Namely, each candidate in Rk+1 that has a superset in NSR is known a priori not to be an ag-reduct. Therefore it is moved from Rk+1 to Ak+1. The algorithm stops when Rk = Ak = {}. Optimizations steps 1-2 in GRA are analogous to steps 1-2 in RAD, which were discussed in Section 3.5.

134

Marzena Kryszkiewicz and Katarzyna Cichoń

Modified or additional notation for GRA • Rk – candidate k attribute sets (potential ag-reducts); • Ak – k attribute sets that are not ag-super-reducts; • A.id – the identifier of the object against which attribute set A should be evaluated; • NSR – quasi maximal attribute sets found not to be ag-super-reducts; • NSRk – k attribute sets in NSR; • x.identifier – the identifier of object x; • Γ - the set containing generalized decision values determined by atomic descriptors supported by objects in DT (DT’); that is: Γ = ∪a∈AT, v∈Va {{(a,v), ∂(a,v))}. Algorithm. GRA; DT’ = GenDecRepresentation-of-DT(DT); /* calculate a-generalized decision value for each atomic descriptor (a,v) supported by DT (or DT’) */ for each conditional attribute a∈AT do for each domain value v∈Va do begin compute ∂(a,v); store ((a,v), ∂(a,v)) in Γ endfor; /* initialize 1 attribute candidates */ R1 = {{a}| a∈AT}; A1 = {}; NSR = {}; // conditional attributes are candidates for ag-reducts for each A in R1 do A.id = 1; // the evaluation of candidate A should start from object 1 in DT’ /* search reducts */ for (k = 1; Ak ≠ {} ∨ Rk ≠ {}; k++) do begin if Rk ≠ {} then begin /* find and move non-ag-reducts from Rk to Ak and determine maximal non-ag-super-reducts NSR */ EvaluateCandidates(Rk, Ak, Γ, NSR); if |NSR|AT|| = 1 then return AT; // or equivalently, if NSR|AT| = AT then if |NSR|AT|-1| = |AT| then return AT; // optional optimizing step 1 elseif |NSR| = 1 then return ∪k Rk; // optional optimizing step 2 endif; /* create k+1 attribute candidates Rk+1 and non-ag-super-reducts Ak+1 from Ak and NSR */ GRAGen(Rk+1, Ak+1, Ak, NSR); endfor; return ∪k Rk;

A characteristic feature of our algorithm, which is shared by all Apriori-like algorithms (see [1] for the Apriori algorithm), is that the evaluation of candidates requires no more than n scans of the data set (decision table), where n is the length of a longest candidate (here: n ≤ |AT|). GRA, however, differs from Apriori in several ways. First of all, our candidates are sets of attributes instead of descriptors. Next, we evaluate candidates whether they satisfy property (ag), while the evaluation in Apriori consists in calculating the number of objects satisfying candidate descriptors. Additionally, our algorithm uses dynamically obtained information on non-ag-super-reducts to restrict the search space as quickly as possible. Another distinct optimizing feature of our algorithm is that the majority of candidates is evaluated against a fraction of the decision table instead of the entire decision table (see Section 7.2). Namely, having found that a candidate A does not satisfy the required property (ag) for some object x, the next objects are not considered for evaluating this candidate at all. In addition, the evaluation of candidates that are proper supersets of the invalidated candidate A starts from object x. These two optimizations may speed up the evaluation process considerably.

Towards Scalable Algorithms for Discovering Rough Set Reducts

135

7.2 Evaluating Candidates for Approximate Reducts The EvaluateCandidates procedure takes 4 arguments: k attribute candidates for agreducts Rk, k attribute sets that are known not to be ag-super-reducts Ak, the generalized decisions determined by atomic descriptors Γ, and known maximal non- agapproximate-super-reducts NSR. For each object read from DT’, the candidates in Rk that should be evaluated against this object are identified. These are candidates A such that A.id equals the identifier of the object. Let x be the object under consideration and A be a candidate such that A.id = x.identifier. The upper bound ∂ on ∂A(x) is calculated from the generalized decisions determined by the atomic descriptors stored in Γ. If ∂ equals x.∂AT, then A satisfies property (ag) for object x and still has a chance to be an ag-reduct. Hence, A.id is incremented to indicate that A should be evaluated against the next object after x in DT’ too. Otherwise, if ∂ ≠ x.∂AT, then A is certainly not an ag-reduct and thus is moved from candidates Rk to non-ag-super-reducts Ak. Additionally, the MaximalNonAGSuperReduct procedure (see Section 7.3) is called to determine a quasi maximal superset (nsr) of A that does not satisfy property (ag) for object x either. If nsr obtains the maximal possible length (i.e. |nsr| = |AT|), AT is returned as the maximal set the approximate generalized decision of which differs from the real AT-generalized decision, and the procedure stops. Otherwise, the found non-ag-super-reduct is stored in NSR’. Since the evaluation of candidates against objects may result in moving all candidates from Rk to Ak, scanning of DT’ is stopped as soon as all candidates turned out false ones. The last step of the EvaluateCandidates procedure consists in updating maximal non-ag-super-reducts NSR with NSR’. Please note that k attribute sets are not stored in the final NSR since they are useless for identifying non-super-reducts among l attribute candidates, where l > k. procedure EvaluateCandidates(var Rk, var Ak, in Γ, var NSR); /* assert: Γ = ∪a∈AT, v∈Va {{(a,v), ∂(a,v))} */ NSR’ = {}; for each object x in DT’ do begin for each candidate A in Rk do if A.id = x.identifier then begin ∂ = ∩a∈A ∂(a, x.a); // note: each ((a, x.a), ∂(a, x.a)) ∈ Γ if ∂ ≠ x.∂AT then begin // or equivalently: if | ∂ | ≠ | x.∂AT | then move A from Rk to Ak; nsr = MaximalNonAGSuperReduct(A, x, ∂ , Γ); // find a quasi maximal non-ag-super-reduct if nsr = AT then begin NSR = {AT}; return endif; // or equivalently: if |nsr| = |AT| then add nsr to NSR’; else A.id = x.identifier + 1 // A should be evaluated against the next object endif endif; if Rk = {} then break; endfor; NSR = MAX((NSR’ \ NSRk’) ∪ (NSR \ NSRk)); return;

136

Marzena Kryszkiewicz and Katarzyna Cichoń

7.3 Calculating Quasi Maximal Non-approximate Generalized Super-reducts The MaximalNonAGSuperReduct function is called whenever a candidate, say A, does not satisfy property (ag) for some object x. This function returns a quasi maximal superset of A that does not satisfy property (ag) for x. Clearly, there may be many such supersets of A; however the function creates and evaluates supersets of A in a specific order. Namely, nsr variable, which initially equals A, is extended in each iteration with one attribute (assigned to variable a) that is next after the one recently added to nsr. Please note that the first attribute in AT is assumed to be next to the last attribute in AT. The creation of supersets stops when an evaluated attribute nsr∪{a} satisfies property (ag) for object x. Then, MaximalNonAGSuperReduct returns nsr as a known maximal superset of A, which is not an ag-super-reduct. function MaximalNonAGSuperReduct(in A, in x, in ∂, in Γ); /* assert: ∂ ≠ x.∂AT */ nsr = A; ∂nsr = ∂; previous_a = last attribute in A; for (i=1; i 1 then if NSR ≠ {} then // ag-core is not an ag-reduct as there is its superset in NSR NSR = NSR \ NSR|core| // or equivalently NSR = NSR \ {core}; else begin R|core| = {core}; A|core| = {}; EvaluateCandidates(R|core|, A|core|, Γ, NSR); if |NSR|AT|| = 1 then return (AT, AT); endif; if R|core| = {core} then return(core, R|core|) // or equivalently if |R|core|| = 1 then else begin startLevel = |core| + 1; RstartLevel = {}; AstartLevel = {}; forall {a}∈A1 such that a∉core do begin A = core ∪ {a}; A.id = max(core.id, {a}.id); // candidates should contain ag-core RstartLevel = RstartLevel ∪ {A} endfor; forall B ∈ NSR do move subsets of B from RstartLevel to AstartLevel; endif endif; for (k = startLevel; Ak ≠ {} ∨ Rk ≠ {}; k++) do begin /* ag-reducts’ regular search */ if Rk ≠ {} then begin /* find and move non-ag-reducts from Rk to Ak and determine maximal non-ag-super-reducts NSR */ EvaluateCandidates(Rk, Ak, Γ, NSR); if |NSR|AT|| = 1 then return (AT, AT) endif; elseif |NSR| = 1 then return (core; ∪k Rk); // optional optimizing step endif; GRAGen(Rk+1, Ak+1, Ak, NSR); // create (k+1)-candidates from k attribute non-ag-reducts endfor; return (core; ∪k Rk);

Towards Scalable Algorithms for Discovering Rough Set Reducts

139

The CoreGRA algorithm, we propose, finds not only ag-reducts, but also their core. The layout of CoreGRA reminds that of GRA. CoreGRA, however, differs from GRA, in that it evaluates 1 attribute candidates in special way that provides sufficient information to determine the ag-core, and next creates subsequent candidates only as supersets of the found ag-core. CoreGRA calls the EvaluateCandidate1 procedure (see Section 8.2) in order to evaluate 1 attribute candidates. Unlike the EvaluateCandidate procedure, EvaluateCandidate1 guarantees that all maximal |AT|-1 nonag-super-reducts will be determined and returned in NSR. Using this information, the ag-core will then be calculated according to its definition. If the ag-core is an empty set, then 2 attribute and longer candidates are created and evaluated as in GRA. Otherwise, all sets in NSR that are not supersets of the ag-core are deleted, since the only candidates considered in CoreGRA will be the ag-core and its supersets. If the ag-core contains only one attribute, it is not evaluated because singleton attributes were already evaluated. The ag-core is not evaluated also in the case, when NSR, already restricted to non-ag-super-reducts being the core’s supersets, is not empty. In this case, the ag-core is also a non-ag-super-reduct as a subset of some non-ag-super-reduct in NSR. Otherwise, the ag-core is evaluated. Provided the ag-core is found an ag-reduct, it is returned as the only ag-reduct. If the ag-core is not a reduct, the new candidates R|core|+1 are created by merging the core with the remaining attributes in AT. Clearly, the new candidates which have supersets in maximal known non-ag-super-reducts NSR, are not ag-reducts either, and hence are moved from R|core|+1 to A|core|+1. From now on, CoreGRA is performed in the same way as GRA. It is expected that CoreGRA should perform better than GRA, when the ag-core consists of a sufficient number of attributes. Then fewer iterations should be performed and probably fewer candidates will be evaluated. Nevertheless, when the number of attributes in the ag-core is small, CoreGRA may be less effective than GRA because of the more exhaustive evaluation of 1 attribute candidates (their nsr fields are likely to be evaluated against the entire decision table in CoreGRA). 8.2 Evaluating Singleton Candidates Below we describe the EvaluateCandidates1 procedure, which is primarily intended to be applied only to 1 attribute candidates in CoreGRA, although it can be applied for evaluating candidates of any length. It is assumed that an additional field nsr is associated with each k attribute candidate A in Rk. The EvaluateCandidates1 procedure differs from EvaluateCandidates in that after discovering that a candidate A is not an ag-reduct, it is not removed from Rk immediately. Nevertheless, EvaluateCandidates1 stops advancing A.id field as soon as the first object invalidating A is found (like EvaluateCandidates does). In such a case, instead of evaluating A, its nsr field is extended and evaluated against the remaining objects in the decision table as long as nsr obtains the maximal possible length (i.e. |nsr| = |AT|) or the end of the decision table is reached. In the former case, AT is returned as the maximal set the approximate generalized decision of which differs from

140

Marzena Kryszkiewicz and Katarzyna Cichoń

the real AT-generalized decision, and the procedure stops. In the latter case, the remaining candidates A in Rk that turned out not ag-reducts (i.e. such that A.id ≠ |DT|+1), are moved to Ak and NSR’ is updated with their nsr fields. procedure EvaluateCandidates1(var Rk, var Ak, in Γ, var NSR); NSR’ = {}; for each object x in DT do begin for each candidate A in Rk do begin ∂ = ∩a∈A.nsr ∂(a, x.a); // note: each ((a,x.a), ∂(a,x.a)) ∈ Γ if ∂ ≠ x.∂AT then begin // or equivalently: if |∂t| = |x.∂AT| then A.nsr = MaximalNonAGSuperReduct(A.nsr, x, ∂, Γ); // find a maximal non-ag-super-reduct if A.nsr = AT then begin NSR = {AT}; return endif // or equivalently: if |A.nsr| = |AT| then elseif A.id = x.identifier then A.id = x.identifier + 1 // evaluate A’s supersets against the next object endif; endfor; if Rk = {} then break; endfor; for each candidate A in Rk do // A is not an ag-reduct if A.id ≠ |DT|+1 then move A from Rk to Ak; add A.nsr to NSR’ endif; NSR = MAX(NSR’ \ NSRk’); // NSR = MAX((NSR’ \ NSRk’) ∪ (NSR \ NSRk)) for k > 1 return;

8.3 Illustration of CoreGRA In this section, we illustrate how CoreGRA searches ag-reducts in the decision table DT from Table 1. Table 7 shows how candidates change in each iteration before and after validation against the reduced decision table DT’ from Table 2. After 1 attribute candidates were evaluated by EvaluateCandidates1, NSR became equal to {{abce}, {de}}. Thus, {abce} was the only set in NSR the length of which was equal to |AT|-1. Hence, the ag-core was determined as AT\{abce} = {d}. Since the new candidates were to be supersets of the ag-core, all sets from NSR that were not supersets of the ag-core were pruned and NSR became equal to {{de}}. The agcore {d} is not an ag-reduct, as it was not present in the set of the positively evaluated candidates R1 (here: R1 = ∅). New candidates were created by merging the ag-core with the remaining attributes in AT resulting in the following 4 attribute candidates: {ad}, {bd}, {cd}, {de}. One of them ({de}) was known a priori not to be an ag-reduct as a subset of the known nonag-super-reduct {de} in NSR. From now on, CoreGRA proceeded as GRA. The execution of the CoreGRA algorithm resulted in enumeration of 9 attribute sets instead of 21 (see Section 7.5). Table 7. Rk, Ak, and NSR in subsequent iterations of CoreGRA. k

Rk before validation

1 {a}[id:1], {b}[id:1], {c}[id:1], {d}[id:1], {e}[id:1] 2 {ad}[id:2], {bd}[id:2], {cd}[id:3]

Ak before Rk after validation validation

{de}[id:2]

{ad}[id:8], {cd}[id:8]

Ak after validation {a}[id:2], {b}[id:1], {c}[id:3], {d}[id:2], {e}[id:2] {bd}[id:2], {de}[id:2]}

NSR’ {abc}, {bc}, {c}, {de}, {abce} {bde}

NSR {abce}, {de} {bde}

Towards Scalable Algorithms for Discovering Rough Set Reducts

141

9 Discovering Approximate Certain Reducts Approximate certain reducts of DT are defined by means of generalized decisions only of objects in DT with singleton AT-generalized decisions. This observation suggests that the GRA and CoreGRA algorithms shall calculate ac-reducts of DT correctly, provided the candidate attribute sets are evaluated only against the objects in DT with singleton AT-generalized decisions. This can be achieved in two ways: a) either the initialization of candidates in the GRA procedure should be preceded by an additional operation that removes all objects from DT (or DT’) that have non-singleton AT-generalized decisions and renumbers the remaining objects; b) or the evaluation of candidates should be modified so that to ignore objects with non-singleton AT-generalized decisions safely (please, see [13]).

10 Conclusion In the article, we have offered two new algorithms: RAD and CoreRAD for discovering all exact generalized (and by this also possible) and certain reducts from decision tables. In addition, CoreRAD determines the core. Both algorithms require the calculation of all maximal attribute sets MNSR that are not super-reducts. An Apriorilike method of determining reducts based on MNSR was proposed. Our method of determining MNSR is orthogonal to the methods that determine a discernibility matrix (DM), which stores information on sets of attributes each of which discerns at least one pair of objects that should be discerned, and return the family of all such minimal sets (MDM). The reducts are then found from MDM by applying Boolean reasoning. The calculation of MNSR (as well as MDM) requires comparing each pair of objects in the decision table and finding maximal (minimal) attribute sets among those that are the result of the objects’ comparison. This operation is very costly when the number of objects in a decision table is large. In order to overcome this problem one may use a reduced table (AT, {∂AT}), which stores one object instead of many original objects that are indiscernible on AT and ∂AT. Nevertheless, when the number of objects in the reduced table is still large or the number of MNSR (MDM) is large, the calculation of reducts may be infeasible. Our preliminary experiments indicate that the determination of MNSR is a bottleneck of the proposed RAD-like algorithms in such cases. To the contrary, the proposed Apriori-like method of determining reducts based on MNSR is very efficient. In the case, when the determination of MNSR is infeasible, we advocate to search approximate reducts. In the article, we have defined such approximate reducts based on the properties of a generalized decision function. We have shown that for each A-generalized decision one may derive its upper bound (A-approximate generalized decision) from elementary a-generalized decisions, where a∈A. Whereas exact generalized (certain) reducts preserve the AT-generalized decision for all objects (for objects with singleton generalized decisions), each approximate generalized (certain) reduct A guarantees that A-approximate generalized decision is equal to the


AT-generalized decision for all objects (for objects with singleton generalized decisions). An exception to the rule is the case when there is an object for which the approximate AT-generalized decision differs from the actual AT-generalized decision. Then the entire set of conditional attributes AT is defined as a reduct. We have proved that approximate generalized and certain reducts are supersets of exact reducts of the respective types. In addition, approximate generalized reducts are supersets of exact possible reducts. We have presented the GRA and CoreGRA algorithms for discovering approximate generalized (and thus also possible) reducts and certain reducts from very large decision tables. The experiments we have carried out and reported in [13] show that the GRA-like algorithms are scalable with respect to the number of objects in a decision table and that CoreGRA tends to outperform GRA as the number of conditional attributes increases. For a few conditional attributes, however, GRA may find reducts faster. Nevertheless, the experiments need to be continued to fully recognize the performance characteristics of particular GRA-like algorithms. Finally, we note that all the proposed algorithms are also capable of discovering all the discussed types of reducts from incomplete decision tables. The only difference consists in a slightly different determination of the generalized decision value for atomic descriptors, namely ∂_A(x) = {d(y) | y ∈ S_A(x)}, where S_A(x) = {y ∈ O | ∀a ∈ A: (a(x) = a(y)) ∨ (a(x) is NULL) ∨ (a(y) is NULL)} (see e.g. [12]); a small sketch of this determination follows. In the future, we intend to develop scalable algorithms for discovering all exact reducts.
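The following sketch illustrates the tolerance-based generalized decision for incomplete tables given above. The column-oriented table layout (with None standing for NULL) is an assumption made only for this illustration.

```python
# Generalized decision for incomplete tables: ∂_A(x) = {d(y) | y ∈ S_A(x)},
# where S_A(x) treats NULL (None) as compatible with any value.

def tolerance_class(table, decisions, x, attrs):
    """S_A(x): objects that agree with x on every attribute in attrs,
    with None compatible with anything."""
    def compatible(y):
        return all(
            table[a][x] is None or table[a][y] is None or table[a][x] == table[a][y]
            for a in attrs
        )
    return [y for y in range(len(decisions)) if compatible(y)]

def generalized_decision(table, decisions, x, attrs):
    """∂_A(x) = {d(y) : y ∈ S_A(x)}."""
    return {decisions[y] for y in tolerance_class(table, decisions, x, attrs)}

if __name__ == "__main__":
    table = {"a": [1, 1, None, 2], "b": [0, None, 0, 1]}
    decisions = ["d1", "d1", "d2", "d2"]
    print(generalized_decision(table, decisions, 0, ["a", "b"]))  # {'d1', 'd2'}
```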

References

1. Agrawal, R., Mannila, H., Srikant, R., Toivonen, H., Verkamo, A.I.: Fast Discovery of Association Rules. In: Advances in KDD. AAAI, Menlo Park, California (1996) 307-328
2. Bazan, J., Skowron, A., Synak, P.: Dynamic Reducts as a Tool for Extracting Laws from Decision Tables. In: Proc. of ISMIS '94, Charlotte, USA. LNAI, Vol. 869, Springer-Verlag (1994) 346-355
3. Bazan, J., Nguyen, H.S., Nguyen, S.H., Synak, P., Wróblewski, J.: Rough Set Algorithms in Classification Problem. In: L. Polkowski, S. Tsumoto and T.Y. Lin (eds.): Rough Set Methods and Applications. Physica-Verlag, Heidelberg, New York (2000) 49-88
4. Jelonek, J., Krawiec, K., Stefanowski, J.: Comparative Study of Feature Subset Selection Techniques for Machine Learning Tasks. In: Proc. of IIS '98, Malbork, Poland (1998) 68-77
5. John, H.G., Kohavi, R., Pfleger, K.: Irrelevant Features and the Subset Selection Problem. In: Machine Learning: Proc. of the Eleventh International Conference, Morgan Kaufmann Publishers, San Francisco, CA (1994) 121-129
6. Kohavi, R., Frasca, B.: Useful Feature Subsets and Rough Set Reducts. In: Proc. of the Third International Workshop on Rough Sets and Soft Computing, San Jose, CA (1994)
7. Kryszkiewicz, M.: The Algorithms of Knowledge Reduction in Information Systems. Ph.D. Thesis, Warsaw University of Technology, Institute of Computer Science (1994)
8. Kryszkiewicz, M., Rybinski, H.: Finding Reducts in Composed Information Systems. Fundamenta Informaticae, Vol. 27, No. 2-3 (1996) 183-196
9. Kryszkiewicz, M.: Strong Rules in Large Databases. In: Proc. of IPMU '98, Paris, France, Vol. 2 (1998) 1520-1527
10. Kryszkiewicz, M., Rybinski, H.: Knowledge Discovery from Large Databases using Rough Sets. In: Proc. of EUFIT '98, Aachen, Germany, Vol. 1 (1998) 85-89


11. Kryszkiewicz, M.: Comparative Study of Alternative Types of Knowledge Reduction in Inconsistent Systems. International Journal of Intelligent Systems, Wiley, Vol. 16, No. 1 (2001) 105-120
12. Kryszkiewicz, M.: Rough Set Approach to Rules Generation from Incomplete Information Systems. In: The Encyclopedia of Computer Science and Technology. Marcel Dekker, Inc., New York, Vol. 44 (2001) 319-346
13. Kryszkiewicz, M., Cichoń, K.: Scalable Methods of Discovering Rough Sets Reducts. ICS Research Report 28/2003, Warsaw University of Technology (2003)
14. Lin, T.Y.: Rough Set Theory in Very Large Databases. In: Proc. of CESA IMACS '96, Lille, France, Vol. 2 (1996) 936-941
15. Modrzejewski, M.: Feature Selection using Rough Sets Theory. In: Proc. of the European Conference on Machine Learning (1993) 213-226
16. Nguyen, S.H., Skowron, A., Synak, P., Wróblewski, J.: Knowledge Discovery in Databases: Rough Set Approach. In: Proc. of IFSA '97, Prague, Vol. II (1997) 204-209
17. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Vol. 9 (1991)
18. Pawlak, Z., Skowron, A.: A Rough Set Approach to Decision Rules Generation. ICS Research Report 23/93, Warsaw University of Technology (1993)
19. Romanski, S.: Operations on Families of Sets for Exhaustive Search, Given a Monotonic Boolean Function. In: Proc. of Intl. Conf. on Data and Knowledge Bases, Israel (1988)
20. Skowron, A., Rauszer, C.: The Discernibility Matrices and Functions in Information Systems. In: Intelligent Decision Support: Handbook of Applications and Advances of Rough Sets Theory. Kluwer Academic Publishers (1992) 331-362
21. Skowron, A., Swiniarski, R.W.: Information Granulation and Pattern Recognition. In: S.K. Pal, L. Polkowski, A. Skowron (eds.): Rough-Neural Computing. Techniques for Computing with Words. Springer-Verlag, Heidelberg (2004)
22. Slezak, D.: Approximate Reducts in Decision Tables. In: Proc. of IPMU '96, Granada, Spain, Vol. 3 (1996) 1159-1164
23. Slezak, D.: Searching for Frequential Reducts in Decision Tables with Uncertain Objects. In: Proc. of RSCTC '98, Warsaw. Springer-Verlag, Berlin (1998) 52-59
24. Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers, Vol. 11 (1992)
25. Stepaniuk, J.: Approximation Spaces, Reducts and Representatives. In: Skowron, A., Polkowski, L. (eds.): Rough Sets in Data Mining and Knowledge Discovery. Springer-Verlag, Berlin (1998)
26. Susmaga, R.: Experiments in Incremental Computation of Reducts. In: Skowron, A., Polkowski, L. (eds.): Rough Sets in Data Mining and Knowledge Discovery. Springer-Verlag, Berlin (1998)
27. Susmaga, R.: Parallel Computation of Reducts. In: Proc. of RSCTC '98, Warsaw. Springer-Verlag, Berlin (1998) 450-457
28. Susmaga, R.: Computation of Shortest Reducts. In: Foundations of Computing and Decision Sciences, Poznan, Poland, Vol. 2, No. 23 (1998)
29. Susmaga, R.: Effective Tests for Inclusion Minimality in Reduct Generation. In: Foundations of Computing and Decision Sciences, Vol. 4, No. 23 (1998) 219-240
30. Tannhäuser, M.: Efficient Reduct Computation. M.Sc. Thesis, Institute of Mathematics, Warsaw University, Warsaw (1994)
31. Wroblewski, J.: Finding Minimal Reducts Using Genetic Algorithms. In: Proc. of the 2nd Annual Joint Conference on Information Sciences, Wrightsville Beach, NC (1995) 186-189

Variable Precision Fuzzy Rough Sets

Alicja Mieszkowicz-Rolka and Leszek Rolka

Department of Avionics and Control, Rzeszów University of Technology,
ul. W. Pola 2, 35-959 Rzeszów, Poland
{alicjamr,leszekr}@prz.edu.pl

Abstract. In this paper the variable precision fuzzy rough sets (VPFRS) concept will be considered. The notions of the fuzzy inclusion set and the α-inclusion error based on the residual implicators will be introduced. The level of misclassiﬁcation will be expressed by means of α-cuts of the fuzzy inclusion set. Next, the use of the mean fuzzy rough approximations will be postulated and discussed. The concept of VPFRS will be deﬁned using the extended version of the variable precision rough sets (VPRS) model, which utilises a general allowance for levels of misclassiﬁcation expressed by two parameters: lower (l) and upper (u) limit. Remarks concerning the variable precision rough fuzzy sets (VPRFS) idea will be given. An example will illustrate the proposed VPFRS model.

1 Introduction

Rough set theory [15] was originally based on the notions of classical set theory. Dubois and Prade [3] and Nakamura [14] were among the first who showed that the basic idea of a rough set, given in the form of lower and upper approximations, can be extended in order to approximate fuzzy sets defined in terms of membership functions. This makes it possible to analyse information systems with fuzzy attributes. The idea of fuzzy rough sets was pursued and investigated in many papers, e.g. [1, 2, 4-6, 9, 16, 17]. An important extension of rough set theory, helpful in the analysis of inconsistent decision tables, is the variable precision rough sets (VPRS) model. It seems natural and valuable to combine the concepts of VPRS and fuzzy rough sets. The motivation for doing this is supported by the fact that the extended fuzzy rough approximations defined by Dubois and Prade have the same disadvantages as their counterparts in the original (crisp) rough set theory [12]. Even a relatively small inclusion error of a similarity class results in rejection (membership value equal to zero) of that class from the lower approximation. A small inclusion degree can also lead to an excessive increase of the upper approximation. These properties can be important especially in the case of large universes, e.g. generated from dynamic processes. In order to overcome the described drawbacks we generalised the idea of Ziarko for expressing the inclusion error of one fuzzy set in another. If we want to determine the lower and upper approximation using real data sets, then we must take into account the quality of the data, which is usually


influenced by noise and errors. The VPRS concept admits some level of misclassification, but we go one step further and additionally propose an alternative way of evaluating the variable precision fuzzy rough approximations. We suggest determination of the mean membership degree. This is in contrast to using only limit values of membership functions and disregarding the statistical properties of the analysed large information system. We start by recalling the basic notions of VPRS.

2 Variable Precision Rough Sets Model

The concept of VPRS has proven to be particularly useful in analysis of inconsistent decision tables obtained from dynamic control processes [10]. We have utilised it in order to identify the decision model of a military pilot [11]. The idea of VPRS is based on a changed relation of set inclusion given in (1) and (2) [18], defined for any nonempty crisp subsets A and B of the universe X. We say that the set A is included in the set B with an inclusion error β:

A ⊆^β B ⟺ e(A, B) ≤ β ,    (1)

e(A, B) = 1 − card(A ∩ B) / card(A) .    (2)

The quantity e(A, B) is called the inclusion error of A in B. The value of β should be limited: 0 ≤ β < 0.5. Katzberg and Ziarko proposed later [7] an extended version of VPRS with asymmetric bounds l and u for the required inclusion degree instead of the admissible inclusion error β, which satisfy the inequality 0 ≤ l < u ≤ 1.

In what follows we consider a family Φ = {F1, F2, . . . , Fn} of fuzzy sets defined on the universe X which satisfies the property of covering

∀x ∈ X, max_{i=1,...,n} µFi(x) > 0

and the property of disjointness [3]

∀i, j, i ≠ j, sup_{x∈X} min(µFi(x), µFj(x)) < 1 .    (10)

We should use a mapping ω from the domain Φ into the domain of the universe X, if we want to express in X the membership functions of the lower and upper approximation given by (7) and (8). Assuming that Φ is equal to the quotient set of X by a fuzzy similarity relation R, we can determine the membership functions of the fuzzy extension of the lower and upper approximation of a fuzzy set F by R [3]:

∀x ∈ X, µ_{ω(R̲F)}(x) = inf_{y∈X} µR(x, y) → µF(y) ,    (11)

∀x ∈ X, µ_{ω(R̄F)}(x) = sup_{y∈X} µR(x, y) ∗ µF(y) .    (12)

In such a case the fuzzy extension ω(A) of a fuzzy set A on X/R can be expressed as follows:

µ_{ω(A)}(x) = µA(Fi), if µFi(x) = 1 .    (13)

Later in this paper we use a fuzzy compatibility relation which is symmetric and reflexive. One can easily show in this case that the definitions (7) and (8) are equivalent to the definitions (11) and (12). Indeed, by using a symmetric and reflexive fuzzy relation we obtain a family of fuzzy compatibility classes. Any elements x and y of the universe X for which µR(x, y) = 1 belong with a membership degree equal to 1 to the same fuzzy compatibility class. In order to determine the membership degrees (11) and (12) for some x, we can take merely the membership degrees (7) and (8) obtained for the compatibility class to which x belongs with a membership degree equal to 1.


Another general approach was given by Greco, Matarazzo and Słowiński, who proposed [5] approximation of fuzzy sets by means of fuzzy relations which are only reflexive. An important issue is the choice of the implicators used in the definitions (7), (8), (11) and (12). Apart from applying S-implications, Dubois and Prade also considered the R-implication variant of fuzzy rough sets [3]. A comprehensive study concerning a general concept of fuzzy rough sets was done more recently by Radzikowska and Kerre in [17]. They analysed the properties of fuzzy rough approximations based on three classes of fuzzy implicators: S-implicators, R-implicators and QL-implicators. As we state below, R-implicators constitute a good basis for constructing the variable precision fuzzy rough sets model.

4 Variable Precision Fuzzy Rough Sets Model

An extension of the fuzzy rough sets concept in the sense of Ziarko requires a method of determination of the lower and upper approximation in which only a significant part of the approximating set is taken into account. In other words, we should evaluate the membership degree of the approximating set in the lower or upper approximation by regarding only those of its elements which are included to a sufficiently high degree in the approximated set. This way we allow some level of misclassification. Before we try to express the inclusion error of one fuzzy set in another, we first recall the classical definition of fuzzy set inclusion [8]. For any fuzzy sets A and B defined on the universe X, we say that the set A is included in the set B:

A ⊆ B ⟺ ∀x ∈ X, µA(x) ≤ µB(x) .    (14)

If the condition (14) is satisfied, then we say that the degree of inclusion of A in B is equal to 1 (the inclusion error is equal to 0). In our approach we want to evaluate the inclusion degree of a fuzzy set A in a fuzzy set B regarding particular elements of A. We obtain in such a way a new fuzzy set, which we call the fuzzy inclusion set of A in B and denote by A^B. To this end we apply an implication operator → as follows:

µ_{A^B}(x) = µA(x) → µB(x) if µA(x) > 0, and µ_{A^B}(x) = 0 otherwise.    (15)

Only the proper elements of A (the support of A) are considered as relevant. The definition (15) is based on an implication operator → in order to maintain the compatibility between the approach of Dubois and Prade and the VPFRS model in limit cases. This will be stated later in this section by Propositions 2 and 3. Examples of inclusion sets are given in Section 7. Table 2 contains the membership functions of the approximating set X1, the approximated set F1 and the inclusion sets X1^F1 that are evaluated using the implication operators of Gaines and Łukasiewicz (discussed below).


We should consider the choice of a suitable implication operator →. Based on (14) we put a requirement on the degree of inclusion of A in B with respect to any element x belonging to the support of the set A (µA(x) > 0). We assume that the degree of inclusion with respect to x should always be equal to 1 if the inequality µA(x) ≤ µB(x) is satisfied for that x:

µA(x) → µB(x) = 1, if µA(x) ≤ µB(x) .    (16)

In general, not all implicators satisfy this requirement. For example, by applying the Kleene-Dienes S-implicator x → y = max(1 − x, y) we obtain the value 0.6, and for the early Zadeh QL-implicator x → y = max(1 − x, min(x, y)) the value 0.5, if we take x = 0.5 < y = 0.6. Let us consider the definition of R-implicators (residual implicators), which are based on a t-norm ∗:

x → y = sup{λ ∈ [0, 1] : x ∗ λ ≤ y} .    (17)

One can easily prove that any R-implicator satisfies the requirement (16). In the last section we demonstrate an example where two popular R-implicators were used:

- the Łukasiewicz implicator: x → y = min(1, 1 − x + y),
- the Gaines implicator: x → y = 1 if x ≤ y, and y/x otherwise.

Radzikowska and Kerre proved that fuzzy rough approximations based on the Łukasiewicz implicator satisfy all the properties which were considered in [17]. This is because the Łukasiewicz implicator is both an S-implicator and a residual implicator. In order to extend the idea of Ziarko to fuzzy sets we should express the error that would be made when the weakest elements of the approximating set, in the sense of their membership in the fuzzy inclusion set A^B, were discarded. To this end we apply the well-known notion of an α-cut [8], by which for any given fuzzy set A a crisp set Aα is obtained as follows:

Aα = {x ∈ X : µA(x) ≥ α} ,    (18)

where α ∈ [0, 1]. We introduce the measure of the α-inclusion error eα(A, B) of any nonempty fuzzy set A in a fuzzy set B:

eα(A, B) = 1 − power(A ∩ A^B_α) / power(A) .    (19)

Power denotes here the cardinality of a fuzzy set. For any finite fuzzy set F defined on X,

power(F) = Σ_{i=1}^{n} µF(xi) .    (20)
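The following Python sketch puts the definitions (15) and (17)-(20) together. Fuzzy sets are represented as dictionaries mapping elements to membership degrees; this representation and the function names are assumptions made only for the illustration.

```python
# Residual implicators, the fuzzy inclusion set A^B, α-cuts and the
# α-inclusion error, following definitions (15) and (17)-(20).

def gaines(x, y):                    # Gaines implicator
    return 1.0 if x <= y else y / x

def lukasiewicz(x, y):               # Łukasiewicz implicator
    return min(1.0, 1.0 - x + y)

def inclusion_set(a, b, imp):
    """Fuzzy inclusion set A^B, definition (15)."""
    return {x: (imp(a[x], b.get(x, 0.0)) if a[x] > 0 else 0.0) for x in a}

def alpha_cut(f, alpha):             # definition (18)
    return {x for x, m in f.items() if m >= alpha}

def power(f):                        # definition (20)
    return sum(f.values())

def alpha_inclusion_error(a, b, imp, alpha):
    """α-inclusion error e_α(A, B), definition (19)."""
    cut = alpha_cut(inclusion_set(a, b, imp), alpha)
    restricted = {x: m for x, m in a.items() if x in cut}   # A ∩ A^B_α
    return 1.0 - power(restricted) / power(a)

if __name__ == "__main__":
    A = {"x1": 1.0, "x2": 0.6, "x3": 0.4}
    B = {"x1": 1.0, "x2": 0.3, "x3": 0.0}
    print(inclusion_set(A, B, lukasiewicz))          # {'x1': 1.0, 'x2': 0.7, 'x3': 0.6}
    print(alpha_inclusion_error(A, B, lukasiewicz, 0.7))   # 0.2
```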

Now, we show that the measure of inclusion error (2) given by Ziarko is a special case of the proposed measure (19).


Proposition 1. For any nonempty crisp sets A and B, and for α ∈ (0, 1], the α-inclusion error eα(A, B) is equivalent to the inclusion error e(A, B).

Proof. First, we show that for any crisp sets A and B the inclusion set A^B is equal to the intersection A ∩ B. For any crisp set C,

µC(x) = 1 for x ∈ C, and µC(x) = 0 for x ∉ C .    (21)

Every implicator → is a function satisfying 1 → 0 = 0, 1 → 1 = 1, 0 → 1 = 1, and 0 → 0 = 1. Thus, applying the definition (15), we get

µ_{A^B}(x) = µ_{A∩B}(x) = 1 if x ∈ A and x ∈ B, and 0 otherwise.    (22)

Taking into account (20) and (21), we get for any finite crisp set C

power(C) = card(C) .    (23)

Furthermore, applying (18) for any α ∈ (0, 1] we obtain

Cα = C .    (24)

By the equations (22), (23) and (24), we finally have

power(A ∩ A^B_α) / power(A) = power(A ∩ (A ∩ B)α) / power(A) = card(A ∩ B) / card(A) .

Hence, we obtain eα(A, B) = e(A, B) for any α ∈ (0, 1].

The use of α-cuts gives us the possibility to change gradually the level at which some of the members of the approximating set are discarded. The evaluation of the membership degree of the whole approximating set in the lower and upper approximation will then be done by respecting only the remaining elements of the approximating set. The level α can adopt any value from the infinite set (0, 1]. In practice, only a finite subset of (0, 1] will be applied. In our illustrative examples we used values of α obtained with a resolution equal to 0.01.

Let us now consider a partition of the universe X which is generated by a fuzzy compatibility relation R. We denote by Xi some compatibility class on X, where i = 1 . . . n. Any given fuzzy set F defined on the universe X can be approximated by the obtained compatibility classes. The u-lower approximation of the set F by R is a fuzzy set on X/R with the membership function which we define as follows:

µ_{R̲uF}(Xi) = f_i^u if there exists αu = sup{α ∈ (0, 1] : eα(Xi, F) ≤ 1 − u}, and 0 otherwise,    (25)

where

f_i^u = inf_{x∈S_i^u} µXi(x) → µF(x) ,
S_i^u = supp(Xi ∩ Xi^F_{αu}) .

The set S_i^u contains those elements of the approximating class Xi that are included in F at least to the degree αu, provided that such an αu exists. The membership f_i^u is then determined using the "better" elements from S_i^u instead of the whole class Xi. The given definition helps to prevent the situation when a few "bad" elements of a large class Xi significantly reduce the lower approximation of the set F. Furthermore, we suggest the use of R-implicators both for the evaluation of eα(Xi, F) and in place of the operator → in (25).

The l-upper approximation of the set F by R can be defined similarly, as a fuzzy set on X/R with the membership function given by:

µ_{R̄lF}(Xi) = f_i^l if there exists αl = sup{α ∈ (0, 1] : ēα(Xi, F) < 1 − l}, and 0 otherwise,    (26)

where

f_i^l = sup_{x∈S_i^l} µXi(x) ∗ µF(x) ,
S_i^l = supp(Xi ∩ (Xi ∩ F)_{αl}) ,
ēα(Xi, F) = 1 − power(Xi ∩ (Xi ∩ F)α) / power(Xi) .

For the l-upper approximation a similar explanation as for the u-lower approximation can be given. Conversely, we want to prevent the situation when a few "good" elements of a large class Xi significantly increase the upper approximation of F. The inclusion error is now based on the intersection Xi ∩ F (t-norm operator ∗) and denoted by ēα(Xi, F). It can be shown in the same way as for the inclusion error eα that ēα(A, B) = e(A, B) for any nonempty crisp sets A and B and α ∈ (0, 1].

Now, we demonstrate that the fuzzy rough sets of Dubois and Prade constitute a special case of the proposed variable precision fuzzy rough sets, if no inclusion error is allowed (u = 1 and l = 0).

Proposition 2. µ_{R̲1F}(Xi) = µ_{R̲F}(Xi) for any fuzzy set F and Xi ∈ X/R.

Proof. For u = 1, it is required that e_{α1}(Xi, F) = 0. This means that no elements of an approximating compatibility class Xi can be discarded.

I. Assume that

µ_{R̲F}(Xi) = inf_{x∈X} µXi(x) → µF(x) = c ∈ (0, 1] .

In that case there exists α1 = c, which is the largest possible value of α for which eα(Xi, F) = 0. This is because the same function µXi(x) → µF(x) is used for the determination of the inclusion set Xi^F. We evaluate f_i^1 using the set S_i^1, which is equal to the class Xi since no elements of Xi are discarded. Hence, we have µ_{R̲1F}(Xi) = µ_{R̲F}(Xi) = c.

II. Assume now that

µ_{R̲F}(Xi) = inf_{x∈X} µXi(x) → µF(x) = 0 .

There does not exist α ∈ (0, 1] for which eα(Xi, F) = 0. Any α ∈ (0, 1] would cause discarding some x ∈ Xi. In consequence, we get µ_{R̲1F}(Xi) = µ_{R̲F}(Xi) = 0 according to the definition (25).

Similarly, one can prove the next proposition, which holds for the l-upper fuzzy rough approximation.

Proposition 3. µ_{R̄0F}(Xi) = µ_{R̄F}(Xi) for any fuzzy set F and Xi ∈ X/R.

The fuzzy rough approximations based on limit values of membership functions are not always suitable for the analysis of real data. This is particularly justified in the case of large universes. The obtained results should correspond to the statistical properties of the analysed information systems. We need an approach that takes into account the overall set inclusion, and does not merely use a single value of a membership function (often determined from noisy data). Therefore, we additionally propose an alternative definition of fuzzy rough approximations, in which the mean value of membership (in the fuzzy inclusion set) over all used elements of the approximating class is utilised.

The mean u-lower approximation of the set F by R is a fuzzy set on X/R with the membership function which we define as follows:

µ_{R̲uF}(Xi) = f_i^u if there exists αu = sup{α ∈ (0, 1] : eα(Xi, F) ≤ 1 − u}, and 0 otherwise,    (27)

where

f_i^u = power(Xi^F ∩ Xi^F_{αu}) / card(Xi^F_{αu}) .

The mean l-upper approximation of the set F by R is a fuzzy set on X/R with the membership function defined by:

µ_{R̄lF}(Xi) = f_i^l if there exists αl = sup{α ∈ (0, 1] : ēα(Xi, F) < 1 − l}, and 0 otherwise,    (28)

where

f_i^l = power(Xi^F ∩ Xi^F_{αl}) / card(Xi^F_{αl}) .

The quantities f_i^u and f_i^l express the mean value of the inclusion degree of Xi in F, determined by using only those elements of Xi which are included in F at least to the degree αu and αl, respectively. A small computational sketch of these definitions is given below.
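The following sketch computes the infimum-based u-lower approximation (25) and its mean variant (27) for a single compatibility class with the Łukasiewicz implicator. The membership values of X1 and F1 used here are those listed later in the example of Section 7 (cf. Table 2); searching for αu over the membership grades occurring in the inclusion set, instead of the 0.01 grid mentioned above, is a simplification assumed for this sketch.

```python
# u-lower approximation of one class: infimum-based (25) and mean-based (27).

def lukasiewicz(x, y):
    return min(1.0, 1.0 - x + y)

def power(f):
    return sum(f.values())

def u_lower(mu_x, mu_f, u, imp=lukasiewicz):
    incl = {e: (imp(mu_x[e], mu_f[e]) if mu_x[e] > 0 else 0.0) for e in mu_x}
    alpha_u = None
    for alpha in sorted({m for m in incl.values() if m > 0}):
        cut = {e for e, m in incl.items() if m >= alpha}
        err = 1.0 - sum(mu_x[e] for e in cut) / power(mu_x)   # e_alpha(Xi, F)
        if err <= 1.0 - u:
            alpha_u = alpha                                    # keep the largest passing alpha
    if alpha_u is None:
        return 0.0, 0.0
    cut = {e for e, m in incl.items() if m >= alpha_u}
    support = [e for e in cut if mu_x[e] > 0]                  # S_i^u
    f_inf = min(incl[e] for e in support)                      # definition (25)
    f_mean = sum(incl[e] for e in cut) / len(cut)              # definition (27)
    return f_inf, f_mean

if __name__ == "__main__":
    X1 = {f"x{i}": m for i, m in enumerate(
        [1.0, 0.2, 0.2, 1.0, 1.0, 0.2, 0.2, 1.0, 0.2, 1.0], start=1)}
    F1 = {f"x{i}": m for i, m in enumerate(
        [1.0, 0.2, 0.2, 1.0, 0.0, 0.2, 0.0, 1.0, 0.0, 1.0], start=1)}
    print(u_lower(X1, F1, u=0.8))   # about (0.80, 0.96), cf. Table 3 (Ł-inf, Ł-mean)
```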


Observe that we admit only α ∈ (0, 1]. If the admissible inclusion error (1 − u) is equal to 0 and there exists any x with µXi(x) > 0 for which µXi(x) → µF(x) = 0, then the α-inclusion error eα(Xi, F) = 0 only for α = 0. The use of α = 0 would result in the same value of the membership function (27) for the admissible inclusion error equal to 0 and for some value of it greater than 0. Moreover, by avoiding α = 0 we achieve full accordance with the original definitions of Ziarko in the case of crisp sets and a crisp equivalence relation R. In such a case the values of f_i^u and f_i^l are always equal to 1.

Proposition 4. For any crisp set A and crisp equivalence relation R the mean variable precision fuzzy rough approximations of A by R are equal to the variable precision rough approximations of A by R.

Proof. The equivalence relation R generates a partition of the universe X into crisp equivalence classes Xi, i = 1 . . . n. By Proposition 1 and its proof: eα(Xi, A) = e(Xi, A), Xi^A_{αu} = Xi^A, and power(Xi^A) = card(Xi^A) for α ∈ (0, 1]. Thus, we get for the mean u-lower approximation of A by R

f_i^u = power(Xi^A ∩ Xi^A_{αu}) / card(Xi^A_{αu}) = card(Xi^A) / card(Xi^A) = 1 ,

µ_{R̲uA}(Xi) = 1 if e(Xi, A) ≤ 1 − u, and 0 otherwise,    (29)

and for the mean l-upper approximation of A by R

f_i^l = power(Xi^A ∩ Xi^A_{αl}) / card(Xi^A_{αl}) = card(Xi^A) / card(Xi^A) = 1 ,

µ_{R̄lA}(Xi) = 1 if e(Xi, A) < 1 − l, and 0 otherwise.    (30)

Taking into account all approximating equivalence classes Xi and applying (13), we obtain from (29) and (30) the VPRS approximations (5) and (6) on the domain X.

5 Variable Precision Rough Fuzzy Sets Model

The idea of rough fuzzy sets was introduced by Dubois and Prade in order to approximate fuzzy concepts by means of equivalence classes Xi, i = 1 . . . n, generated by a crisp equivalence relation R defined on X. The lower and upper approximations of a fuzzy set F by R are fuzzy sets on X/R with membership functions defined as follows [3]:

µ_{R̲F}(Xi) = inf{µF(x) : x ∈ Xi} ,    (31)

µ_{R̄F}(Xi) = sup{µF(x) : x ∈ Xi} .    (32)

The pair of sets (R̲F, R̄F) is called a rough fuzzy set [3].


Proposition 5. For every implication operator →, every t-norm ∗, and every crisp equivalence relation R, fuzzy rough sets are equivalent to rough fuzzy sets.

Proof. Since we use crisp equivalence classes Xi, we have µXi(x) = 1 for all elements x ∈ Xi. Every R-implicator, S-implicator, and QL-implicator is a border implicator [17], which satisfies the condition 1 → x = x for all x ∈ [0, 1]. Every t-norm ∗ satisfies the boundary condition 1 ∗ x = x. Thus, we get

µXi(x) → µF(x) = µF(x) ,
µXi(x) ∗ µF(x) = µF(x) .

Therefore, the definitions (31) and (32) are special cases of (7) and (8).

Based on Proposition 5 we can easily adapt the variable precision fuzzy rough approximations from the previous section in order to obtain a simpler form of the variable precision rough fuzzy approximations. In [12] we proposed a concept of variable precision rough fuzzy sets in the case of symmetric bounds (admissible inclusion error β). We defined [12] the β-lower and β-upper approximation of a fuzzy set F respectively as follows:

µ_{R̲βF}(Xi) = inf{µF(x) : x ∈ Si} if es(Xi, F) ≤ β, and 0 otherwise,    (33)

µ_{R̄βF}(Xi) = sup{µF(x) : x ∈ Si} if es(Xi, F) < 1 − β, and 0 otherwise,    (34)

where Si = supp(Xi ∩ F) is the support set of the intersection of Xi and F, and es is the support inclusion error, which can be defined for any nonempty fuzzy sets A and B as

es(A, B) = 1 − card(supp(A ∩ B)) / card(supp(A)) .    (35)

The mean rough fuzzy β-approximations were defined [12] as follows:

µ_{R̲βF}(Xi) = fi if es(Xi, F) ≤ β, and 0 otherwise,    (36)

µ_{R̄βF}(Xi) = fi if es(Xi, F) < 1 − β, and 0 otherwise,    (37)

where

fi = power(Xi ∩ F) / card(supp(Xi ∩ F)) .    (38)

By comparing (33), (34), (36) and (37) with the definitions (25), (26), (27) and (28) respectively, and taking into account Proposition 5, one can easily show that the former definitions constitute a restricted version of the new ones. This can be done by setting u = 1 − β and l = β, and by narrowing the interval (0, 1] of α so that only those elements of the approximating crisp class Xi are eliminated which do not belong to the fuzzy set F at all. This is the worst case (1 → 0), in which the implication produces the value of 0 by definition. In the remainder of the paper we use only the refined variable precision fuzzy rough approximations given above.

6 Decision Tables with Fuzzy Attributes

In order to analyse decision tables with fuzzy attributes we defined in [12] a fuzzy compatibility relation R. We furthermore introduced the notion of a fuzzy information system S with the following formal description:

S = (X, Q, V, f) ,    (39)

where:
X – a nonempty set, called the universe,
Q – a finite set of attributes,
V – a set of fuzzy values of attributes, V = ⋃_{q∈Q} Vq, where Vq is the fuzzy domain of the attribute q; each fuzzy (linguistic) value in Vq is given by a membership function defined on the original domain Uq of the attribute q,
f – an information function, f: X × Q → V, f(x, q) ∈ Vq, ∀q ∈ Q and ∀x ∈ X.

A compatibility relation R, for comparing any elements x, y ∈ X with fuzzy values of attributes, is defined as follows [12]:

µR(x, y) = min_{q∈Q} sup_{u∈Uq} min(µ_{Vq(x)}(u), µ_{Vq(y)}(u)) ,    (40)

where Vq(x), Vq(y) are the fuzzy values of the attribute q for x and y respectively. The relation given by (40) is reflexive and symmetric (a tolerance relation). If the intersection of any two different fuzzy values of each attribute equals an empty fuzzy set, then the relation (40) is additionally transitive (a fuzzy similarity relation). In such a case the decision table can be analysed using the original measures of rough set theory. For crisp attributes the relation (40) is an equivalence relation. Another form of fuzzy decision tables was considered by Bodjanova [1]. In that approach the attributes represented degrees of membership in fuzzy condition and fuzzy decision concepts.

An important measure, often used for evaluating the consistency of decision tables, is the approximation quality, which was originally defined for a given family of crisp sets Y = {Y1, Y2, . . . , Yn} and a crisp indiscernibility relation R:

γR(Y) = card(PosR(Y)) / card(X) ,    (41)

PosR(Y) = ⋃_{Yi∈Y} R̲Yi .    (42)

We modified the measure of approximation quality in order to deal with fuzzy sets and fuzzy relations [12]. For a family Φ = {F1, F2, . . . , Fn} of fuzzy sets and a fuzzy compatibility relation R the approximation quality of Φ by R is defined as follows:

γR(Φ) = power(PosR(Φ)) / card(X) ,    (43)

PosR(Φ) = ⋃_{Fi∈Φ} ω(R̲Fi) .    (44)

The equation (43) is a generalised definition of the approximation quality (the mapping ω is explained in Section 3). If the family Φ and the relation R are crisp, then the generalised approximation quality (43) is equivalent to (41). In the next section we will need the measure (43) for evaluating the quality of approximation of the compatibility classes obtained with respect to the fuzzy decision attributes by the compatibility classes obtained with respect to the fuzzy condition attributes. Because the positive area of classification (44) in the VPFRS model is obtained by allowing some inclusion error (1 − u), we use a measure called the u-lower approximation quality.

7 Examples

In the following example we apply the proposed concept of variable precision fuzzy rough approximations to the analysis of a decision table with fuzzy attributes (Table 1). We use the compatibility relation (40) for comparing elements of the universe.

Table 1. Decision table with fuzzy attributes

 x     c1   c2   c3   d
 x1    A1   B1   C1   D1
 x2    A2   B2   C2   D2
 x3    A1   B2   C2   D2
 x4    A1   B1   C1   D1
 x5    A1   B1   C1   D3
 x6    A2   B2   C2   D2
 x7    A1   B2   C1   D3
 x8    A1   B1   C1   D1
 x9    A1   B2   C1   D3
 x10   A1   B1   C1   D1

For all attributes typical triangular fuzzy membership functions were chosen. The intersection levels of different linguistic values of the attributes are assumed as follows: for A1 and A2: 0.3, for B1 and B2: 0.2, for C1 and C2: 0.25, for D1 and D2: 0.2, for D2 and D3: 0.2, otherwise: 0.

We obtain the following family Φ = {F1, F2, F3} of compatibility classes with respect to the fuzzy decision attribute d:


F1 = { 1.00/x1, 0.20/x2, 0.20/x3, 1.00/x4, 0.00/x5, 0.20/x6, 0.00/x7, 1.00/x8, 0.00/x9, 1.00/x10 },
F2 = { 0.20/x1, 1.00/x2, 1.00/x3, 0.20/x4, 0.20/x5, 1.00/x6, 0.20/x7, 0.20/x8, 0.20/x9, 0.20/x10 },
F3 = { 0.00/x1, 0.20/x2, 0.20/x3, 0.00/x4, 1.00/x5, 0.20/x6, 1.00/x7, 0.00/x8, 1.00/x9, 0.00/x10 },

and the following family Ψ = {X1, X2, X3, X4} of compatibility classes with respect to the fuzzy condition attributes c1, c2, c3:

X1 = { 1.00/x1, 0.20/x2, 0.20/x3, 1.00/x4, 1.00/x5, 0.20/x6, 0.20/x7, 1.00/x8, 0.20/x9, 1.00/x10 },
X2 = { 0.20/x1, 1.00/x2, 0.30/x3, 0.20/x4, 0.20/x5, 1.00/x6, 0.25/x7, 0.20/x8, 0.25/x9, 0.20/x10 },
X3 = { 0.20/x1, 0.30/x2, 1.00/x3, 0.20/x4, 0.20/x5, 0.30/x6, 0.25/x7, 0.20/x8, 0.25/x9, 0.20/x10 },
X4 = { 0.20/x1, 0.25/x2, 0.25/x3, 0.20/x4, 0.20/x5, 0.25/x6, 1.00/x7, 0.20/x8, 1.00/x9, 0.20/x10 }.
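The compatibility classes above can be computed directly from Table 1 and the listed intersection levels. The sketch below illustrates equation (40); encoding the sup-min of two linguistic values simply as 1 for identical values and as the listed intersection level otherwise is an assumption made to keep the example short (the paper itself works with the triangular membership functions).

```python
# Compatibility relation (40) for the decision table of Table 1.

ROWS = {  # condition attributes c1, c2, c3
    "x1": ("A1", "B1", "C1"), "x2": ("A2", "B2", "C2"), "x3": ("A1", "B2", "C2"),
    "x4": ("A1", "B1", "C1"), "x5": ("A1", "B1", "C1"), "x6": ("A2", "B2", "C2"),
    "x7": ("A1", "B2", "C1"), "x8": ("A1", "B1", "C1"), "x9": ("A1", "B2", "C1"),
    "x10": ("A1", "B1", "C1"),
}
LEVELS = {frozenset({"A1", "A2"}): 0.3,
          frozenset({"B1", "B2"}): 0.2,
          frozenset({"C1", "C2"}): 0.25}

def value_similarity(v, w):
    """sup_u min(mu_V(x)(u), mu_V(y)(u)) for two linguistic values."""
    return 1.0 if v == w else LEVELS.get(frozenset({v, w}), 0.0)

def mu_R(x, y):
    """Compatibility degree: minimum over the attributes, equation (40)."""
    return min(value_similarity(v, w) for v, w in zip(ROWS[x], ROWS[y]))

if __name__ == "__main__":
    # Compatibility class generated by x2; should match X2 in the family Ψ above.
    print({x: mu_R("x2", x) for x in ROWS})
```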

Table 2. Membership functions of X1, F1, X1^F1

 x     µX1(x)   µF1(x)   µX1^F1(x) (G)   µX1^F1(x) (Ł)
 x1    1.00     1.00     1.00            1.00
 x2    0.20     0.20     1.00            1.00
 x3    0.20     0.20     1.00            1.00
 x4    1.00     1.00     1.00            1.00
 x5    1.00     0.00     0.00            0.00
 x6    0.20     0.20     1.00            1.00
 x7    0.20     0.00     0.00            0.80
 x8    1.00     1.00     1.00            1.00
 x9    0.20     0.00     0.00            0.80
 x10   1.00     1.00     1.00            1.00

Table 2 contains the membership functions of the fuzzy inclusion sets X1^F1 obtained for the Gaines and the Łukasiewicz implicator, respectively. We can observe for x7 and x9 a large difference between the values of the membership functions µX1^F1(x) obtained for the Gaines and the Łukasiewicz implicator. If µF1(x) = 0, the Gaines implicator (x → y = 1 if x ≤ y and y/x otherwise) always produces 0. The Łukasiewicz implicator (x → y = min(1, 1 − x + y)) is more suitable in that case because its value is proportional to the difference x − y when x > y. It will be easier, at a later stage, to obtain the largest possible α-cut of the fuzzy inclusion set for a given value of the admissible inclusion error, if we apply the Łukasiewicz implicator.


Table 3. u-lower approximation of F1

              u      G-inf   G-mean   Ł-inf   Ł-mean
 µRuF1(X1)    1      0.00    0.00     0.00    0.00
              0.8    0.00    0.00     0.80    0.96
              0.75   1.00    1.00     1.00    1.00
 µRuF1(X2)    1      0.00    0.00     0.20    0.76
              0.8    0.20    0.72     0.20    0.76
              0.75   0.20    0.72     0.20    0.76
 µRuF1(X3)    1      0.00    0.00     0.20    0.83
              0.8    0.00    0.00     0.20    0.83
              0.75   0.20    0.79     0.20    0.83
 µRuF1(X4)    1      0.00    0.00     0.00    0.00
              0.8    0.00    0.00     0.00    0.00
              0.75   0.00    0.00     0.00    0.00

The results for the u-lower approximation of F1 by the family Ψ are given in Table 3. Let us analyse, for example, the case where the upper limit u = 0.80. The admissible inclusion error is equal to 1 − u = 0.20. We see that the membership degrees µRuF1(X1) for the Gaines implicator are equal to 0, whereas for the Łukasiewicz implicator we obtain µRuF1(X1) = 0.80 for the infimum and µRuF1(X1) = 0.96 for the mean u-lower approximation. Only by using a larger value of the admissible inclusion error, 1 − u = 0.25, do we obtain better results for the Gaines implicator: µRuF1(X1) = 1 for both the infimum and the mean u-lower approximation. The results for the u-lower approximation of F2 and F3 are given in Tables 4 and 5. The u-lower approximations of the whole family Φ are given in Tables 6, 7 and 8. The obtained differences between the Gaines and the Łukasiewicz implicator have a significant influence on the approximation quality for the considered fuzzy information system (see Table 9), especially for the infimum-based u-lower approximation. We obtain smaller differences between the Gaines and the Łukasiewicz implicator in the case of the mean u-lower approximation. The results given in Table 9 validate the necessity and usefulness of the introduced VPFRS model. Allowing some level of misclassification leads to a significant increase of the u-approximation quality (an important measure used in the analysis of information systems). The mean-based VPFRS model produces higher values of the u-approximation quality than the limit-based VPFRS model. It must be emphasised here that the strength of the variable precision rough set model can be observed especially for large universes. We had to choose larger values of the admissible inclusion error in the above example, in order to show

Table 4. u-lower approximation of F2

              u      G-inf   G-mean   Ł-inf   Ł-mean
 µRuF2(X1)    1      0.20    0.60     0.20    0.60
              0.75   0.20    0.60     0.20    0.60
 µRuF2(X2)    1      0.80    0.96     0.95    0.99
              0.85   1.00    1.00     1.00    1.00
 µRuF2(X3)    1      0.80    0.96     0.95    0.99
              0.8    1.00    1.00     1.00    1.00
 µRuF2(X4)    1      0.20    0.84     0.20    0.84
              0.75   0.20    0.84     0.20    0.84

Table 5. u-lower approximation of F3

              u      G-inf   G-mean   Ł-inf   Ł-mean
 µRuF3(X1)    1      0.00    0.00     0.00    0.00
              0.75   0.00    0.00     0.00    0.00
 µRuF3(X2)    1      0.00    0.00     0.20    0.75
              0.75   0.20    0.68     0.20    0.75
 µRuF3(X3)    1      0.00    0.00     0.20    0.82
              0.75   0.00    0.00     0.20    0.82
 µRuF3(X4)    1      0.00    0.00     0.80    0.91
              0.75   0.80    0.90     0.95    0.98

Table 6. u-lower approximation of Φ for u = 1

 G-inf:  { 0.20/X1, 0.80/X2, 0.80/X3, 0.20/X4 }
 G-mean: { 0.60/X1, 0.96/X2, 0.96/X3, 0.84/X4 }
 Ł-inf:  { 0.20/X1, 0.95/X2, 0.95/X3, 0.80/X4 }
 Ł-mean: { 0.60/X1, 0.99/X2, 0.99/X3, 0.91/X4 }

Table 7. u-lower approximation of Φ for u = 0.8

 G-inf:  { 0.20/X1, 1.00/X2, 1.00/X3, 0.20/X4 }
 G-mean: { 0.60/X1, 1.00/X2, 1.00/X3, 0.84/X4 }
 Ł-inf:  { 0.80/X1, 1.00/X2, 1.00/X3, 0.80/X4 }
 Ł-mean: { 0.96/X1, 1.00/X2, 1.00/X3, 0.91/X4 }


Table 8. u-lower approximation of Φ for u = 0.75

 G-inf:  { 1.00/X1, 1.00/X2, 1.00/X3, 0.80/X4 }
 G-mean: { 1.00/X1, 1.00/X2, 1.00/X3, 0.90/X4 }
 Ł-inf:  { 1.00/X1, 1.00/X2, 1.00/X3, 0.95/X4 }
 Ł-mean: { 1.00/X1, 1.00/X2, 1.00/X3, 0.98/X4 }

Table 9. u-approximation quality of Φ

 γR(Φ)
 u      G-inf   G-mean   Ł-inf   Ł-mean
 1      0.380   0.756    0.553   0.779
 0.8    0.440   0.768    0.860   0.962
 0.75   0.960   0.980    0.990   0.996

the properties of the proposed approach. Nevertheless, the admissible inclusion error of about 0.2 turned out to be reasonable for analysing large universes obtained from dynamic processes [10, 11, 13].
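The u-approximation quality in Table 9 follows from the values in Tables 6-8 via the extension ω of (13) and definition (43). The sketch below illustrates this for the Ł-mean column with u = 1; the class assignment of the objects is the one implied by the family Ψ listed earlier, and Table 6 is used here in the role of the positive region on X/R.

```python
# u-approximation quality (43): extend the membership from X/R to X via (13)
# and average over the universe.

CLASS_OF = {"x1": "X1", "x4": "X1", "x5": "X1", "x8": "X1", "x10": "X1",
            "x2": "X2", "x6": "X2", "x3": "X3", "x7": "X4", "x9": "X4"}

def approximation_quality(lower_on_classes, class_of):
    """gamma_R(Phi) = power(Pos_R(Phi)) / card(X)."""
    pos_membership = {x: lower_on_classes[c] for x, c in class_of.items()}  # omega-extension (13)
    return sum(pos_membership.values()) / len(class_of)

if __name__ == "__main__":
    # Ł-mean u-lower approximation of Phi for u = 1 (Table 6)
    lower = {"X1": 0.60, "X2": 0.99, "X3": 0.99, "X4": 0.91}
    print(round(approximation_quality(lower, CLASS_OF), 3))  # 0.779, cf. Table 9
```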

8 Conclusions

In this paper a concept of the variable precision fuzzy rough sets (VPFRS) model was proposed. The VPRS model with asymmetric bounds (l, u) was used. The starting point of the VPFRS idea was the introduction of the notion of the fuzzy inclusion set, which should be based on R-implicators. A generalised notion of the α-inclusion error was defined, expressed by means of α-cuts of the fuzzy inclusion set. The idea of mean fuzzy rough approximations was proposed, which helps to obtain results that better correspond to the statistical properties of the analysed large information systems. We suggest using it particularly for small values of the admissible inclusion error. Furthermore, it turns out that the Łukasiewicz R-implicator is a good choice for the determination of fuzzy rough approximations. The presented generalised approach to VPFRS can be helpful especially in the case of analysing fuzzy information systems obtained from real (dynamic) processes. In future work we will concentrate on axiomatisation and further development of the proposed VPFRS model.

References

1. Bodjanova, S.: Approximation of Fuzzy Concepts in Decision Making. Fuzzy Sets and Systems, Vol. 85 (1997)
2. Chakrabarty, K., Biswas, R., Nanda, S.: Fuzziness in Rough Sets. Fuzzy Sets and Systems, Vol. 110 (2000)
3. Dubois, D., Prade, H.: Putting Rough Sets and Fuzzy Sets Together. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets. Kluwer Academic Publishers, Boston Dordrecht London (1992)


4. Greco, S., Matarazzo, B., Słowiński, R.: The Use of Rough Sets and Fuzzy Sets in MCDM. In: Gal, T., Stewart, T., Hanne, T. (eds.): Advances in Multiple Criteria Decision Making. Kluwer Academic Publishers, Boston Dordrecht London (1999)
5. Greco, S., Matarazzo, B., Słowiński, R.: Rough Set Processing of Vague Information Using Fuzzy Similarity Relations. In: Calude, C.S., Paun, G. (eds.): Finite Versus Infinite – Contributions to an Eternal Dilemma. Springer-Verlag, Berlin Heidelberg New York (2000)
6. Inuiguchi, M., Tanino, T.: New Fuzzy Rough Sets Based on Certainty Qualification. In: Pal, S.K., Polkowski, L., Skowron, A. (eds.): Rough-Neuro-Computing: Techniques for Computing with Words. Springer-Verlag, Berlin Heidelberg New York (2002)
7. Katzberg, J.D., Ziarko, W.: Variable Precision Extension of Rough Sets. Fundamenta Informaticae, Vol. 27 (1996)
8. Klir, J., Folger, T.A.: Fuzzy Sets, Uncertainty, and Information. Prentice Hall, Englewood, New Jersey (1988)
9. Lin, T.Y.: Topological and Fuzzy Rough Sets. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets. Kluwer Academic Publishers, Boston Dordrecht London (1992)
10. Mieszkowicz-Rolka, A., Rolka, L.: Variable Precision Rough Sets in Analysis of Inconsistent Decision Tables. In: Rutkowski, L., Kacprzyk, J. (eds.): Advances in Soft Computing. Physica-Verlag, Heidelberg (2003)
11. Mieszkowicz-Rolka, A., Rolka, L.: Variable Precision Rough Sets. Evaluation of Human Operator's Decision Model. In: Soldek, J., Drobiazgiewicz, L. (eds.): Artificial Intelligence and Security in Computing Systems. Kluwer Academic Publishers, Boston Dordrecht London (2003)
12. Mieszkowicz-Rolka, A., Rolka, L.: Fuzziness in Information Systems. Electronic Notes in Theoretical Computer Science, Vol. 82, Issue No. 4. http://www.elsevier.nl/locate/entcs/volume82.html
13. Mieszkowicz-Rolka, A., Rolka, L.: Studying System Properties with Rough Sets. Lecture Notes in Computer Science, Vol. 2657. Springer-Verlag, Berlin Heidelberg New York (2003)
14. Nakamura, A.: Application of Fuzzy-Rough Classifications to Logics. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets. Kluwer Academic Publishers, Boston Dordrecht London (1992)
15. Pawlak, Z.: Rough Sets. Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Boston Dordrecht London (1991)
16. Polkowski, L.: Rough Sets: Mathematical Foundations. Physica-Verlag, Heidelberg (2002)
17. Radzikowska, A.M., Kerre, E.E.: A Comparative Study of Fuzzy Rough Sets. Fuzzy Sets and Systems, Vol. 126 (2002)
18. Ziarko, W.: Variable Precision Rough Sets Model. Journal of Computer and System Sciences, Vol. 40 (1993)

Greedy Algorithm of Decision Tree Construction for Real Data Tables

Mikhail Ju. Moshkov

1 Faculty of Computing Mathematics and Cybernetics, Nizhny Novgorod State University, 23 Gagarina Ave., Nizhny Novgorod, 603950, Russia
[email protected]
2 Institute of Computer Science, University of Silesia, 39 Będzińska St., Sosnowiec, 41-200, Poland

Abstract. In the paper a greedy algorithm for minimization of decision tree depth is described and bounds on the algorithm's precision are considered. This algorithm is applicable to data tables with both discrete and continuous variables which can have missing values. Under some natural assumptions on the class NP and on the class of considered tables, the algorithm is, apparently, close to the best approximate polynomial algorithms for minimization of decision tree depth.

Keywords: data table, decision table, decision tree, depth

1 Introduction

Decision trees are widely used in different applications as algorithms for task solving and as a way of knowledge representation. Problems of decision tree optimization are very complicated. In this paper we consider an approximate algorithm for decision tree depth minimization which can be applied to real data tables with both discrete and continuous variables having missing values. First, we transform a given data table into a decision table, possibly with many-valued decisions (i.e. we pass to the model which is usual for rough set theory [7, 8]). Then we apply to this table a greedy algorithm which is similar to algorithms for decision tables with one-valued decisions [3], but uses a more complicated uncertainty measure. We obtain bounds on the precision of this algorithm and, based on results from [2], show that under some natural assumptions on the class NP and on the class of considered tables, the algorithm is, apparently, close to the best approximate polynomial algorithms for minimization of decision tree depth. Note that [6] contains some similar results without proofs. The results of the paper were obtained partially in the framework of a joint research project of the Intel Nizhny Novgorod Laboratory and Nizhny Novgorod State University.

2 Data Tables and Attributes

A data table D is a rectangular table with t columns which correspond to variables x1, . . . , xt. The rows of D are t-tuples of values of the variables x1, . . . , xt. Values of some variables in some rows can be missing. The table D can contain equal rows. The variables are divided into discrete and continuous ones. A discrete variable xi takes values from an unordered finite set Ai. A continuous variable xj takes values from the set IR of real numbers. Each row r of the table D is labelled by an element y(r) from a finite set C. One can interpret these elements as values of a new variable y. The problem connected with the table D is to predict the value of y using the variables x1, . . . , xt. To this end we will not use the values of x1, . . . , xt directly. We will use values of some attributes depending on variables from the set {x1, . . . , xt}. An attribute is a function f depending on variables xi1, . . . , xim ∈ {x1, . . . , xt} and taking values from the set E = {0, 1, ∗}. Let r be a row of D. If the values of all variables xi1, . . . , xim are defined in r, then for this row the value of f(xi1, . . . , xim) belongs to the set {0, 1}. If the value of at least one of the variables xi1, . . . , xim is missing in r, then for this row the value of f(xi1, . . . , xim) is equal to ∗. Consider some examples of attributes. In the CART system [1] mainly attributes each of which depends on one variable xi are considered. Let xi be a continuous variable, and a be a real number. Then the considered attribute takes value 0 if xi < a, value 1 if xi ≥ a, and value ∗ if the value of xi is missing. Let xi be a discrete variable which takes values from the set Ai, and let B be a subset of Ai. Then the considered attribute takes value 0 if xi ∉ B, value 1 if xi ∈ B, and value ∗ if the value of xi is missing. It is possible to consider attributes depending on many variables. For example, let ϕ be a polynomial depending on continuous variables xi1, . . . , xim. Then the considered attribute takes value 0 if ϕ(xi1, . . . , xim) < 0, value 1 if ϕ(xi1, . . . , xim) ≥ 0, and value ∗ if the value of at least one of the variables xi1, . . . , xim is missing.

It is clear that only answers from the set C(Sj ) minimize the number of mistakes for rows from the class Sj . For any r ∈ Sj denote C(r) = C(Sj ). Now we can formulate exactly the problem Pred(D, F ) of prediction of the variable y value: for a given row r of the data table D we must ﬁnd a number from the set C(r) using values of attributes from F .


Note that in [5] another setting of the problem of prediction was considered: for a given row r of the data table D we must find the set {y(r′) : r′ ∈ Sj}, where Sj is the equivalence class containing r.

3 Decision Trees

As algorithms for solving the problem Pred(D, F) we will consider decision trees with attributes from the set F. Such a decision tree is a finite directed rooted tree in which each terminal node is labelled either by an element from the set C or by nothing, and each non-terminal node is labelled by an attribute from the set F. Three edges start in each non-terminal node. These edges are labelled by 0, 1 and ∗ respectively. The functioning of a decision tree Γ on a row of the data table D is defined in the natural way. We will say that the decision tree Γ solves the problem Pred(D, F) if for any row r of D the computation finishes in a terminal node of Γ which is labelled by an element of the set C(r). The depth of a decision tree is the maximal length of a path from the root to a terminal node of the tree. We denote by h(Γ) the depth of a decision tree Γ. By h(D, F) we denote the minimal depth of a decision tree with attributes from F which solves the problem Pred(D, F).

4 Decision Tables with Many-Valued Decisions

We will assume that the information about the problem Pred(D, F) is represented in the form of a decision table T = T(D, F). The table T has k columns corresponding to the attributes f1, . . . , fk and q rows corresponding to the equivalence classes S1, . . . , Sq. The value fj(ri) is on the intersection of the row Si and the column fj, where ri is an arbitrary row from the equivalence class Si. For i = 1, . . . , q the row Si of the table T is labelled by the subset C(Si) of the set C. We will consider sub-tables of the table T which can be obtained from T by removal of some rows. Let T′ be a sub-table of T. Denote by Row(T′) the set of rows of the table T′. The table T′ will be called degenerate if Row(T′) = ∅ or ⋂_{Si∈Row(T′)} C(Si) ≠ ∅.

Let i1, . . . , im ∈ {1, . . . , k} and δ1, . . . , δm ∈ E = {0, 1, ∗}. We denote by T(i1, δ1) . . . (im, δm) the sub-table of the table T that consists of the rows each of which has on the intersections with the columns fi1, . . . , fim the elements δ1, . . . , δm respectively. We define the parameter M(T) of the table T as follows. If T is a degenerate table, then M(T) = 0. Let T be a non-degenerate table. Then M(T) is the minimal natural m such that for any (δ1, . . . , δk) ∈ E^k there exist numbers i1, . . . , in ∈ {1, . . . , k} for which T(i1, δi1) . . . (in, δin) is a degenerate table and n ≤ m.

A nonempty subset B of the set Row(T) will be called a boundary set if ⋂_{Si∈B} C(Si) = ∅ and ⋂_{Si∈B′} C(Si) ≠ ∅ for any nonempty subset B′ of the set B such that B′ ≠ B. We denote by R(T) the number of boundary subsets of the set Row(T). It is clear that R(T) = 0 if and only if T is a degenerate table.

5 Algorithm U for Decision Tree Construction

For the decision table T = T(D, F) we construct a decision tree U(T) which solves the problem Pred(D, F). We begin the construction from the tree that consists of one node v which is not labelled. If T has no rows, then we finish the construction. Let T have rows and ⋂_{Si∈Row(T)} C(Si) ≠ ∅. Then we mark the node v by an element from the set ⋂_{Si∈Row(T)} C(Si) and finish the construction. Let T have rows and ⋂_{Si∈Row(T)} C(Si) = ∅. For i = 1, . . . , k we compute the value Qi = max{R(T(i, δ)) : δ ∈ E}. We mark the node v by the attribute fi0, where i0 is the minimal i for which Qi has the minimal value. For each δ ∈ E we add to the tree the node v(δ), draw the edge from v to v(δ), and mark this edge by the element δ. For the node v(δ) we perform the same operations as for the node v, but instead of the table T we consider the table T(i0, δ), etc. A compact sketch of this procedure is given below.
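The following sketch implements the greedy procedure together with a brute-force computation of R(T). A decision table is encoded as a list of (attribute values, decision set) pairs with values from E = {0, 1, '*'}; this encoding, the tree representation, and the exhaustive enumeration of row subsets (feasible only for very small tables) are assumptions of the sketch, not the implementation used by the author.

```python
# Greedy algorithm U with the uncertainty measure R(T).

from itertools import combinations

E = (0, 1, "*")

def intersection(rows):
    sets = [d for _, d in rows]
    return set.intersection(*sets) if sets else set()

def restrict(table, i, delta):        # sub-table T(i, delta)
    return [r for r in table if r[0][i] == delta]

def R(table):
    """Number of boundary subsets of Row(T).  Since every C(S_i) is nonempty,
    boundary subsets contain at least two rows."""
    count = 0
    for size in range(2, len(table) + 1):
        for subset in combinations(table, size):
            if intersection(list(subset)):
                continue
            if all(intersection(list(proper))
                   for k in range(1, size)
                   for proper in combinations(subset, k)):
                count += 1
    return count

def build_tree(table, k):
    """Greedy construction of the decision tree U(T)."""
    if not table:
        return None                               # unlabelled terminal node
    common = intersection(table)
    if common:
        return sorted(common)[0]                  # terminal node with a common decision
    # choose the minimal attribute index minimising Q_i = max_delta R(T(i, delta))
    best = min(range(k), key=lambda i: max(R(restrict(table, i, d)) for d in E))
    return (best, {d: build_tree(restrict(table, best, d), k) for d in E})

if __name__ == "__main__":
    T = [((0, 0), {"a"}), ((0, 1), {"b"}), ((1, "*"), {"a", "b"})]
    print(build_tree(T, k=2))
```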

6 Bounds on Algorithm U Precision

If T is a degenerate table, then the decision tree U(T) consists of one node. The depth of this tree is equal to 0. Consider now the case when T is a non-degenerate table.

Theorem 1. Let the decision table T = T(D, F) be non-degenerate. Then h(U(T)) ≤ M(T) ln R(T) + 1.

Later we will show (see Lemma 3) that M(T) ≤ h(D, F). So we have the following corollary.

Corollary 1. Let the decision table T = T(D, F) be non-degenerate. Then h(U(T)) ≤ h(D, F) ln R(T) + 1.

Let t be a natural number. Denote by Tab(t) the set of decision tables T such that |C(Si)| ≤ t for any row Si ∈ Row(T). Let T ∈ Tab(t). One can show that each boundary subset of the set Row(T) has at most t + 1 rows. Using this fact it is not difficult to show that the algorithm U has polynomial time complexity on the set Tab(t). Using results from [2] on the precision of approximate polynomial algorithms for the set covering problem it is possible to prove that if NP ⊄ DTIME(n^{O(log log n)}), then for any ε, 0 < ε < 1, there is no polynomial algorithm which for a given decision table T = T(D, F) from Tab(t) constructs a decision tree Γ such that Γ solves the problem Pred(D, F) and h(Γ) ≤ (1 − ε) h(D, F) ln R(T). We omit the proof of this statement. A proof of a similar result can be found in [4]. Using Corollary 1 we conclude that if NP ⊄ DTIME(n^{O(log log n)}), then the algorithm U is, apparently, close to the best (from the point of view of precision) approximate polynomial algorithms for minimization of decision tree depth for decision tables from Tab(t) (at least for small values of t).

7 Proof of Precision Bounds

Lemma 1. Let Γ be a decision tree which solves the problem Pred(D, F), T = T(D, F), and τ be a path of length n from the root to a terminal node of Γ, in which the non-terminal nodes are labelled by attributes fi1, ..., fin and the edges are labelled by elements δ1, ..., δn. Then T(i1, δ1) . . . (in, δn) is a degenerate table.

Proof. Assume the contrary: let the table T′ = T(i1, δ1) . . . (in, δn) be non-degenerate. Let the terminal node v of the path τ be labelled by an element c ∈ C. Since T′ is a non-degenerate table, it has a row (equivalence class) Si such that c ∉ C(Si). Evidently, c ∉ C(r) for any row r ∈ Si. It is clear that for any row r ∈ Si the computation in the tree Γ moves along the path τ and finishes in the node v, which is impossible since Γ is a decision tree solving the problem Pred(D, F) and c ∉ C(r). Therefore T(i1, δ1) . . . (in, δn) is a degenerate table.

Lemma 2. Let T = T(D, F) and T1 be a sub-table of T. Then M(T1) ≤ M(T).

Proof. Let i1, . . . , in ∈ {1, . . . , k} and δ1, . . . , δn ∈ E. If T(i1, δ1) . . . (in, δn) is a degenerate table, then T1(i1, δ1) . . . (in, δn) is a degenerate table too. From this and from the definition of the parameter M the statement of the lemma follows.

Lemma 3. Let T = T(D, F). Then h(D, F) ≥ M(T).

Proof. Let T be a degenerate table. Then, evidently, M(T) = 0 and h(D, F) = 0. Let T be a non-degenerate table, and Γ be a decision tree which solves the problem Pred(D, F) and for which h(Γ) = h(D, F). Consider a tuple (δ1, . . . , δk) ∈ E^k which satisfies the following condition: if i1, . . . , in ∈ {1, . . . , k} and T(i1, δi1) . . . (in, δin) is a degenerate table, then n ≥ M(T). The existence of such a tuple follows from the definition of the parameter M(T). Consider a path τ from the root to a terminal node of Γ which satisfies the following conditions. Let the length of τ be equal to m and the non-terminal nodes of τ be labelled by attributes fi1, . . . , fim. Then the edges of τ are labelled by elements δi1, . . . , δim respectively. From Lemma 1 it follows that T(i1, δi1) . . . (im, δim) is a degenerate table. Therefore m ≥ M(T), and h(Γ) ≥ M(T). Since h(D, F) = h(Γ), we obtain h(D, F) ≥ M(T).

Lemma 4. Let T = T(D, F), T1 be a sub-table of T, i, i1, . . . , im ∈ {1, . . . , k} and δ, δ1, . . . , δm ∈ E. Then

R(T1) − R(T1(i, δ)) ≥ R(T1(i1, δ1) . . . (im, δm)) − R(T1(i1, δ1) . . . (im, δm)(i, δ)) .

Proof. Let T2 = T1(i1, δ1) . . . (im, δm). We denote by P1 (respectively by P2) the set of boundary sets of rows from T1 (respectively from T2) in each of which at least one row has in the column fi an element which is not equal to δ. One can show that P2 ⊆ P1, |P1| = R(T1) − R(T1(i, δ)) and |P2| = R(T2) − R(T2(i, δ)).


Proof (of Theorem 1). Consider a longest path in the tree U(T) from the root to a terminal node. Let its length be equal to n, its non-terminal nodes be labelled by attributes fl1, . . . , fln, and its edges be labelled by elements δ1, . . . , δn. Consider the tables T1, . . . , Tn+1, where T1 = T and Tp+1 = Tp(lp, δp) for p = 1, . . . , n. Let us prove that for any p ∈ {1, . . . , n} the following inequality holds:

R(Tp+1) ≤ ((M(Tp) − 1) / M(Tp)) R(Tp) .    (1)

From the description of the algorithm U it follows that Tp is a non-degenerate table. Therefore M(Tp) > 0. For i = 1, . . . , k we denote by σi an element from E such that R(Tp(i, σi)) = max{R(Tp(i, σ)) : σ ∈ E}. From the description of the algorithm U it follows that lp is the minimal number from {1, . . . , k} such that

R(Tp(lp, σlp)) = min{R(Tp(i, σi)) : i = 1, . . . , k} .

Consider the tuple (σ1, . . . , σk). From the definition of M(Tp) it follows that there exist numbers i1, . . . , im ∈ {1, . . . , k} for which m ≤ M(Tp) and Tp(i1, σi1) . . . (im, σim) is a degenerate table. Therefore R(Tp(i1, σi1) . . . (im, σim)) = 0. Hence

R(Tp) − [R(Tp) − R(Tp(i1, σi1))] − [R(Tp(i1, σi1)) − R(Tp(i1, σi1)(i2, σi2))] − . . . − [R(Tp(i1, σi1) . . . (im−1, σim−1)) − R(Tp(i1, σi1) . . . (im, σim))] = R(Tp(i1, σi1) . . . (im, σim)) = 0 .

Using Lemma 4 we conclude that for j = 1, . . . , m the inequality

R(Tp(i1, σi1) . . . (ij−1, σij−1)) − R(Tp(i1, σi1) . . . (ij, σij)) ≤ R(Tp) − R(Tp(ij, σij))

holds. Therefore R(Tp) − Σ_{j=1}^{m} (R(Tp) − R(Tp(ij, σij))) ≤ 0 and

Σ_{j=1}^{m} R(Tp(ij, σij)) ≤ (m − 1) R(Tp) .

Let s ∈ {1, . . . , m} and R(Tp(is, σis)) = min{R(Tp(ij, σij)) : j = 1, . . . , m}. Then m R(Tp(is, σis)) ≤ (m − 1) R(Tp) and R(Tp(is, σis)) ≤ ((m − 1)/m) R(Tp). Taking into account that R(Tp(lp, σlp)) ≤ R(Tp(is, σis)) and m ≤ M(Tp), we obtain

R(Tp(lp, σlp)) ≤ ((M(Tp) − 1) / M(Tp)) R(Tp) .    (2)

From the inequality R(Tp(lp, δp)) ≤ R(Tp(lp, σlp)) and from (2) it follows that the inequality (1) holds. From the inequality (2) in the case p = 1 and from the description of the algorithm U it follows that if M(T) = 1 then h(U(T)) = 1, and the statement of the theorem holds. Let M(T) ≥ 2. From (1) it follows that

R(Tn) ≤ R(T1) · ((M(T1) − 1)/M(T1)) · ((M(T2) − 1)/M(T2)) · . . . · ((M(Tn−1) − 1)/M(Tn−1)) .    (3)


From the description of the algorithm U it follows that Tn is a non-degenerate table. Consequently,

R(Tn) ≥ 1 .    (4)

From Lemma 2 it follows that for p = 1, ..., n − 1 the inequality

M(Tp) ≤ M(T)    (5)

holds. From (3)-(5) it follows that

1 ≤ R(T) ((M(T) − 1)/M(T))^{n−1} .

Therefore

(1 + 1/(M(T) − 1))^{n−1} ≤ R(T) .

If we take the natural logarithm of both sides of this inequality we conclude that (n − 1) ln(1 + 1/(M(T) − 1)) ≤ ln R(T). It is known that for any natural m the inequality ln(1 + 1/m) > 1/(m + 1) holds. Taking into account that M(T) ≥ 2, we obtain the inequality (n − 1)/M(T) < ln R(T). Hence n < M(T) ln R(T) + 1. Taking into account that h(U(T)) = n we obtain h(U(T)) < M(T) ln R(T) + 1.

8 Conclusion

A greedy algorithm for minimization of decision tree depth has been described. This algorithm is applicable to real data tables, which are transformed into decision tables. The structure of the algorithm is simple enough that it is possible to obtain bounds on its precision. These bounds show that, under some natural assumptions on the class NP and on the class of considered decision tables, the algorithm is apparently close to the best approximate polynomial algorithms for minimization of decision tree depth. The second peculiarity of the algorithm is the way it works with missing values: if we compute the value of an attribute f(xi1, ..., xim), and the value of at least one of the variables xi1, ..., xim is missing, then the computation goes along the special edge labelled by ∗. This peculiarity may be helpful if we view the constructed decision tree as a way of representing knowledge about the data table D.

References

1. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Wadsworth & Brooks (1984)
2. Feige, U.: A threshold of ln n for approximating set cover (preliminary version). Proceedings of the 28th Annual ACM Symposium on the Theory of Computing (1996) 314–318
3. Moshkov, M.Ju.: Conditional tests. Problems of Cybernetics 40, edited by S.V. Yablonskii. Nauka Publishers, Moscow (1983) 131–170 (in Russian)


4. Moshkov, M.Ju.: About works of R.G. Nigmatullin on approximate algorithms for solving of discrete extremal problems. Discrete Analysis and Operations Research (Series 1) 7(1) (2000) 6–17 (in Russian)
5. Moshkov, M.Ju.: Approximate algorithm for minimization of decision tree depth. Proceedings of the Ninth International Conference on Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Chongqing, China. Lecture Notes in Computer Science 2639, Springer-Verlag (2003) 611–614
6. Moshkov, M.Ju.: On minimization of decision tree depth for real data tables. Proceedings of the Workshop on Concurrency, Specification and Programming, Czarna, Poland (2003)
7. Pawlak, Z.: Rough Sets – Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht, Boston, London (1991)
8. Skowron, A., Rauszer, C.: The discernibility matrices and functions in information systems. In: Slowinski, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory. Kluwer Academic Publishers, Dordrecht, Boston, London (1992) 331–362

Consistency Measures for Conflict Profiles

Ngoc Thanh Nguyen and Michal Malowiecki

Department of Information Systems, Wroclaw University of Technology
Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland
{thanh,malowiecki}@pwr.wroc.pl

Abstract. The formal definition of conflict was formulated and analyzed by Pawlak. In Pawlak's works the author presented the concept and structure of conflicts. In this concept a conflict may be represented by an information system (U,A), where U is a set of agents taking part in the conflict and A is a set of attributes representing conflict issues. On the basis of the information system Pawlak also defined various measures describing conflicts, for example the measure of the military potential of the conflict sides. Next the concept was developed by other authors, who defined a multi-valued structure of conflict and proposed using consensus methods for solving conflicts. In this paper the authors present the definition of consistency functions, which enable measuring the degree of consistency of conflict profiles. A conflict profile is defined as a set of opinions of agents referring to the subject of the conflict. Using this degree one can choose the method for solving the conflict, for example a negotiation method or a consensus method. A set of postulates for consistency functions is defined and analyzed. Besides, some concrete consistency functions are formulated and their properties referring to the postulates are included.

1 Introduction

In Pawlak's concept [20] a conflict is defined by an information system (U,A) in which U is the set of agents being in conflict, A is a set of attributes representing conflict issues, and the information table contains the conflict content, i.e. the opinions of the agents on particular issues. Each agent for each issue has three possibilities for presenting his opinion: (+) - yes, (−) - no, and (0) - neutral. For example, Table 1 below represents the content of a conflict [20]. Within a conflict one can determine several conflict profiles. A conflict profile is the set of opinions generated by the agents on an issue. In the conflict represented by Table 1 we have 5 profiles Pa, Pb, Pc, Pd and Pe, where for example Pb = {+,+,−,−,−,+} and Pc = {+,−,−,−,−,−}. Referring to the opinions belonging to these profiles one can observe that the opinions of a certain profile are more similar to each other (that is, more convergent or more consistent) than opinions of some other profile. For example, opinions in profile Pc seem to be more consistent than opinions in profile Pb. Below we present another, more practical example.

Table 1. The content of a conflict.

U   a   b   c   d   e
1   −   +   +   +   +
2   +   +   −   −   −
3   +   −   −   −   0
4   0   −   −   0   −
5   +   −   −   −   −
6   0   +   −   0   +
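As a small editorial illustration (not part of the original paper), the content of Table 1 can be stored as a mapping from issues to opinion vectors, from which the conflict profiles Pa–Pe are read off column-wise. The data below simply repeat Table 1; all names are arbitrary.

```python
# Table 1 as an information system: one opinion vector (agents 1-6) per issue.
conflict = {
    "a": ["-", "+", "+", "0", "+", "0"],
    "b": ["+", "+", "-", "-", "-", "+"],
    "c": ["+", "-", "-", "-", "-", "-"],
    "d": ["+", "-", "-", "0", "-", "0"],
    "e": ["+", "-", "0", "-", "-", "+"],
}

def profile(issue):
    """Conflict profile for an issue: the multiset of the agents' opinions."""
    return list(conflict[issue])

print(profile("b"))   # ['+', '+', '-', '-', '-', '+']  -- profile Pb from the text
print(profile("c"))   # ['+', '-', '-', '-', '-', '-']  -- profile Pc
```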

Inconsistency often occurs in the testimonies of crime witnesses. Witnesses often have different versions of the same event or of the same suspect. In this example we consider an investigator who has gathered the following testimonies from four witnesses describing a suspect:
• Witness A said: It was a very high and black man with the long black hairs;
• Witness B said: It was a high and brown eyes man with the medium long hairs;
• Witness C said: Skin: dark; Hairs: short and dark; Height: medium; Eyes: blue;
• Witness D said: The suspect was a bald and short man, his skin was brown.
As we can notice, the opinions of the witnesses are not identical, thus they are in conflict. With reference to the depositions of the witnesses we can create 5 issues of the conflict: Colour of the skin; Colour of eyes; Colour of hairs; Length of hairs and Height. The following conflict profiles are determined:
P1. Colour of the skin: {A: black, C: dark, D: brown}
P2. Colour of eyes: {B: brown, C: blue}
P3. Colour of hairs: {A: black, C: dark}
P4. Length of hairs: {A: long, B: medium long, C: short, D: bald}
P5. Height: {A: very high, B: high, C: high, D: short}
Let us notice that in each profile the opinions are different, thus the knowledge of the investigator about the suspect is inconsistent. He may not be sure of the proper value in each subject of the description, but the degrees of his uncertainty are not identical for all the conflict issues. It seems that the more consistent the witnesses' opinions are, the smaller is the uncertainty degree. In the profiles presented above one may conclude that the elements of profile P3 are the most consistent, because black and dark colours are very similar, while the elements of profile P4 are the least consistent. In this profile the four witnesses mention all four possible values for the length of hairs, thus the uncertainty degree of the investigator should be large in this case. These examples show that it is necessary to determine a value which would represent the degree (or level) of consistency (or inconsistency) of a conflict profile. This value may be very useful in evaluating whether a conflict is "solvable" or not. In this paper we propose to define this parameter of conflict profiles by means of consistency functions. We show a set of postulates which should be satisfied by these functions. We also define several consistency functions and show which postulates they fulfil. The paper is organized as follows. After the Introduction, Section 2 presents some aspects of knowledge consistency and inconsistency. Section 3 outlines conflict theories which are the base of consistency measures. Section 4 includes an overview of


consensus methods applied for conflict solving. The definition of consistency functions, the postulates and their analysis are given in Section 5. Section 6 presents the definitions of four concrete consistency functions and their analysis with respect to the defined postulates. Section 7 describes several practical aspects of consistency measures, and some conclusions and a description of future work are given in Section 8.

2 Consistency and Inconsistency of Knowledge

The term "consistency" seems to be a well-known and often used notion. This is caused by the intuitive use of this word. Many authors use this term to describe some divergences in various settings. The notion of knowledge consistency appears less often in the knowledge engineering context, but in most cases it is still used intuitively in order to name some divergences in scientific research. Authors usually use the term, but they do not define what it means. Thus the following questions may arise: What does it mean that knowledge is consistent or inconsistent? Is there any way to compare levels (or degrees) of consistency or inconsistency? Is there any way to measure them? Authors often ignore answers to these questions, because all they need is a divalent definition: either all versions of knowledge are identical (that is, the knowledge is consistent), or not. However, there exist situations in which it is necessary to know the knowledge consistency level. One of these situations is related to solving knowledge conflicts in multiagent environments. In this kind of conflict the consistency level (or degree) is very useful because it could help to decide what to do with the agent knowledge states. If these states differ to a small degree (the consistency level is high) then the agents could make a compromise (consensus) for reconciling their knowledge. If they differ to a large degree (that is, the consistency level is low), then it is necessary to gather other knowledge states for more precise reconciling. The need for measures of knowledge consistency has been pointed out earlier in the context of deciding whether looking for consensus as the way to solve knowledge conflicts of agents is rational [14]. Indeed, in multiagent systems, where sources of knowledge acquisition are as various as methods of its acquisition, the inconsistency of knowledge leads to conflicts arising. In 1990 Ng and Abramson [12] asked: Is it possible to perform consistency checking on knowledge bases mechanically? If so, how? They claim that it is very important that a knowledge base be consistent, because inconsistent knowledge bases may provide inconsistent conclusions. Getting back to conflicts, we can notice that a lot of methods of solving conflicts have been worked out [4][14][15][17], but the level of divergence has usually been described in a divalent way: either something was consistent or not. This need has led to introducing some measures [2]. For determining the consensus or the knowledge consistency it is necessary to agree on some formal representation of this knowledge, based on distance functions between knowledge states [14]. A multiagent system is a case of distributed environments. Knowledge possessed by agents usually comes from different sources and the problem of its integration may arise. The knowledge of agents may be not only true or false, but also undefined or inconsistent [8]. Knowledge is undefined if there is a case in which the agents do not have any information (absence of information) referring to some subject, and the


knowledge is inconsistent if the agents have different versions of it. For integrating this kind of knowledge Loyer, Spyratos and Stamate [8] use Belnap's four-valued logic, which uses true, false, undefined and inconsistent values. They use multivalued logic for describing acquired knowledge and its reasoning. However, the knowledge consistency is divalent: if all versions of knowledge are identical then the knowledge is consistent, else it is inconsistent. The notion of consistency is also very important in enforcing consistency in knowledge-based systems. Rule-based consistency enforcement in knowledge-based systems has been presented by Eick and Werstein [5]. These authors deal with enforcing consistency constraints, which are also called semantic integrity constraints. Actually, consistency is one of the most important issues in database systems, but it is considered in another sense and there is no need to measure it. In the literature the term knowledge consistency has been used very often, but we still need an answer to the question about a definition of knowledge consistency. We found some measures by means of which we can estimate it and use it to solve many problems, but we still need a definition which will translate the intuitive approach into a formal one. This definition is provided by Neiger [10]: Knowledge consistency is a property that a knowledge interpretation has with respect to a particular system. Neiger refers to the definition of internal knowledge consistency defined by Halpern and Moses [6]. He formalizes this and other forms of knowledge consistency. After giving the definition, Neiger shows some cases in which knowledge consistency can be applied in distributed systems. In this way he shows how consistent knowledge interpretation can be used to simplify the design of knowledge-based protocols. In another paper [11] Neiger presents how to use knowledge consistency for a useful suspension of disbelief. He considers alternative interpretations of knowledge and explores the notion of consistent interpretation. Neiger shows how it can be used to circumvent known impossibility results in a number of cases. There are of course a lot of applications of knowledge consistency. Authors use this term in every case where there is some kind of divergence, for example between some pieces of knowledge. But there are some situations where we need to know exactly the level of consistency or inconsistency. So we need good measures and tools to estimate these parameters. These tools have been introduced in [9] and in this paper we are going to present some results of their analysis.

3 Outline of Conflict Theories

The simplest conflict takes place when two bodies generate different opinions on the same subject. In works [18][19][20] Pawlak specifies the following elements of a conflict: a set of agents, a set of issues, and a set of opinions of these agents on these issues. The agents and the issues are related with one another in some social or political context. Then we say that a conflict takes place if there are at least two agents whose opinions on some issue differ from each other. Generally, one can distinguish the following 3 components of a conflict:
• Conflict body: specifies the direct participants of the conflict.
• Conflict subject: specifies to whom (or what) the conflict refers and its topic.
• Conflict content: specifies the opinions of the participants on the conflict topic.


In Pawlak's approach the body of the conflict is a set of agents, the conflict subject consists of contentious issues and the conflict content is a collection of tuples representing the participants' opinions. Information system tools [4][21] seem to be very good for representing conflicts. In works [14][15] the authors have defined conflicts in distributed systems in a similar way. However, we have built a system which can include more than one conflict, and within one conflict the values of the attributes representing agents' opinions should describe these opinions more precisely. This aim has been realized by assuming that values of attributes representing conflict contents are not atomic as in Pawlak's approach, but sets of elementary values, where an elementary value is not necessarily an atomic one. Thus we accept the assumption that attributes are multi-valued, similarly as in Pawlak's concept of multi-valued information systems. Besides, the conflict content in our model is partitioned into three groups. The first group includes opinions of type "Yes, the fact should take place", the second includes opinions of type "No, the following fact should not take place", and the last group contains opinions of type "I do not know if the fact takes place". For example, making the forecast of sunshine for tomorrow, a meteorological agent can present its opinion as "(Certainly) it will be sunny between 10 a.m. and 12 a.m. and it will be cloudy between 3 p.m. and 6 p.m.", which means that during the rest of the day the agent does not know if it will be sunny or not. This type of knowledge should be taken into account in the system because the set of all possible states of the real world in which the system is placed is large, and an agent having limited possibilities is not assumed to "know everything". We call the above three kinds of knowledge positive, negative and uncertain, respectively. In Pawlak's approach positive knowledge is represented by the value "+", and negative knowledge by the value "−". A certain difference occurs between the semantics of Pawlak's "neutrality" and the semantics of "uncertainty" of agents presented in the mentioned works. Namely, most often neutrality appears in voting processes and does not mean uncertainty, while uncertainty means that an agent is not competent to present its opinions on some matter. It is worth noting that rough set theory is a very useful tool for conflict analysis. In works [4][21] the authors present an enhancement of the model proposed by Pawlak. Using rough set tools they explain the nature of conflict and define the conflict situation model in such a way that it encapsulates the conflict components. Such an approach also enables choosing consensus as the conflict solution, although it is still assumed that attribute values are atomic. In the next section we present an approach to conflict solving which is based on determining consensus for conflict profiles.

4 The Roles of Consensus Methods in Solving Conflicts

Consensus theory has its roots in choice theory. A choice from some set A of alternatives is based on a relation α called a preference relation. Owing to it, the choice function may be defined as follows:

C(A) = {x∈A : (∀y∈A)((x,y)∈α)}
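Read literally, the choice function picks those alternatives that stand in the preference relation to every alternative in A. The sketch below is only an editorial illustration of this definition; the preference relation is given as a set of pairs, and the data are invented.

```python
def choice(A, alpha):
    """C(A) = {x in A : for all y in A, (x, y) in alpha}."""
    return {x for x in A if all((x, y) in alpha for y in A)}

# Invented example: x is (weakly) preferred to everything, y and z are not.
A = {"x", "y", "z"}
alpha = {("x", "x"), ("x", "y"), ("x", "z"), ("y", "y"), ("y", "z"), ("z", "z")}
print(choice(A, alpha))   # {'x'}
```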


Many works have dealt with the special case where the preference relation is determined on the basis of a linear order on A. The most popular were the Condorcet choice functions. A choice function is called a Condorcet function if:

x∈C(A) ⇔ (∀y∈A)(x∈C({x,y}))

In the consensus-based approaches, however, it is assumed that the chosen alternatives do not have to be included in the set presented for choice, thus C(A) need not be a subset of A. At the beginning of this research the authors dealt only with simple structures of the set A (named the macrostructure), such as linear or partial orders. Later, with the development of computing techniques, the structure of each alternative (named the microstructure) has also been investigated. Most often the authors assume that all the alternatives have the same microstructure [3]. On the basis of the microstructure one can determine a macrostructure of the set A. Among others, the following microstructures have been investigated: linear orders, ordered set partitions, non-ordered set partitions, n-trees, time intervals. The following macrostructures have been considered: linear orders and distance (or similarity) functions. A consensus of the set A is most often determined on the basis of its macrostructure by some optimality rules. If the macrostructure is a distance (or similarity) function then Kemeny's median [1] is very often used to choose the consensus. According to Kemeny's rule the consensus should be nearest to the elements of the set A. Now, we try to analyze what the roles of consensus in conflict resolution in distributed environments are. Before the analysis we should consider what is represented by the conflict content (i.e. the opinions generated by the conflict participants). We may notice that the opinions included in the conflict content represent an unknown solution of some problem. The following two cases may take place [16]:
1. The solution is independent of the opinions of the conflict participants. As an example of this kind of conflict we can consider different forecasts generated by different meteorological stations referring to the same region for a period of time. The problem then relies on determining the proper scenario of weather, which is unambiguous and really known only when the time comes, and is independent of the given forecasts. A conflict in which the solution is independent of the opinions of the conflict participants is called an independent conflict. In independent conflicts the independence means that the solution of the problem exists but it is not known to the conflict participants. The reasons for this phenomenon may follow from many aspects, among others from the ignorance of the conflict participants or the random characteristics of the solution, which may make the solution impossible to calculate in a deterministic way. Thus the content of the solution is independent of the conflict content and the conflict participants have to "guess" it. In this case their solutions have to reflect the proper solution, but it is not known whether in a valid and complete way. In this case the natural solution of the conflict relies on determining the proper version of data on the basis of the given opinions of the participants. This final version should satisfy the following condition: It should best reflect the given versions.
The above defined condition should be suitable for this kind of conflict because the versions given by the conflict participants reflect the "hidden" and independent solution, but it is not known to what degree. Thus in advance each of them is treated as partially valid and partially invalid (which of its parts is valid and which of its parts is


invalid – it is not known). The degree to which an opinion is treated as valid is the same for each opinion. This degree may not be equal to 100%. The reason for which all the opinions should be taken into account is that it is not known how large this degree is. It is known only to be greater than 0% and smaller than 100%. In this way the consensus should at best reflect these opinions. In other words, it should at best represent them. For independent conflict resolution the solution of the problem may be determined by consensus methods. Here, for consensus calculation one should use the criterion of the minimal sum of distances between the consensus and the elements of the profile representing the opinions of the conflict participants. This criterion guarantees satisfying the condition mentioned above.
2. The solution is dependent on the opinions of the conflict participants. Conflicts of this kind are called dependent conflicts. In this case it is the opinions of the conflict participants which decide about the solution. As an example let us consider votes at an election. The result of the election is determined only on the basis of these votes. In general this case has a social or political character and the diversity between the opinions of the participants most often follows from differences of choice criteria or their hierarchy. For dependent conflicts the natural resolution relies on determining a version of data on the basis of the given opinions. This final version (consensus) should satisfy the following condition: It should be a good compromise which could be acceptable to the conflict participants. Thus the consensus should not only best represent the opinions but also reflect each of them to the same degree (with the assumption that each of them is treated in the same way). The condition of an "acceptable compromise" means that none of the opinions should be either "harmed" or "favored". Consider the following example: From a set of candidates (denoted by symbols X, Y, Z, ...) 4 voters have to choose a committee (as a subset of the candidates' set). To this end each voter votes for the committee which in his opinion is the best one. Assume that the votes are the following: {X, Y, Z}, {X, Y, Z}, {X, Y, Z} and {T}. Let the distance between 2 sets of candidates be equal to the cardinality of their symmetric difference. If the consensus choice is made only by the first condition then committee {X, Y, Z} should be determined, because the sum of distances between it and the votes is minimal. However, one can note that it prefers the first 3 votes while totally ignoring the fourth (the distances from this committee to the votes are 0, 0, 0 and 4, respectively). Now, if we take committee {X, Y, Z, T} as the consensus then the distances would be 1, 1, 1 and 3, respectively. In this case the consensus neither is too far from the votes nor "harms" any of them. It has been proved that these conditions in general may not be satisfied simultaneously [13]. It is true that the choice based on the criterion of minimization of the sum of squared distances between the consensus and the profile's elements gives a consensus more uniform than the consensus chosen by minimization of the sum of distances. Therefore, the criterion of the minimal sum of squared distances is also very important. However, the criterion of the minimal sum of squared distances often generates computationally complex problems (NP-hard problems), which demand working out heuristic algorithms.
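The two criteria from the committee example above can be checked directly. The sketch below is an editorial illustration (not the authors' code): it uses the symmetric-difference distance and compares the sum of distances and the sum of squared distances for the two committees discussed in the text.

```python
def dist(a, b):
    """Distance between two committees: cardinality of the symmetric difference."""
    return len(set(a) ^ set(b))

votes = [{"X", "Y", "Z"}, {"X", "Y", "Z"}, {"X", "Y", "Z"}, {"T"}]

def sum_dist(committee):        # criterion 1: sum of distances
    return sum(dist(committee, v) for v in votes)

def sum_sq_dist(committee):     # criterion 2: sum of squared distances
    return sum(dist(committee, v) ** 2 for v in votes)

for committee in [{"X", "Y", "Z"}, {"X", "Y", "Z", "T"}]:
    print(sorted(committee), sum_dist(committee), sum_sq_dist(committee))
# {X,Y,Z}   -> sum = 4, sum of squares = 16
# {X,Y,Z,T} -> sum = 6, sum of squares = 12  (the more uniform compromise)
```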
Figure 1 below presents the scheme of using consensus methods in the above mentioned cases.

Fig. 1. The scheme for using consensus methods: for a profile X representing a conflict (an unknown solution should be determined), if the solution is independent of the opinions of the conflict participants, then the consensus should at best represent the given opinions and the criterion of minimizing the sum of distances between the consensus and the profile's elements should be used; if the solution is dependent on the opinions of the conflict participants, then the consensus should be a compromise acceptable to the conflict participants and the criterion of minimizing the sum of the squares of distances between the consensus and the profile's elements should be used.

5 Postulates for Consistency Measures

Formally, let U denote a finite universe of objects (alternatives), and let Π(U) denote the set of subsets of U. By Π̂_k(U) we denote the set of k-element subsets (with repetitions) of the set U for k∈N, and let Π̂(U) = ∪_{k>0} Π̂_k(U). Each element of the set Π̂(U) is called a profile. In this work we do not use the formalism often used in consensus theory [1], in which the domain of consensus is defined as U* = ∪_{k>0} U^k, where U^k is the k-fold Cartesian product of U. In this way we specify how many times an object can occur in a profile and ensure that the order of profile elements is not important. We also accept in this paper an algebra of sets with repetitions (multisets) given by Lipski and Marek [7]. Some of its elements are as follows. An expression A = (x,x,y,y,y,z) is called a set with repetitions with cardinality equal to 6. In the set A element x appears 2 times, y 3 times and z one time. Set A can also be written as A = (2∗x,3∗y,1∗z). The sum of sets with repetitions is denoted by the symbol ∪̇ and is defined in the following way: if element x appears in set A n times and in B n′ times, then in their sum


A ∪̇ B the same element should appear n+n′ times. The difference of sets with repetitions is denoted by the symbol "−"; its definition follows from the following example: (6∗x,5∗y,1∗z) − (2∗x,3∗y,1∗z) = (4∗x,2∗y). For example, if A = (2∗x,3∗y,1∗z) and B = (4∗x,2∗y), then A ∪̇ B = (6∗x,5∗y,1∗z). A set A with repetitions is a subset of a set B with repetitions (A ⊆ B) if each element from A does not have a greater number of occurrences than it has in set B. For example (2∗x,3∗y,1∗z) ⊆ (2∗x,4∗y,1∗z). In this paper we only assume that the macrostructure of the set U is known as a

distance function d: U×U → ℜ, which is:
a) nonnegative: (∀x,y∈U)[d(x,y) ≥ 0],
b) reflexive: (∀x,y∈U)[d(x,y) = 0 iff x = y],
c) symmetrical: (∀x,y∈U)[d(x,y) = d(y,x)].

For the normalization process we can assume that the values of function d belong to the interval [0,1] and the maximal distance between elements of the universe U is equal to 1. Let us notice that the above conditions are only a part of the metric conditions. A metric is a good measure of distance, but its conditions are too strong. A space (U,d) defined in this way does not need to be a metric space. Therefore we will call it a distance space [13]. A profile X is called homogeneous if all its elements are identical, that is X = {n∗x} for some x∈U and n being a natural number. A profile is heterogeneous if it is not homogeneous. A profile is called distinguishable if all its elements are different from each other. A profile X is multiple referring to a profile Y (or X is a multiple of Y) if X = {n∗x1,..., n∗xk} and Y = {x1,...,xk}. A profile X is regular if it is a multiple of some distinguishable profile. By the symbol c we denote the consistency function of profiles. This function has the following signature:

c: Π̂(U) → [0,1],

where [0,1] is the closed interval of real numbers between 0 and 1. The idea of this function relies on measuring the consistency degree of the profile's elements. The consistency degree of a profile resembles the degrees of indiscernibility (discernibility) defined for an information system [22]. However, they are different conceptions. The difference is that the consistency degree represents the coherence level of the profile elements, and for its measuring one should first define the distances between these elements. The requirements for consistency are expressed in the following postulates:

P1a. Postulate for maximal consistency: If X is a homogeneous profile then c(X) = 1.

P1b. Extended postulate for maximal consistency: For X^(n) = {n∗x, k1∗x1, ..., km∗xm} being a profile such that element x occurs n times and element xi occurs ki times, where ki is constant for i = 1,2,…,m, the following equation should be true:

lim_{n→+∞} c(X^(n)) = 1 .


P2a. Postulate for minimal consistency: If X = {a,b} and d(a,b) = max{d(x,y) : x,y∈U} then c(X) = 0.

P2b. Extended postulate for minimal consistency: For X^(n) = {n∗a, k1∗x1, ..., km∗xm, n∗b} being a profile such that elements a and b occur n times, element xi occurs ki times, where ki is constant for i = 1,2,…,m, and d(a,b) = max{d(x,y) : x,y∈U}, the following equation should be true:

lim_{n→+∞} c(X^(n)) = 0 .

P2c. Alternative postulate for minimal consistency: If X = U then c(X) = 0.

P3. Postulate for non-zero consistency: If there exist a,b∈X such that d(a,b) < max{d(x,y) : x,y∈U} then c(X) > 0.

P4. Postulate for heterogeneous profiles: If X is a heterogeneous profile then c(X) < 1.

We can assume that n > 1 because if n = 1 then function c could be indefinite for Y. Let a′ be such an element of universe U that d(a′,Y) = min(D(Y)). It implies that d(a′,Y) ≤ d(a,Y). Besides, from d(a,b) = max{d(a,x) : x∈X} it implies that (n−1)·d(a,b) ≥ d(a,Y). Then we have

min(D(X))/card(X) = d(a,X)/n = (d(a,Y) + d(a,b))/n ≥ d(a,Y)/(n−1) ≥ d(a′,Y)/(n−1) = min(D(Y))/card(Y) .

Because function c satisfies postulate P6, there should be c(X) ≤ c(Y). This property allows one to improve the consistency by removing from the profile the element which generates the maximal distance to the element with the minimal sum of distances to the profile's elements. It also shows that if a consistency function satisfies postulate P6 then it should also partially satisfy postulate P7a.


Proposition 2. Let c∈CP6, and let a be such an element of universe U that d(a,X) = min(D(X)). The following dependence is true: c(X) ≤ c(X ∪̇ {a}).

Proof. Let Y = X ∪̇ {a}. From d(a,X) = min(D(X)) it implies that d(a,Y) = min(D(Y)). Besides, d(a,X) = d(a,Y), thus min(D(X))/card(X) ≥ min(D(Y))/card(Y) because card(Y) = card(X) + 1. Using the assumption that c∈CP6 we have c(X) ≤ c(X ∪̇ {a}).

This property allows one to improve the consistency by adding to the profile an element which generates the minimal sum of distances to the profile's elements. It also shows that if a consistency function satisfies postulate P6 then it should also partially satisfy postulate P7b. Propositions 3-5 below show the independence of postulates P7a and P7b from some other postulates.

Proposition 3. Postulates P1a and P2a are inconsistent with postulate P7a, that is CP1a ∩ CP2a ∩ CP7a = ∅.

Proof. We show that if a consistency function c satisfies postulates P1a and P2a then it cannot satisfy postulate P7a. Let c∈CP1a∩CP2a, let X = U = {a,b} and d(a,b) = max{d(x,y) : x,y∈U} > 0; then c(X) = 0 according to postulate P2a. Because c satisfies postulate

P1a we have c(X − {a}) = c({b}) = 1. Besides, we have d(a,X) = min(D(X)) and c(X − {a}) = 1 > c(X) = 0, so function c cannot satisfy postulate P7a. That means postulate P7a is independent of postulates P1a and P2a.

Proposition 4. Postulates P1a and P4 are inconsistent with postulate P7a, that is CP1a ∩ CP4 ∩ CP7a = ∅.

Proof. We show that if a consistency function c satisfies postulates P1a and P4 then it cannot satisfy postulate P7a. Let c∈CP1a∩CP4, let X = U = {a,b} and d(a,b) = max{d(x,y) : x,y∈U} > 0; then c(X) < 1 according to postulate P4. Because c satisfies postulate P1a we have c(X − {a}) = c({b}) = 1. Besides, we have d(a,X) = min(D(X)) and c(X − {a}) = 1 > c(X), so function c cannot satisfy postulate P7a. That means postulate P7a is independent of postulates P1a and P4.

Proposition 5. Postulates P2a and P3 are inconsistent with postulate P7b, that is CP2a ∩ CP3 ∩ CP7b = ∅.

Proof. We show that if a consistency function c satisfies postulates P2a and P3 then it cannot satisfy postulate P7b. Let c∈CP2a∩CP3, let X = U = {a,b} and d(a,b) = max{d(x,y) : x,y∈U} > 0; we have d(a,X) = min(D(X)) and d(b,X) = max(D(X)). Then c(X)


= 0 because c∈CP2a. But c(X ∪̇ {b}) = c({a,b,b}) > 0 because d(b,b) = 0 and function c satisfies postulate P3, so it may not satisfy postulate P7b. Here we have the independence of postulate P7b from postulates P2a and P3.

6 Consistency Functions Analysis

In this section we present the analysis of 4 consistency functions. These functions are defined as follows. Let X = {x1, …, xM} be a profile. We assume that M > 1, because if M = 1 then the profile X is a homogeneous one. We introduce the following parameters:
• The matrix of distances between the elements of profile X:

D^X = [d_ij^X], where d_ij^X = d(xi, xj) for i, j = 1, …, M,

• The vector of average distances between an element and the rest:

W^X = [w_i^X], where w_i^X = (1/(M−1)) Σ_{j=1}^{M} d_ji^X for i = 1, …, M,

• Diameters of the sets X and U:

Diam(X) = max{d(x,y) : x,y∈X}, Diam(U) = max{d(x,y) : x,y∈U} = 1,

and the maximal element of the vector W^X:

Diam(W^X) = max{w_i^X : 1 ≤ i ≤ M},

representing the element of profile X which generates the maximal sum of distances to other elements,
• The average distance in profile X:

d̄(X) = (1/(M(M−1))) Σ_{i=1}^{M} Σ_{j=1}^{M} d_ij^X = (1/M) Σ_{i=1}^{M} w_i^X,

• The sum of distances between an element x of universe U and the elements of the set X: d(x,X) = Σ_{y∈X} d(x,y),
• The set of all sums of distances: D(X) = {d(x,X) : x∈U},
• The minimal sum of distances from an object to the elements of profile X: d_min(X) = min(D(X)).

• The sum of distances between an element x of universe U and the elements of set X: d(x,X) = Σy∈X d(x,y), • The set of all sums of distances: D(X) = {d(x,X): x∈U}, • The minimal sum of distances from an object to the elements of profile X: dmin(X) = min (D(X)).


These parameters are now applied for defining the following consistency functions:

c1(X) = 1 − Diam(X),
c2(X) = 1 − Diam(W^X),
c3(X) = 1 − d̄(X),
c4(X) = 1 − (1/M) d_min(X).

The values of functions c1, c2, c3 and c4 reflect, respectively:
- c1(X) – the maximal distance between two elements of the profile. The intuitive sense of this function is based on the fact that if this maximal distance is equal to 0 then the consistency is maximal (that is, 1).
- c2(X) – the maximal average distance between an element of profile X and the other elements of this profile. If the value of this maximal average distance is small, that is, the elements of profile X are near each other, then the consistency should be high.
- c3(X) – the average distance between elements of X. This parameter seems to be the most representative for consistency. The larger this value is, the smaller is the consistency, and vice versa.
- c4(X) – the minimal average distance between an element of universe U and the elements of X. The element of universe U which generates the minimal average distance to the elements of profile X may be the consensus for this profile. The profile has a good consensus (that is, a good solution for the conflict) if this consensus generates a small average distance to the elements of the profile. In this case the consistency should be large.
Table 2 presented below shows the results of analysing these functions. The columns represent the postulates and the rows represent the defined functions. The symbol '+' means that the given function satisfies the postulate, the symbol '−' means that it does not satisfy the postulate, and the symbol '±' means partial satisfaction of the given postulate. From these results it follows that function c4 partially satisfies postulates P7a and P7b. The reason is based on the fact that function c4 satisfies postulate P6 and on Propositions 1 and 2.

Table 2. Results of consistency functions analysis.

     P1a  P1b  P2a  P2b  P2c  P3   P4   P5   P6   P7a  P7b
c1    +    −    +    +    +    −    +    +    −    −    −
c2    +    −    +    −    −    −    +    +    −    +    +
c3    +    +    +    −    −    +    +    −    −    +    +
c4    +    +    −    −    −    +    +    +    +    ±    ±
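For orientation only, the four functions defined above can be computed directly from the pairwise distances. The sketch below is an editorial illustration of the definitions (not code from the paper); it assumes a finite universe, so that d_min(X) can be found by enumeration, and a distance function d normalized to [0, 1].

```python
def consistency_measures(profile, universe, d):
    """Return (c1, c2, c3, c4) for a profile with M > 1 elements."""
    M = len(profile)
    diam_x = max(d(x, y) for x in profile for y in profile)          # Diam(X)
    w = [sum(d(x, y) for y in profile) / (M - 1) for x in profile]   # vector W^X
    diam_w = max(w)                                                  # Diam(W^X)
    avg = sum(w) / M                                                 # average distance in X
    d_min = min(sum(d(u, y) for y in profile) for u in universe)     # d_min(X)
    return 1 - diam_x, 1 - diam_w, 1 - avg, 1 - d_min / M
```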


Satisfying some postulates and not satisfying other postulates shows many properties of each consistency function. Below we present another property of functions c2 and c3 [2].

Proposition 6. If X′ ⊆ X, I is the set of indexes of elements from X′ and w_i^X = Diam(W^X) for i∈I, then profile X\X′ should not have smaller consistency than X, that is c(X\X′) ≥ c(X), where c∈{c2, c3}.

Proof. a) For function c2 the proof follows immediately from the observation that Diam(W^{X\X′}) ≤ Diam(W^X). b) For function c3 we have d̄(X) = (1/M) Σ_{i=1}^{M} w_i^X; with the assumption that w_i^X = Diam(W^X) for i∈I it follows that d̄(X) ≥ d̄(X\X′), that is c3(X\X′) ≥ c3(X).

This property shows a way to improve the consistency by removing from the profile those elements which generate the maximal average distance. This way of consistency improvement is simple and therefore is a useful property of functions c2 and c3. The way of consistency improvement using function c4, which satisfies postulate P6, has been presented by means of Propositions 1 and 2.

7 Practical Aspects of Consistency Measures

One of the practical aspects of consistency measures is their application to the choice of the best method for solving conflicts in distributed environments. Some methods for conflict solving have been developed, and each of them is suitable for a given kind of conflict. But before one decides to select a method for a conflict, one should take into account the degree of consistency of the opinions which occur in the conflict. This measure should be useful in evaluating whether the conflict is already "mature" enough for solving or not yet. Let us consider an example which illustrates the statement that it is good to know the consistency level before using consensus algorithms. Assume that one is collecting information from meteorological institutes about the weather for the city of Zakopane during the weekend. One wants to know if it will be snowing during the weekend. Five institutes say yes and five other institutes say no. Thus a conflict appears. The profile of the conflict looks as follows: X = {5∗yes, 5∗no}. The consensus, if determined, should be yes or no. However, neither yes nor no seems to be a good conflict solution. The consistency of the profile is low; according to postulates P2a and P5 it should be equal to 0. This is the reason why the solution is not good. Consensus algorithms are usually very complex. Therefore, it is worth checking the consistency of the conflict profile before determining the consensus. Evaluating a consistency measure before using consensus algorithms may eliminate those situations in which the consensus is not a good conflict solution. This will surely increase the


effectiveness of conflict-solving systems. In the above example there is no good conflict solution at all and one has to collect more information or choose another conflict solution method. Consistency measures are also very helpful for investigators during investigations. Witnesses' opinions about a suspect can be very inconsistent. The consistency degree of the evidence about a suspect may be used for determining the reliability of witnesses. Another interesting application of consistency measures is some kind of explorative system, where measurement results are collected in some interval of time. The results may be inconsistent, but when the consistency of results equals 1 then an alert can be sent. In this way we can measure, for example, the concentration of sulfur oxide. The scheme for the application of a consistency measure in a conflict situation may look as follows:
• First we should define the universe of all possible opinions on some subject,
• Then we should determine a conflict profile on this subject,
• After this, we have to find a proper distance function, and calculate the distances between elements in the created profile,
• Next, we choose the most proper consistency measure, which depends on the postulates that we want to be satisfied,
• We use the chosen measure to calculate the consistency degree,
• Now, we can use this level in the decision process.
As a matter of fact there are a lot of practical aspects of consistency measures. We can use the consistency degrees in multiagent systems and in all kinds of information systems where knowledge is processed by autonomous programs; in distributed database systems, where data consistency is one of the key factors; and also in reasoning systems and many others.
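The scheme above can be followed step by step for the weather profile discussed earlier. The sketch below is an editorial illustration only; it uses the discrete distance d(yes, no) = 1 and the function c1 from Section 6, which according to Table 2 satisfies P2a and P5 and here yields consistency 0, signalling that determining a consensus is not advisable.

```python
universe = ["yes", "no"]                 # step 1: all possible opinions
profile = ["yes"] * 5 + ["no"] * 5       # step 2: the conflict profile X = {5*yes, 5*no}

def d(x, y):                             # step 3: a distance function on the universe
    return 0.0 if x == y else 1.0

c1 = 1 - max(d(x, y) for x in profile for y in profile)   # steps 4-5: compute measure c1
print(c1)                                # 0.0 -- step 6: low degree, consensus not advisable
```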

8 Conclusions

In this paper the concept of measuring consistency degrees of conflict profiles has been presented. The authors formulate the conditions (postulates) which should be satisfied by consistency functions. These postulates are independent of the structure of conflict profiles. Some consistency functions have been defined and analyzed with respect to the postulates. Future work should concern a solid analysis of the presented postulates, which should allow one to choose appropriate consistency functions for concrete practical conflict situations. Besides, some implementations should be performed to justify the sense of the introduced postulates and consistency functions.

References

1. Barthelemy, J.P., Janowitz, M.F.: A Formal Theory of Consensus. SIAM J. Discrete Math. 4 (1991) 305–322
2. Danilowicz, C., Nguyen, N.T., Jankowski, Ł.: Methods for Selection of Representation of Agent Knowledge States in Multi-agent Systems. Wroclaw University of Technology Press (2002) (in Polish)


3. Day, W.H.E.: Consensus Methods as Tools for Data Analysis. In: Bock, H.H. (ed.): Classification and Related Methods for Data Analysis. North-Holland (1988) 312–324
4. Deja, R.: Using Rough Set Theory in Conflicts Analysis. Ph.D. Thesis (Advisor: A. Skowron), Institute of Computer Science, Polish Academy of Sciences, Warsaw (2000)
5. Eick, C.F., Werstein, P.: Rule-Based Consistency Enforcement for Knowledge-Based Systems. IEEE Transactions on Knowledge and Data Engineering 5 (1993) 52–64
6. Halpern, J.Y., Moses, Y.: Knowledge and Common Knowledge in a Distributed Environment. Journal of the Association for Computing Machinery 37 (1990) 549–587
7. Lipski, W., Marek, W.: Combinatorial Analysis. WNT, Warsaw (1986)
8. Loyer, Y., Spyratos, N., Stamate, D.: Integration of Information in Four-Valued Logics under Non-Uniform Assumption. In: Proceedings of the 30th IEEE International Symposium on Multiple-Valued Logic (2000) 180–193
9. Malowiecki, M., Nguyen, N.T.: Consistency Measures of Agent Knowledge in Multiagent Systems. In: Proceedings of the 8th National Conference on Knowledge Engineering and Expert Systems, Wroclaw Univ. of Tech. Press, vol. 2 (2003) 245–252
10. Neiger, G.: Simplifying the Design of Knowledge-Based Algorithms Using Knowledge Consistency. Information & Computation 119 (1995) 283–293
11. Neiger, G.: Knowledge Consistency: A Useful Suspension of Disbelief. In: Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge. Morgan Kaufmann, Los Altos, CA, USA (1988) 295–308
12. Ng, K.Ch., Abramson, B.: Uncertainty Management in Expert Systems. IEEE Expert: Intelligent Systems and Their Applications (1990) 29–48
13. Nguyen, N.T.: Using Distance Functions to Solve Representation Choice Problems. Fundamenta Informaticae 48(4) (2001) 295–314
14. Nguyen, N.T.: Consensus Choice Methods and their Application to Solving Conflicts in Distributed Systems. Wroclaw University of Technology Press (2002) (in Polish)
15. Nguyen, N.T.: Consensus System for Solving Conflicts in Distributed Systems. Journal of Information Sciences 147 (2002) 91–122
16. Nguyen, N.T., Sobecki, J.: Consensus versus Conflicts – Methodology and Applications. In: Proceedings of RSFDGrC 2003, Lecture Notes in Artificial Intelligence 2639 (2003) 565–572
17. Nguyen, N.T.: Susceptibility to Consensus of Conflict Profiles in Consensus Systems. Bulletin of International Rough Sets Society 5(1/2) (2001) 217–224
18. Pawlak, Z.: On Conflicts. Int. J. Man-Machine Studies 21 (1984) 127–134
19. Pawlak, Z.: Anatomy of Conflicts. Bull. EATCS 50 (1993) 234–246
20. Pawlak, Z.: An Inquiry into Anatomy of Conflicts. Journal of Information Sciences 109 (1998) 65–78
21. Skowron, A., Deja, R.: On Some Conflict Models and Conflict Resolution. Romanian Journal of Information Science and Technology 5(1-2) (2002) 69–82
22. Skowron, A., Rauszer, C.: The Discernibility Matrices and Functions in Information Systems. In: Słowiński, R. (ed.): Intelligent Decision Support. Handbook of Applications and Advances of the Rough Sets Theory. Kluwer Academic Publishers (1992) 331–362

Layered Learning for Concept Synthesis

Sinh Hoa Nguyen¹, Jan Bazan², Andrzej Skowron³, and Hung Son Nguyen³

¹ Polish-Japanese Institute of Information Technology, Koszykowa 86, 02-008 Warsaw, Poland
² Institute of Mathematics, University of Rzeszów, Rejtana 16A, 35-959 Rzeszów, Poland
³ Institute of Mathematics, Warsaw University, Banacha 2, 02-097 Warsaw, Poland
{hoa,bazan,skowron,son}@mimuw.edu.pl

Abstract. We present a hierarchical scheme for synthesis of concept approximations based on given data and domain knowledge. We also propose a solution, founded on rough set theory, to the problem of constructing the approximation of higher level concepts by composing the approximation of lower level concepts. We examine the eﬀectiveness of the layered learning approach by comparing it with the standard learning approach. Experiments are carried out on artiﬁcial data sets generated by a road traﬃc simulator. Keywords: Concept synthesis, hierarchical schema, layered learning, rough sets.

1 Introduction

Concept approximation is an important problem in data mining [10]. In a typical process of concept approximation we assume that there is given information consisting of values of conditional and decision attributes on objects from a finite subset (training set, sample) of the object universe, and on the basis of this information one should induce approximations of the concept over the whole universe. In many practical applications, this standard approach may show some limitations. Learning algorithms may go wrong if the following issues are not taken into account:
Hardness of Approximation: A target concept, being a composition of some simpler ones, is too complex and cannot be approximated directly from feature value vectors. The simpler concepts may be either approximated directly from data (by attribute values) or given as domain knowledge acquired from experts. For example, in the hand-written digit recognition problem, the raw input data are n × n images, where n ∈ [32, 1024] for typical applications. It is very hard to find an approximation of the target concept (digits) directly from the values of n^2 pixels (attributes). The most popular approach to this problem is based on defining some additional features, e.g., basic shapes or a skeleton graph. These features must be easily extracted from images, and they are used to describe the target concept.


Efficiency: The fact that the complex concept can be decomposed into simpler ones allows one to decrease the complexity of the learning process. Each component can be learned separately on a piece of a data set, and independent components can be learned in parallel. Moreover, dependencies between component concepts and their consequences can be approximated using domain knowledge and experimental data.
Expressiveness: Sometimes one can increase the readability of a concept description by introducing some additional concepts. The description is more understandable if it is expressed in natural language. For example, one can compare the readability of the following decision rules:

if car speed is high and a distance to a preceding car is small then a traffic situation is dangerous

if car speed(X) > 176.7 km/h and distance to front car(X) < 11.4 m then a traffic situation is dangerous

Layered learning [25] is an alternative approach to concept approximation. Given a hierarchical concept decomposition, the main idea is to synthesize a target concept gradually from simpler ones. One can imagine the decomposition hierarchy as a treelike structure (or acyclic graph structure) containing the target concept in the root. A learning process is performed through the hierarchy, from leaves to the root, layer by layer. At the lowest layer, basic concepts are approximated using feature values available from a data set. At the next layer more complex concepts are synthesized from basic concepts. This process is repeated for successive layers until the target concept is achieved. The importance of hierarchical concept synthesis is now well recognized by researchers (see, e.g., [15, 14, 12]). An idea of hierarchical concept synthesis in the rough mereological and granular computing frameworks has been developed (see, e.g., [15, 17, 18, 21]), and problems related to the approximation of compound concepts are discussed, e.g., in [18, 22, 5, 24]. In this paper we concentrate on concepts that are specified by decision classes in decision systems [13]. The crucial factor in inducing concept approximations is to create the concepts in a way that makes it possible to maintain an acceptable level of precision all along the way from basic attributes to the final decision. In this paper we discuss some strategies for concept composition founded on the rough set approach. We also examine the effectiveness of the layered learning approach by comparison with the standard rule-based learning approach. The quality of the new approach will be verified relative to the following criteria: generality of concept approximation, preciseness of concept approximation, computation time required for concept induction, and concept description lengths. Experiments are carried out on an artificial data set generated by a road traffic simulator.

2 Concept Approximation Problem

In many real life situations we are not able to give an exact definition of a concept. For example, frequently we use adjectives such as "good", "nice", "young" to describe some classes of people, but no one can give their exact


definition. The concept "young person" appears to be easy to define by age, e.g., with the rule: if age(X) ≤ 30 then X is young, but it is very unnatural to explain that "Andy is not young because yesterday was his 30th birthday". Such uncertain situations are caused either by the lack of information about the concept or by the richness of natural language. Let us assume that there exists a concept X defined over the universe 𝒰 of objects (X ⊆ 𝒰). The problem is to find a description of the concept X that can be expressed in a predefined descriptive language L consisting of formulas that are interpretable as subsets of 𝒰. In general, the problem is to find a description of a concept X in a language L (e.g., consisting of boolean formulae defined over a subset of attributes), assuming the concept is definable in another language (e.g., natural language, or one defined by other attributes, called decision attributes). Inductive learning is one of the most important approaches to concept approximation. This approach assumes that the concept X is specified partially, i.e., values of the characteristic function of X are given only for objects from a training sample U ⊆ 𝒰. Such information makes it possible to search for patterns in a given language L, defined on the training sample, that describe sets included (or sufficiently included) in a given concept (or its complement). Observe that the approximations of a concept cannot be defined uniquely from a given sample of objects. The approximations of the whole concept X are induced from given information on a sample U of objects (containing some positive examples from X ∩ U and negative examples from U − X). Hence, the quality of such approximations should be verified on new testing objects. One should also consider uncertainty that may be caused by methods of object representation. Objects are perceived by some features (attributes). Hence, some objects become indiscernible with respect to these features. In practice, objects from 𝒰 are perceived by means of vectors of attribute values (called information vectors or information signatures). In this case, the language L consists of boolean formulas defined over accessible attributes such that their values are effectively measurable on objects. We assume that L is a set of formulas defining subsets of 𝒰 and that boolean combinations of formulas from L are expressible in L. Due to bounds on the expressiveness of the language L in the universe 𝒰, we are forced to find some approximate rather than exact description of a given concept. There are different approaches to dealing with uncertain and vague concepts, like multi-valued logics, fuzzy set theory, or rough set theory. Using those approaches, concepts are defined by a "multi-valued membership function" instead of the classical "binary (crisp) membership relation" (set characteristic function). In particular, the rough set approach offers a way to establish membership functions that are data-grounded and significantly different from others. In this paper, the input data set is represented in the form of an information system or decision system. An information system [13] is a pair S = (U, A), where U is a non-empty, finite set of objects and A is a non-empty, finite set of attributes. Each a ∈ A corresponds to a function a : U → Va called an

190

Sinh Hoa Nguyen et al.

Elements of U can be interpreted as cases, states, patients, or observations. For a given information system S = (U, A) and any non-empty set of attributes B ⊆ A, the B-information signature of an object x ∈ U is defined by infB(x) = {(a, a(x)) : a ∈ B}. The set {infA(x) : x ∈ U} is called the A-information set and is denoted by INF(S). The above formal definition of information systems is very general; it covers many different systems, such as database systems, or information tables, i.e., two-dimensional arrays (matrices). In an information table we usually associate rows with objects (more precisely, with the information vectors of objects), columns with attributes, and cells with attribute values.

In supervised learning, objects from a training set are pre-classified into several categories or classes. To deal with this type of data we use special information systems called decision systems, i.e., information systems of the form S = (U, A, dec), where dec ∉ A is a distinguished attribute called the decision. The elements of the attribute set A are called conditions. In practice, decision systems contain a description of a finite sample U of objects from a larger (possibly infinite) universe U∞. Usually the decision attribute is the characteristic function of an unknown concept, or of several concepts in the case of several classes. The main problem of learning theory is to generalize the decision function (concept description), partially defined on the sample U, to the whole universe U∞. Without loss of generality we assume that the domain Vdec of the decision dec is equal to {1, ..., d}. The decision dec determines a partition U = CLASS1 ∪ ... ∪ CLASSd of the set U, where CLASSk = {x ∈ U : dec(x) = k} is called the k-th decision class of S, for 1 ≤ k ≤ d.
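To make these notions concrete, the following minimal Python sketch (an illustration added here, not taken from the paper; the toy objects, attributes and values are assumptions) represents a small decision system and computes the B-information signature infB(x) and the decision classes CLASSk.

# Minimal sketch of a decision system S = (U, A, dec); the toy data are illustrative only.
U = [1, 2, 3, 4]                                    # objects
A = ["a1", "a2"]                                    # condition attributes
values = {                                          # evaluation functions a : U -> V_a, plus dec
    1: {"a1": 1, "a2": "x", "dec": 1},
    2: {"a1": 1, "a2": "y", "dec": 2},
    3: {"a1": 2, "a2": "x", "dec": 1},
    4: {"a1": 1, "a2": "x", "dec": 1},
}

def inf(x, B):
    """B-information signature of object x: the set of (attribute, value) pairs."""
    return frozenset((a, values[x][a]) for a in B)

def decision_classes():
    """Partition of U into CLASS_k = {x in U : dec(x) = k}."""
    classes = {}
    for x in U:
        classes.setdefault(values[x]["dec"], set()).add(x)
    return classes

print(inf(1, A))            # e.g. frozenset({('a1', 1), ('a2', 'x')})
print(decision_classes())   # {1: {1, 3, 4}, 2: {2}}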

3  Concept Approximation Based on Rough Set Theory

Rough set methodology for concept approximation can be described as follows (see [5]).

Definition 1. Let X ⊆ U∞ be a concept and let U ⊆ U∞ be a finite sample of U∞. Assume that for any x ∈ U it is known whether x ∈ X ∩ U or x ∈ U − X. A rough approximation of the concept X in a given language L (induced by the sample U) is any pair (LL, UL) satisfying the following conditions:
1. LL ⊆ UL ⊆ U∞;
2. LL and UL are expressible in the language L, i.e., there exist two formulas φL, φU ∈ L such that LL = {x ∈ U∞ : x satisfies φL} and UL = {x ∈ U∞ : x satisfies φU};
3. LL ∩ U ⊆ X ∩ U ⊆ UL ∩ U;
4. the set LL (UL) is maximal (minimal) in the family of sets definable in L satisfying condition (3).


The sets LL and UL are called the lower approximation and the upper approximation of the concept X ⊆ U∞, respectively. The set BN = UL \ LL is called the boundary region of the approximation of X. The set X is called rough with respect to its approximations (LL, UL) if LL ≠ UL; otherwise X is called crisp in U∞. The pair (LL, UL) is also called the rough set (for the concept X). Condition (3) in the above list can be replaced by inclusion to a degree, to make it possible to induce approximations of higher quality of the concept on the whole universe U∞. In practical applications the last condition in the above definition can be hard to satisfy; hence, using some heuristics, we construct sub-optimal rather than maximal or minimal sets. Also, since during the construction of the approximations we only know the sample U, it may be necessary to change the approximations after we gain more information about new objects from U∞.

The rough approximation of a concept can also be defined by means of a rough membership function.

Definition 2. Let X ⊆ U∞ be a concept and let U ⊆ U∞ be a finite sample. A function f : U∞ → [0, 1] is called a rough membership function of the concept X ⊆ U∞ if and only if (Lf, Uf) is an approximation of X (induced from the sample U), where Lf = {x ∈ U∞ : f(x) = 1} and Uf = {x ∈ U∞ : f(x) > 0}.

Note that the proposed approximations are not defined uniquely by the information about X on the sample U; they are obtained by inducing approximations of the concept X ⊆ U∞ from such information. Hence, the quality of the approximations should be verified on new objects, and information about classifier performance on new objects can be used to gradually improve the concept approximations. Parameterizations of rough membership functions corresponding to classifiers make it possible to discover new relevant patterns on the object universe extended by adding new (testing) objects. In the following sections we present illustrative examples of such parameterized patterns. By tuning the parameters of such patterns one can obtain patterns relevant for concept approximation on the training sample extended by some testing objects.
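Definition 2 ties a rough membership function to the corresponding pair of approximations. The following small Python sketch (an illustration added here, not part of the paper; the membership values are made up) derives (Lf, Uf) and the boundary region from membership values known on a finite set of objects.

# Illustrative sketch: deriving (L_f, U_f) of Definition 2 from a rough membership function f.
def approximations(f, objects):
    lower = {x for x in objects if f(x) == 1.0}    # L_f = {x : f(x) = 1}
    upper = {x for x in objects if f(x) > 0.0}     # U_f = {x : f(x) > 0}
    boundary = upper - lower                       # BN = U_f \ L_f
    return lower, upper, boundary

membership = {1: 1.0, 2: 0.6, 3: 0.0, 4: 1.0, 5: 0.2}    # toy values, for illustration only
L, Up, BN = approximations(lambda x: membership[x], membership)
print(L, Up, BN)    # {1, 4} {1, 2, 4, 5} {2, 5}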

3.1  Case-Based Rough Approximations

For case-based reasoning methods, such as the kNN (k nearest neighbors) classifier [1, 6], we define a distance (similarity) function between objects δ : U∞ × U∞ → [0, ∞). The problem of determining the distance function from a given data set is not trivial, but in this paper we assume that such a distance function has already been defined for all pairs of objects. In kNN classification methods (kNN classifiers), the decision for a new object x ∈ U∞ − U is made on the basis of the decisions of the k objects from U that are nearest to x with respect to the distance function δ. Usually, k is a parameter defined by an expert or determined automatically by experiments on data. Let us denote by NN(x; k) the set of k nearest neighbors of x, and by ni(x) = |NN(x; k) ∩ CLASSi| the number of objects from NN(x; k) that belong to the i-th decision class. The kNN classifiers often use a voting algorithm for decision making, i.e.,

  dec(x) = Voting(⟨n1(x), ..., nd(x)⟩) = arg max_i ni(x).


In the case of imbalanced data, the vector ⟨n1(x), ..., nd(x)⟩ can be scaled with respect to the global class distribution before the voting algorithm is applied.

A rough approximation based on the set NN(x; k), i.e., an extension of the kNN classifier, can be defined as follows. Assume that 0 ≤ t1 < t2 < k and let us consider, for the i-th decision class CLASSi ⊆ U∞, a function with parameters t1, t2 defined for any object x ∈ U∞ by

  µ_CLASSi^{t1,t2}(x) = 1                           if ni(x) ≥ t2,
                        (ni(x) − t1) / (t2 − t1)    if ni(x) ∈ (t1, t2),        (1)
                        0                           if ni(x) ≤ t1,

where ni(x) is the i-th coordinate in the class distribution ClassDist(NN(x; k)) = ⟨n1(x), ..., nd(x)⟩ of NN(x; k). Let us assume that the parameters t1^o, t2^o have been chosen in such a way that the above function satisfies, for every x ∈ U, the following conditions:

  if µ_CLASSi^{t1^o,t2^o}(x) = 1 then [x]A ⊆ CLASSi ∩ U,            (2)
  if µ_CLASSi^{t1^o,t2^o}(x) = 0 then [x]A ∩ (CLASSi ∩ U) = ∅,      (3)

where [x]A = {y ∈ U : infA(x) = infA(y)} denotes the indiscernibility class defined by x relative to the fixed set of attributes A.

Then the function µ_CLASSi^{t1^o,t2^o}, considered on U∞, can be treated as the rough membership function of the i-th decision class. It is the result of induction, on U∞, of the rough membership function of the i-th decision class restricted to the sample U. The function µ_CLASSi^{t1^o,t2^o} defines rough approximations L_kNN(CLASSi) and U_kNN(CLASSi) of the i-th decision class CLASSi: for any object x ∈ U∞ we have x ∈ L_kNN(CLASSi) ⇔ ni(x) ≥ t2^o and x ∈ U_kNN(CLASSi) ⇔ ni(x) ≥ t1^o. Certainly, one can consider in conditions (2)-(3) inclusion to a degree and equality to a degree instead of the crisp inclusion and the crisp equality. Such degrees additionally parameterize the extracted patterns, and by tuning them one can search for relevant patterns.

As mentioned above, kNN methods have some drawbacks. One of them is caused by the assumption that the distance function is defined a priori for all pairs of objects, which is not the case for many complex data sets. In the next section we present an alternative way to define rough approximations from data.
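As an added illustration (not from the paper), the following Python sketch computes the class distribution of the k nearest neighbours and the membership value of equation (1); the distance function, the toy data and the thresholds t1, t2 are assumptions, and no attempt is made to tune them so that conditions (2)-(3) hold.

# Sketch of the kNN-based rough membership of equation (1); data and thresholds are illustrative.
def class_distribution(x, train, delta, k, num_classes):
    """ClassDist(NN(x;k)) = <n_1(x), ..., n_d(x)> for the k nearest neighbours of x."""
    neighbours = sorted(train, key=lambda pair: delta(x, pair[0]))[:k]
    counts = [0] * num_classes
    for _, cls in neighbours:
        counts[cls - 1] += 1
    return counts

def mu_knn(x, i, train, delta, k, num_classes, t1, t2):
    """Membership of x in CLASS_i according to equation (1), with thresholds t1 < t2."""
    n_i = class_distribution(x, train, delta, k, num_classes)[i - 1]
    if n_i >= t2:
        return 1.0
    if n_i <= t1:
        return 0.0
    return (n_i - t1) / (t2 - t1)

# Toy one-dimensional data: (attribute value, decision class), with |u - v| as the distance.
train = [(0.1, 1), (0.2, 1), (0.3, 2), (0.9, 2), (1.0, 2)]
delta = lambda u, v: abs(u - v)
print(mu_knn(0.15, 1, train, delta, k=3, num_classes=2, t1=0, t2=3))   # 2/3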

3.2  Rule-Based Rough Approximations

In this section we describe the rule-based rough set approach to approximations. Let S = (U, A, dec) be a decision table. A decision rule for the k-th decision class is any expression of the form

  (a_{i1} = v1) ∧ ... ∧ (a_{im} = vm) ⇒ (dec = k),        (4)

where a_{ij} ∈ A and vj ∈ V_{a_{ij}}. Any decision rule r of the form (4) can be characterized by the following parameters:

– length(r): the number of descriptors on the left hand side of the implication;
– [r]: the carrier of r, i.e., the set of objects satisfying the premise of r;
– support(r) = card([r] ∩ CLASSk);
– confidence(r), introduced to measure the truth degree of the decision rule:

  confidence(r) = support(r) / card([r]).        (5)
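For illustration (the table encoding and the example rule below are our assumptions, not data from the paper), these parameters can be computed directly from a decision table as in the following sketch.

# Illustrative computation of length(r), [r], support(r) and confidence(r) for one decision rule.
table = [   # each row: condition attribute values plus the decision
    {"a": 1, "b": 0, "dec": 1},
    {"a": 1, "b": 1, "dec": 1},
    {"a": 1, "b": 0, "dec": 2},
    {"a": 0, "b": 0, "dec": 2},
]
rule = {"premise": {"a": 1}, "decision": 1}    # the rule (a = 1) => (dec = 1)

carrier = [row for row in table if all(row[attr] == v for attr, v in rule["premise"].items())]
support = sum(1 for row in carrier if row["dec"] == rule["decision"])
confidence = support / len(carrier)

print(len(rule["premise"]), len(carrier), support, confidence)   # 1 3 2 0.666...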

The decision rule r is called consistent with S if confidence(r) = 1. Among the decision rule generation methods developed within the rough set approach, one of the most interesting is related to minimal consistent decision rules. Given a decision table S = (U, A, dec), a rule r is called a minimal consistent decision rule (with respect to S) if it is consistent with S and any decision rule created from r by removing one of the descriptors from its left hand side is no longer consistent with S. The set of all minimal consistent decision rules for a given decision table S, denoted by MinConsRules(S), can be computed by extracting from the decision table object oriented reducts (also called local reducts relative to objects) [3, 9, 26]. The elements of MinConsRules(S) can be treated as interesting, valuable and useful patterns in data and used as a knowledge base in classification systems. Unfortunately, the number of such patterns can be exponential with respect to the size of the given decision table [3, 9, 26, 23]. In practice, we must apply some heuristics, like rule filtering or object covering, to select a subset of decision rules.

Given a decision table S = (U, A, dec), let us assume that RULES(S) is a set of decision rules induced by some rule extraction method. For any object x ∈ U∞, let MatchRules(S, x) be the set of rules from RULES(S) supported by x. One can define the rough membership function µ_CLASSk : U∞ → [0, 1] for the concept determined by CLASSk as follows:

1. Let R_yes(x) be the set of all decision rules from MatchRules(S, x) for the k-th class, and let R_no(x) ⊂ MatchRules(S, x) be the set of decision rules for the other classes.
2. We define two real values w_yes(x) and w_no(x), called the "for" and "against" weights for the object x, by

  w_yes(x) = Σ_{r ∈ R_yes(x)} strength(r),    w_no(x) = Σ_{r ∈ R_no(x)} strength(r),        (6)

where strength(r) is a normalized function depending on the length, support and confidence of r and on some global information about the decision table S, such as the table size and the class distribution (see [3]).
3. The value of µ_CLASSk(x) is defined by

  µ_CLASSk(x) = undetermined                                 if max(w_yes(x), w_no(x)) < ω,
                0                                            if w_no(x) − w_yes(x) ≥ θ and w_no(x) > ω,
                1                                            if w_yes(x) − w_no(x) ≥ θ and w_yes(x) > ω,
                (θ + (w_yes(x) − w_no(x))) / (2θ)            in other cases,

where ω, θ are parameters set by the user.


These parameters make it possible to control, in a flexible way, the size of the boundary region of the approximations established according to Definition 2. Let us assume that for θ = θ^o > 0 the above function satisfies, for every x ∈ U, the following conditions:

  if µ_CLASSk^{θ^o}(x) = 1 then [x]A ⊆ CLASSk ∩ U,             (7)
  if µ_CLASSk^{θ^o}(x) = 0 then [x]A ∩ (CLASSk ∩ U) = ∅,       (8)

where [x]A = {y ∈ U : infA(x) = infA(y)} denotes the indiscernibility class defined by x with respect to the set of attributes A.

Then the function µ_CLASSk^{θ^o}, considered on U∞, can be treated as the rough membership function of the k-th decision class. It is the result of induction, on U∞, of the rough membership function of the k-th decision class restricted to the sample U. The function µ_CLASSk^{θ^o} defines rough approximations L_rule(CLASSk) and U_rule(CLASSk) of the k-th decision class CLASSk, where L_rule(CLASSk) = {x : w_yes(x) − w_no(x) ≥ θ^o} and U_rule(CLASSk) = {x : w_yes(x) − w_no(x) ≥ −θ^o}.
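A minimal sketch of the rule-based membership function defined above is given below; the strength function used here (support divided by 100) and the parameter values ω, θ are illustrative assumptions and do not reproduce the normalization of [3].

# Sketch of the rule-based membership: "for"/"against" weights and the omega/theta thresholds.
def rule_membership(matched_rules, k, omega, theta, strength):
    """matched_rules plays the role of MatchRules(S, x); each rule carries its decision class."""
    w_yes = sum(strength(r) for r in matched_rules if r["decision"] == k)
    w_no = sum(strength(r) for r in matched_rules if r["decision"] != k)
    if max(w_yes, w_no) < omega:
        return None                                   # undetermined
    if w_no - w_yes >= theta and w_no > omega:
        return 0.0
    if w_yes - w_no >= theta and w_yes > omega:
        return 1.0
    return (theta + (w_yes - w_no)) / (2 * theta)     # value inside the boundary region

# Illustrative matched rules and a naive strength function (an assumption, not the one from [3]).
matched = [{"decision": 1, "support": 30}, {"decision": 2, "support": 10}]
print(rule_membership(matched, k=1, omega=0.05, theta=0.5,
                      strength=lambda r: r["support"] / 100))   # 0.7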

4  Hierarchical Scheme for Concept Synthesis

In this section we present a general layered learning scheme for concept synthesis. We first recall the main principles of the layered learning paradigm [25].

1. Layered learning is designed for domains that are too complex for learning a mapping directly from the input to the output representation. The layered learning approach consists in breaking a problem down into several task layers. At each layer a concept needs to be acquired, and a learning algorithm solves the local concept-learning task.
2. Layered learning uses a bottom-up, incremental approach to hierarchical concept decomposition. Starting with low-level concepts, the process of creating new sub-concepts continues until the high-level concepts, which deal with the full domain complexity, are reached. The appropriate learning granularity and the sub-concepts to be learned are determined as a function of the specific domain. Concept decomposition in layered learning is not automated; the layers and concept dependencies are given as background knowledge of the domain.
3. Sub-concepts may be learned independently and in parallel, and the learning algorithms may be different for different sub-concepts in the decomposition hierarchy. Layered learning is effective for huge data sets and is useful for adaptation when the training set changes dynamically.
4. The key characteristic of layered learning is that each learned layer directly affects learning at the next layer.

When using the layered learning paradigm, we assume that the target concept can be decomposed into simpler ones, called sub-concepts.


A hierarchy of concepts has a tree-like structure: a higher level concept is constructed from concepts at lower levels. We assume that the concept decomposition hierarchy is given by domain knowledge [18, 21]. However, one should observe that the concepts and the dependencies among them represented in domain knowledge are often expressed in natural language. Hence, there is a need to approximate such concepts and such dependencies, as well as the whole reasoning. This issue is directly related to the computing with words paradigm [27, 28] and to the rough-neural approach [12], in particular to rough mereological calculi on information granules (see, e.g., [15–19]).

[Figure 1 depicts a tree of concepts C0, C1, ..., Ck arranged in levels (the l-th level is marked); for each concept Ck the figure indicates Ak (attributes for learning Ck), Uk (training objects for learning Ck) and hk (the output of ALGk), which feeds the concept at the next level.]

Fig. 1. Hierarchical scheme for concept approximation

The goal of a layered learning algorithm is to construct a scheme for concept composition. This scheme is a structure consisting of levels, and each level consists of concepts (C0, C1, ..., Cn). Each concept Ck is defined as a tuple

  Ck = (Uk, Ak, Ok, ALGk, hk),        (9)

where (Figure 1):
– Uk is a set of objects used for learning the concept Ck,
– Ak is the set of attributes relevant for learning the concept Ck,
– Ok is the set of outputs used to define the concept Ck,
– ALGk is the algorithm used for learning the function mapping vectors of values over Ak into Ok,
– hk is the hypothesis returned by the algorithm ALGk as a result of its run on the training set Uk.

The hypothesis hk of the concept Ck at the current level directly affects the next level in the following ways:
1. hk is used to construct the set of training examples U of a concept C at the next level, if C is a direct ancestor of Ck in the decomposition hierarchy.
2. hk is used to construct the set of features A of a concept C at the next level, if C is a direct ancestor of Ck in the decomposition hierarchy.
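The tuple (9) and the way hk feeds the next layer can be mirrored by a simple data structure; the sketch below is only an illustration of this interface (the field names follow the tuple above, the learning algorithm is a placeholder), not an implementation used by the authors.

# Illustrative container for a concept C_k = (U_k, A_k, O_k, ALG_k, h_k) in the hierarchy.
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Concept:
    name: str
    U_k: List[Any]                                    # training objects for learning C_k
    A_k: List[str]                                    # attributes relevant for learning C_k
    O_k: List[str]                                    # outputs used to define C_k
    ALG_k: Callable[[List[Any], List[str]], Any]      # learning algorithm
    sub_concepts: List["Concept"] = field(default_factory=list)
    h_k: Any = None                                   # hypothesis returned by ALG_k

    def learn(self):
        # h_k is induced from (U_k, A_k); its outputs O_k become features for the ancestor concept.
        self.h_k = self.ALG_k(self.U_k, self.A_k)
        return self.h_k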


To construct a layered learning algorithm, for any concept Ck in the concept decomposition hierarchy one must solve the following problems:
1. Define the set of training examples Uk used for learning Ck. The training sets at the lowest level are subsets of the input data set; the training set Uk at a higher level is composed from the training sets of the sub-concepts of Ck.
2. Define an attribute set Ak relevant for approximating the concept Ck. At the lowest level the attribute set Ak is a subset of the available attribute set. At higher levels the set Ak is created from the attribute sets of the sub-concepts of Ck, from attributes of the input data, and/or from newly created attributes. The attribute set Ak is chosen depending on the domain of the concept Ck.
3. Define an output set describing the concept Ck.
4. Choose an algorithm that learns the concept Ck on the basis of a given object set and the defined attribute set.

In the next section we discuss in detail methods for concept synthesis. The foundation of our methods is rough set theory. We have already presented some preliminaries of rough set theory as well as parameterized methods for basic concept approximation; they are a generalization of existing rough set based methods. Let us now describe strategies for composing concepts from sub-concepts.

4.1  Approximation of Compound Concept

We assume that a concept hierarchy H is given. A training set is represented by a decision table S = (U, A, D), where D is a set of decision attributes. Among them are decision attributes corresponding to all basic concepts and a decision attribute for the target concept; the decision values indicate whether an object belongs to the basic concepts or to the target concept, respectively. Using the information available from the concept hierarchy, for each basic concept Cb one can create a training decision system S_Cb = (U, A_Cb, dec_Cb), where A_Cb ⊆ A and dec_Cb ∈ D. To approximate the concept Cb one can apply any classical method (e.g., kNN, supervised clustering, or a rule-based approach [7, 11]) to the table S_Cb. For example, one can use the case-based reasoning approach presented in Section 3.1 or the rule-based reasoning approach proposed in Section 3.2 for basic concept approximation. In the further discussion we assume that basic concepts are approximated by rule-based classifiers derived from the relevant decision tables.

To avoid overly complicated notation, let us limit ourselves to the case of constructing a compound concept approximation on the basis of two simpler concept approximations. Assume we have two concepts C1 and C2 that are given to us in the form of rule-based approximations derived from decision systems S_C1 = (U, A_C1, dec_C1) and S_C2 = (U, A_C2, dec_C2). Hence, we are given two rough membership functions µ_C1(x), µ_C2(x). These functions are determined with the use of the parameter sets {w_yes^C1, w_no^C1, ω^C1, θ^C1} and {w_yes^C2, w_no^C2, ω^C2, θ^C2}, respectively. We want to establish a similar set of parameters {w_yes^C, w_no^C, ω^C, θ^C} for the target concept C, which we want to describe with the use of a rough membership function µ_C.


As previously noted, the parameters ω, θ controlling the boundary region are user-configurable. But we need to derive {w_yes^C, w_no^C} from data. The issue is to define a decision system from which the rules used to define the approximations can be derived. We assume that both simpler concepts C1, C2 and the target concept C are defined over the same universe of objects U∞ and, moreover, that all of them are given on the same sample U ⊂ U∞. To complete the construction of the decision system S_C = (U, A_C, dec_C), we need to specify the conditional attributes in A_C and the decision attribute dec_C. The decision value dec_C(x) is given for any object x ∈ U. For the conditional attributes, we assume that they are either the rough membership functions of the simpler concepts (i.e., A_C = {µ_C1(x), µ_C2(x)}) or the weights of the simpler concepts (i.e., A_C = {w_yes^C1, w_no^C1, w_yes^C2, w_no^C2}). The output set O_i for each concept C_i, i = 1, 2, consists of one attribute, namely the rough membership function µ_Ci, in the first case, or of two attributes w_yes^Ci, w_no^Ci, describing the fitting degrees of objects to the concept C_i and to its complement, in the second case. The rule-based approximations of the concept C are created by extracting rules from S_C.

It is important to observe that such rules describing C use attributes that are in fact classifiers themselves. Therefore, in order to obtain a more readable and intuitively understandable description, as well as more control over the quality of the approximation (especially for new cases), it pays to stratify and interpret the attribute domains of the attributes in A_C. Instead of using just the value of a membership function or a weight, we would prefer to use linguistic statements such as "the likelihood of the occurrence of C1 is low". In order to do that, we have to map the attribute value sets onto some limited family of subsets. Such subsets are then identified with notions such as "certain", "low", "high", etc. It is quite natural, especially in the case of attributes that are membership functions, to introduce linearly ordered subsets of the attribute ranges, e.g., {negative, low, medium, high, positive}. That yields a fuzzy-like layout of attribute values. One may (and in some cases should) also consider the case when these subsets overlap; then more than one linguistic value, e.g., low and medium, may be attached to an attribute value.

Stratification of attribute values and the introduction of a linguistic variable attached to the strata serve multiple purposes. First, it provides a way of representing knowledge in a more human-readable format: if we have a new situation (a new object x* ∈ U∞ \ U) to be classified (checked against compliance with the concept C), we may use rules like:

  If compliance of x* with C1 is high or medium and compliance of x* with C2 is high then x* ∈ C.

Another advantage of imposing a division of the attribute value sets lies in the extended control over the flexibility and validity of the system constructed in this way. As we may define the linguistic variables and the corresponding intervals, we gain the ability to make the system more stable and inductively correct. In this way we control the general layout of the boundary regions of the simpler concepts that contribute to the construction of the target concept.
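As an illustration of the two steps just described, i.e., building the table S_C from the outputs of the sub-concept classifiers and stratifying its attribute values into linguistic labels, consider the following sketch; the cut points of the strata and the toy membership values are assumptions made only for this example.

# Sketch: building S_C from sub-concept memberships and stratifying the values linguistically.
def stratify(v, cuts=(0.0, 0.25, 0.5, 0.75, 1.0),
             labels=("negative", "low", "medium", "high", "positive")):
    """Map a membership value in [0, 1] to a linguistic label; the cut points are illustrative."""
    for cut, label in zip(cuts, labels):
        if v <= cut:
            return label
    return labels[-1]

def build_S_C(sample, mu_C1, mu_C2, dec_C):
    """Conditional attributes: stratified memberships in C1 and C2; decision attribute: dec_C."""
    return [{"C1": stratify(mu_C1(x)), "C2": stratify(mu_C2(x)), "dec": dec_C(x)}
            for x in sample]

# Toy memberships and target decision (illustrative values only).
sample = [1, 2, 3]
mu1 = {1: 0.9, 2: 0.4, 3: 0.1}
mu2 = {1: 0.8, 2: 0.6, 3: 0.0}
dec = {1: "YES", 2: "NO", 3: "NO"}
print(build_S_C(sample, mu1.get, mu2.get, dec.get))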


The process of setting the intervals for the attribute values may be performed by hand, especially when additional background information about the nature of the described problem is available. One may also rely on automated methods for such interval construction, such as clustering, template analysis or discretization. An extended discussion of the foundations of this approach, which is related to rough-neural computing [12, 18] and computing with words, can be found in [24, 20].

Algorithm 1  Layered learning algorithm
Input: decision system S = (U, A, d), concept hierarchy H
Output: scheme for concept composition
 1: begin
 2:   for l := 0 to max_level do
 3:     for (any concept Ck at the level l in H) do
 4:       if l = 0 then
 5:         Uk := U;
 6:         Ak := B;          // where B ⊆ A is a set of attributes relevant to define Ck
 7:       else
 8:         Uk := U;
 9:         Ak := ⋃ Oi;       // for all sub-concepts Ci of Ck, where Oi is the output vector of Ci
10:       end if
11:       Generate a rule set RULE(Ck) to determine the approximation of Ck;
12:       for any object x ∈ Uk do
13:         generate the output vector (w_yes^Ck(x), w_no^Ck(x));
                               // where w_yes^Ck(x) is the fitting degree of x to the concept Ck
                               // and w_no^Ck(x) is the fitting degree of x to the complement of Ck
14:       end for
15:     end for
16:   end for
17: end

Algorithm 1 is the layered learning algorithm used in our experiments.

5  Experimental Results

To verify the effectiveness of the layered learning approach, we implemented Algorithm 1 for concept composition as presented in Section 4.1. The experiments were performed on data generated by a road traffic simulator, described in the following section.

5.1  Road Traffic Simulator

The road simulator is a computer tool that generates data sets consisting of recordings of vehicle movements on roads and at crossroads. Such data sets are used to learn and test complex concept classifiers working on information coming from different devices and sensors monitoring the situation on the road.


Fig. 2. Left: the board of simulation.

A driving simulation takes place on a board (see Figure 2) that presents a crossroads together with access roads. During the simulation, vehicles may enter the board from all four directions, that is, east, west, north and south. The vehicles coming to the crossroads from the south and north have the right of way with respect to the vehicles coming from the west and east. Each vehicle entering the board has only one goal: to drive through the crossroads safely and leave the board. Both the entering and exiting roads of a given vehicle are determined at the beginning, that is, at the moment the vehicle enters the board. Each vehicle may perform the following maneuvers during the simulation: passing, overtaking, changing direction (at the crossroads), changing lanes, entering the traffic from a minor road onto the main road, stopping, and pulling out. Planning of each vehicle's further steps takes place independently in each step of the simulation. Each vehicle, "observing" the surrounding situation on the road and keeping in mind its destination and its own parameters, makes an independent decision about its further steps: whether it should accelerate or decelerate and what maneuver (if any) should be commenced, continued, or stopped.

We associate the simulation parameters with the readouts of different measuring devices or technical equipment placed inside the vehicle or in the outside environment (e.g., by the road, in a helicopter observing the situation on the road, or in a police car). These are devices and equipment playing the role of detecting devices or converters, i.e., sensors (e.g., a thermometer, range finder, video camera, radar, image and sound converter). The attributes taking the simulation parameter values will, by analogy to the devices providing their values, be called sensors. Exemplary sensors are: distance from the crossroads (in screen units), vehicle speed, acceleration and deceleration, etc.


Apart from the sensors, the simulator registers a few more attributes whose values are determined from the sensors' values in a way specified by an expert. In the present version of the simulator these parameters take binary values and are therefore called concepts. Concept definitions very often take the form of a question to which one can answer YES, NO or DOES NOT CONCERN (the NULL value). Figure 3 shows an exemplary relationship diagram for some of the concepts used in our experiments.

Fig. 3. The relationship diagram for exemplary concepts

During the simulation, when a new vehicle appears on the board, its so-called driver's profile is determined and does not change until the vehicle disappears from the board. The profile may take one of the following values: a very careful driver, a careful driver, or a careless driver. The driver's profile is the identity of the driver, and further decisions as to the way of driving are made according to this identity. Depending on the driver's profile and the weather conditions, speed limits are determined which cannot be exceeded. The humidity of the road influences the braking distance: depending on the humidity, different speed changes take place within one simulation step for the same braking mode. The driver's profile also influences the speed limits dictated by visibility. If another vehicle is invisible to a given vehicle, that vehicle is not taken into consideration in the independent planning of further driving by the given car. Because this may cause dangerous situations, speed limits depending on the driver's profile are imposed on the vehicle.

During the simulation, data may be generated and stored in a text file. The generated data form an information table; each row describes the situation of a single vehicle, and the sensors' and concepts' values are registered for the given vehicle and its neighboring vehicles. Within each simulation step the descriptions of the situations of all vehicles are saved to the file.


5.2  Experiment Description

A number of different data sets have been created with the road traffic simulator. They are named cxx syyy, where xx is the number of cars and yyy is the number of time units of the simulation process. The following data sets have been generated for our experiments: c10 s100, c10 s200, c10 s300, c10 s400, c10 s500, c20 s500. Let us emphasize that the first data set consists of about 800 situations, whereas the last one is the largest data set that can be generated by the simulator and consists of about 10000 situations. Every data set has 100 attributes and an imbalanced class distribution: about 6% ± 2% of the situations are unsafe. Every data set cxx syyy was divided randomly into two subsets, cxx syyy.trn and cxx syyy.test, in the proportion 80% to 20%, respectively. The data sets of the form cxx syyy.trn are used for learning the concept approximations. We consider two testing models, called testing for similar situations and testing for new situations, described as follows:

Model I: Testing for similar situations. This model uses the data sets of the form cxx syyy.test for testing the quality of the approximation algorithms. The situations used in this testing model are generated by the same simulation process as the training situations.

Model II: Testing for new situations. This model uses data from a new simulation process. For this model we create new data sets with the simulator, named c10 s100N, c10 s200N, c10 s300N, c10 s400N, c10 s500N and c20 s500N, respectively.

We compare the quality of two learning approaches, called RS rule-based learning (RS) and RS-layered learning (RS-L). In the first approach we employed the RSES system [4] to generate the set of minimal decision rules and to classify the situations from the testing data; conflicts are resolved by a simple voting strategy. The comparison is performed with respect to the following criteria:
1. accuracy of classification,
2. covering rate for new cases (generality),
3. computing time necessary for classifier synthesis, and
4. size of the rule set used for the target concept approximation.

In the layered learning approach, from the training table we create five sub-tables to learn the five basic concepts (see Figure 3): C1: "safe distance from FL during overtaking", C2: "possibility of safe stopping before crossroads", C3: "possibility of going back to the right lane", C4: "safe distance from preceding car", C5: "forcing the right of way". These tables are created using information available from the concept decomposition hierarchy. The concept at the next level is C6: "safe overtaking"; C6 is located above the concepts C1, C2 and C3 in the concept decomposition hierarchy. To approximate the concept C6, we create a table with three conditional attributes.


These attributes describe the fitting degrees of an object to the concepts C1, C2 and C3, respectively. The decision attribute has three values, YES, NO or NULL, corresponding to the cases in which the overtaking made by a car is safe, not safe, or not applicable. The target concept C7: "safe driving" is located at the third level of the concept decomposition hierarchy. The concept C7 is obtained by composition of the concepts C4, C5 and C6. To approximate C7 we also create a decision table with three attributes, representing the fitting degrees of objects to the concepts C4, C5 and C6, respectively. The decision attribute has two possible values, YES or NO, depending on whether a car satisfies the global safety condition or not.

Classification Accuracy. As mentioned before, the decision class "safe driving = YES" is dominating in all training data sets; it takes above 90% of each training set. The sets of training examples belonging to the "NO" class are small relative to the training set size, and searching for an approximation of the "NO" class with high precision and generality is a challenge for learning algorithms. In the experiments we therefore concentrate on the approximation of the "NO" class. In Table 1 we present the classification accuracy of the RS and RS-L classifiers for the first testing model, i.e., training and test sets are disjoint samples chosen from the same simulation data set.

Table 1. Classification accuracy for the first testing model

Testing model I   Total accuracy   Accuracy of YES   Accuracy of NO
                  RS      RS-L     RS      RS-L      RS      RS-L
c10 s100          0.98    0.93     0.99    0.98      0.67    0
c10 s200          0.99    0.99     1       0.99      0.90    1
c10 s300          0.99    0.96     0.99    0.96      0.82    0.81
c10 s400          0.99    0.97     0.99    0.98      0.88    0.85
c10 s500          0.99    0.94     0.99    0.93      0.94    0.96
c20 s500          0.99    0.93     0.99    0.94      0.91    0.91
Average           0.99    0.95     0.99    0.96      0.85    0.75

One can observe that the classification accuracy in testing model I is higher, because the test and training sets are chosen from the same data set. Although the accuracy for the "YES" class is better than for the "NO" class, the accuracy for the "NO" class is still quite satisfactory. In these experiments the standard classifier performs slightly better than the hierarchical classifier. One can also observe that when the training sets reach a sufficient size (over 2500 objects), the accuracies of both classifiers on the "NO" class are comparable.

To verify whether the classifier approximations are of high precision and generality, we use the second testing model, where the training and testing tables are chosen from newly generated simulation data sets. One can observe that the accuracy for the "NO" class decreases strongly; in this case the hierarchical classifier shows much better performance. In Table 2 we present the accuracy of the standard classifier and the hierarchical classifier using the second testing model.


Table 2. Classification accuracy for the second testing model

Testing model II   Total accuracy   Accuracy of YES   Accuracy of NO
                   RS      RS-L     RS      RS-L      RS      RS-L
c10 s100N          0.94    0.97     1       1         0       0
c10 s200N          0.99    0.96     1       0.98      0.75    0.60
c10 s300N          0.99    0.98     1       0.98      0       0.78
c10 s400N          0.96    0.77     0.96    0.77      0.57    0.64
c10 s500N          0.96    0.89     0.99    0.90      0.30    0.80
c20 s500N          0.99    0.89     0.99    0.88      0.44    0.93
Average            0.97    0.91     0.99    0.92      0.34    0.63

Covering Rate. The generality of a classifier is usually evaluated by its ability to recognize unseen objects. In this section we analyze the covering rate of the classifiers for new objects. In Table 3 we present the coverage degrees for the first testing model. One can observe that in this case the coverage degrees of the standard and the hierarchical classifier are comparable.

Table 3. Covering rate for the first testing model

Testing model I   Total coverage   Coverage of YES   Coverage of NO
                  RS      RS-L     RS      RS-L      RS      RS-L
c10 s100          0.97    0.96     0.98    0.96      0.85    1
c10 s200          0.95    0.95     0.96    0.96      0.67    0.80
c10 s300          0.94    0.93     0.97    0.95      0.59    0.55
c10 s400          0.96    0.94     0.96    0.94      0.91    0.87
c10 s500          0.96    0.95     0.97    0.96      0.84    0.86
c20 s500          0.93    0.97     0.94    0.98      0.79    0.92
Average           0.95    0.95     0.96    0.96      0.77    0.83
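The class-wise figures in the tables can be read as follows: a rule-based classifier may leave a test object unrecognized (no rule matches), the covering rate of a class is the fraction of its objects that are recognized at all, and the accuracy is measured on the recognized objects. The sketch below encodes this reading; it is our reconstruction of the measures for illustration, not code or definitions taken from the paper.

# Sketch of class-wise accuracy and covering rate for a classifier that may abstain (return None).
def class_accuracy_and_coverage(examples, predict, cls):
    """examples: list of (object, true_class); predict(x) returns a class or None (unrecognized)."""
    of_class = [(x, y) for x, y in examples if y == cls]
    recognized = [(x, y) for x, y in of_class if predict(x) is not None]
    coverage = len(recognized) / len(of_class) if of_class else 0.0
    correct = sum(1 for x, y in recognized if predict(x) == y)
    accuracy = correct / len(recognized) if recognized else 0.0
    return accuracy, coverage

# Toy test data and a toy classifier (both illustrative).
test = [(1, "NO"), (2, "NO"), (3, "YES"), (4, "NO")]
toy_predict = {1: "NO", 2: None, 3: "YES", 4: "YES"}.get
print(class_accuracy_and_coverage(test, toy_predict, "NO"))   # (0.5, 0.666...)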

We also examined the coverage degrees using the second testing model and obtained a picture similar to that for the accuracy: the coverage rates for both decision classes decrease strongly. Again, the hierarchical classifier turns out to be more stable than the standard classifier. The results are presented in Table 4.

Computing Speed. The computation time necessary for synthesizing a concept approximation is one of the important features of learning algorithms. The quality of a learning approach should be assessed not only by the quality of the resulting classifier; in many real-life situations it is necessary not only to make precise decisions but also to learn classifiers in a short time. The layered learning approach shows a tremendous advantage over the standard learning approach with respect to computation time. In the case of the standard classifier, the computation time is measured as the time required for computing the rule set used for decision class approximation.


Table 4. Covering rate for the second testing model

Testing model II   Total coverage   Coverage of YES   Coverage of NO
                   RS      RS-L     RS      RS-L      RS      RS-L
c10 s100N          0.44    0.72     0.44    0.74      0.50    0.38
c10 s200N          0.72    0.73     0.73    0.74      0.50    0.63
c10 s300N          0.47    0.68     0.49    0.69      0.10    0.44
c10 s400N          0.74    0.90     0.76    0.93      0.23    0.35
c10 s500N          0.72    0.86     0.74    0.88      0.40    0.69
c20 s500N          0.62    0.89     0.65    0.89      0.17    0.86
Average            0.62    0.79     0.64    0.81      0.32    0.55

Table 5. Time for standard and hierarchical classifier generation

Table names   RS        RS-L     Speed up ratio
c10 s100      94 s      2.3 s    40
c10 s200      714 s     6.7 s    106
c10 s300      1450 s    10.6 s   136
c10 s400      2103 s    34.4 s   60
c10 s500      3586 s    38.9 s   92
c20 s500      10209 s   98 s     104
Average                          90

In the case of the hierarchical classifier, the computation time is equal to the total time required for approximating all sub-concepts and the target concept. The experiments were performed on a computer with an AMD Athlon 1.4 GHz processor. One can see in Table 5 that the speed-up ratio of the layered learning approach over the standard one ranges from about 40 to about 130 times.

Description Size. Now we consider the complexity of the concept descriptions. We approximate concepts using sets of decision rules, and the size of a rule set is characterized by the rule lengths and its cardinality. In Table 6 we present the rule lengths and the numbers of decision rules generated by the standard learning approach. One can observe that the rules generated by the standard approach are quite long: they contain above 40 descriptors on average.

Table 6. Rule set size for the standard learning approach

Tables     Rule length   # Rules
c10 s100   34.1          12
c10 s200   39.1          45
c10 s300   44.7          94
c10 s400   42.9          85
c10 s500   47.6          132
c20 s500   60.9          426
Average    44.9


Table 7. Description length: C1, C2, C3 for the hierarchical learning approach

           Concept C1              Concept C2              Concept C3
Tables     Ave. rule l.  # Rules   Ave. rule l.  # Rules   Ave. rule l.  # Rules
c10 s100   5.0           10        5.3           22        4.5           22
c10 s200   5.1           16        4.5           27        4.6           41
c10 s300   5.2           18        6.6           61        4.1           78
c10 s400   7.3           47        7.2           131       4.9           71
c10 s500   5.6           21        7.5           101       4.7           87
c20 s500   6.5           255       7.7           1107      5.8           249
Average    5.8                     6.5                     4.8

Table 8. Description length: C4, C5 for the hierarchical learning approach

           Concept C4              Concept C5
Tables     Rule length   # Rules   Rule length   # Rules
c10 s100   4.5           22        1.0           2
c10 s200   4.6           42        4.7           14
c10 s300   5.2           90        3.4           9
c10 s400   6.0           98        4.7           16
c10 s500   5.8           146       4.9           15
c20 s500   5.4           554       5.3           25
Average    5.2                     4.0

Table 9. Description length: C6, C7 for the hierarchical learning approach

           Concept C6              Concept C7
Tables     Rule length   # Rules   Rule length   # Rules
c10 s100   2.2           6         3.5           8
c10 s200   1.3           3         3.7           13
c10 s300   2.4           7         3.6           18
c10 s400   2.5           11        3.7           27
c10 s500   2.6           8         3.7           30
c20 s500   2.9           16        3.8           35
Average    2.3                     3.7

The sizes of the rule sets generated by the layered learning approach are presented in Tables 7, 8 and 9. One can notice that the rules approximating sub-concepts are short: the average rule length is from 4 to 6.5 for the basic concepts and from 2 to 3.7 for the super-concepts. Therefore the rules generated by the layered learning approach are more understandable and easier to interpret than the rules induced by the standard learning approach. Two concepts, C2 and C4, are more complex than the others: the rule set induced for C2 constitutes about 28%, and the rule set induced for C4 above 27%, of the total number of rules generated for all seven concepts in the road traffic problem.

6  Conclusion

We presented a method for concept synthesis based on the layered learning approach. Unlike the traditional learning approach, in the layered learning approach the concept approximations are induced not only from the available data sets but also from the expert's domain knowledge. In this paper we assumed that the knowledge is represented by a concept dependency hierarchy. The layered learning approach proved to be promising for complex concept synthesis. The experimental results with the road traffic simulation show the advantages of this new approach in comparison with the standard learning approach. The main advantages of the layered learning approach can be summarized as follows:

1. High precision of concept approximation.
2. High generality of concept approximation.
3. Simplicity of concept description.
4. High computational speed.
5. The possibility of localizing sub-concepts that are difficult to approximate. This is important information, because it specifies the task on which we should concentrate in order to improve the quality of the target concept approximation.

In the future we plan to investigate more advanced approaches to concept composition. One interesting possibility is to use, in the synthesis of compound concepts, patterns defined by rough approximations of concepts obtained from different kinds of classifiers. We would also like to develop methods for the synthesis of rough-fuzzy classifiers (see Section 4.1). In particular, the method mentioned in Section 4.1, based on rough-fuzzy classifiers, introduces more flexibility into such composition, because a richer class of patterns introduced by the different layers of rough-fuzzy classifiers can lead to improved classifier quality [18]. On the other hand, such a process is more complex, and efficient heuristics for the synthesis of rough-fuzzy classifiers should be developed. We also plan to apply the layered learning approach to real-life problems, especially when domain knowledge is specified in natural language. This can create further links with the computing with words paradigm [27, 28, 12]. This is in particular linked with the rough mereological approach (see, e.g., [15, 17]) and with the rough set approach to approximate reasoning in distributed environments [20, 21], in particular with methods of information system composition [20, 2].

Acknowledgements

The research has been partially supported by the grant 3T11C00226 from the Ministry of Scientific Research and Information Technology of the Republic of Poland.

References

1. Aha, D.W.: The omnipresence of case-based reasoning in science and application. Knowledge-Based Systems 11(5-6) (1998) 261–273


2. Barwise, J., Seligman, J., eds.: Information Flow: The Logic of Distributed Systems. Volume 44 of Tracts in Theoretical Computer Scienc. Cambridge University Press, Cambridge, UK (1997) 3. Bazan, J.G.: A comparison of dynamic and non-dynamic rough set methods for extracting laws from decision tables. In Polkowski, L., Skowron, A., eds.: Rough Sets in Knowledge Discovery 1: Methodology and Applications. Physica-Verlag, Heidelberg, Germany (1998) 321–365 4. Bazan, J.G., Szczuka, M.: RSES and RSESlib - a collection of tools for rough set computations. In Ziarko, W., Yao, Y., eds.: Second International Conference on Rough Sets and Current Trends in Computing RSCTC. LNAI 2005. Banﬀ, Canada, Springer-Verlag (2000) 106–113 5. Bazan, J., Nguyen, H.S., Skowron, A., Szczuka, M.: A view on rough set concept approximation. In Wang, G., Liu, Q., Yao, Y., Skowron, A., eds.: Proceedings of the Ninth International Conference on Rough Sets, Fuzzy Sets, Data Mining and Granular Computing (RSFDGrC’2003),Chongqing, China. LNAI 2639. Heidelberg, Germany, Springer-Verlag (2003) 181–188 6. Cover, T.M. and Hart, P.E.: Nearest neighbor pattern classiﬁcation. IEEE Transactions on Information Theory, 13 (1967) 21-27. 7. Friedman, J., Hastie, T., Tibshirani, R.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag, Heidelberg, Germany (2001) 8. Grzymala-Busse, J.: A new version of the rule induction system lers. Fundamenta Informaticae 31(1) (1997) 27–39 9. Komorowski, J., Pawlak, Z., Polkowski, L., Skowron, A.: Rough sets: a tutorial. In Pal, S.K., Skowron, A., eds.: Rough Fuzzy Hybridization: A New Trend in Decision-Making. Springer-Verlag, Singapore (1999) 3–98 ˙ 10. Kloesgen, W., Zytkow, J., eds.: Handbook of Knowledge Discovery and Data Mining. Oxford University Press, Oxford (2002) 11. Mitchell, T.: Machine Learning. Mc Graw Hill (1998) 12. Pal, S.K., Polkowski, L., Skowron, A., eds.: Rough-Neural Computing: Techniques for Computing with Words. Cognitive Technologies. Springer-Verlag, Heidelberg, Germany (2003) 13. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Volume 9 of System Theory, Knowledge Engineering and Problem Solving. Kluwer Academic Publishers, Dordrecht, The Netherlands (1991) 14. Poggio, T., Smale, S.: The mathematics of learning: Dealing with data. Notices of the AMS 50 (2003) 537–544 15. Polkowski, L., Skowron, A.: Rough mereology: A new paradigm for approximate reasoning. International Journal of Approximate Reasoning 15 (1996) 333–365 16. Polkowski, L., Skowron, A.: Rough mereological calculi of granules: A rough set approach to computation. Computational Intelligence 17 (2001) 472–492 17. Polkowski, L., Skowron, A.: Towards adaptive calculus of granules. In Zadeh, L.A., Kacprzyk, J., eds.: Computing with Words in Information/Intelligent Systems, Heidelberg, Germany, Physica-Verlag (1999) 201–227 18. Skowron, A., Stepaniuk, J.: Information granules and rough-neural computing. [12] 43–84 19. Skowron, A., Stepaniuk, J.: Information granules: Towards foundations of granular computing. International Journal of Intelligent Systems 16 (2001) 57–86 20. Skowron, A., Stepaniuk, J.: Information granule decomposition. Fundamenta Informaticae 47(3-4) (2001) 337–350


21. Skowron, A.: Approximate reasoning by agents in distributed environments. In Zhong, N., Liu, J., Ohsuga, S., Bradshaw, J., eds.: Intelligent Agent Technology Research and Development: Proceedings of the 2nd Asia-Paciﬁc Conference on Intelligent Agent Technology IAT01, Maebashi, Japan, October 23-26. World Scientiﬁc, Singapore (2001) 28–39 22. Skowron, A.: Approximation spaces in rough neurocomputing. In Inuiguchi, M., Tsumoto, S., Hirano, S., eds.: Rough Set Theory and Granular Computing. Volume 125 of Studies in Fuzziness and Soft Computing. Springer-Verlag, Heidelberg, Germany (2003) 13–22 23. Skowron, A., Rauszer, C.: The discernibility matrices and functions in information systems. In Slowi´ nski, R., ed.: Intelligent Decision Support - Handbook of Applications and Advances of the Rough Sets Theory. Volume 11 of D: System Theory, Knowledge Engineering and Problem Solving. Kluwer Academic Publishers, Dordrecht, Netherlands (1992) 331–362 24. Skowron, A., Szczuka, M.: Approximate reasoning schemes: Classiﬁers for computing with words. In: Proceedings of SMPS 2002. Advances in Soft Computing, Heidelberg, Canada, Springer-Verlag (2002) 338–345 25. Stone, P.: Layered Learning in Multi-Agent Systems: A Winning Approach to Robotic Soccer. The MIT Press, Cambridge, MA (2000) 26. Wr´ oblewski, J.: Covering with reducts - a fast algorithm for rule generation. In Polkowski, L., Skowron, A., eds.: Proceedings of the First International Conference on Rough Sets and Current Trends in Computing (RSCTC’98), Warsaw, Poland. LNAI 1424, Heidelberg, Germany, Springer-Verlag (1998) 402–407 27. Zadeh, L.A.: Fuzzy logic = computing with words. IEEE Transactions on Fuzzy Systems 4 (1996) 103–111 28. Zadeh, L.A.: A new direction in AI: Toward a computational theory of perceptions. AI Magazine 22 (2001) 73–84

Basic Algorithms and Tools for Rough Non-deterministic Information Analysis

Hiroshi Sakai and Akimichi Okuma

Department of Computer Engineering, Kyushu Institute of Technology, Tobata, Kitakyushu 804, Japan
[email protected]

Abstract. Rough non-deterministic information analysis is a framework for handling, on computers, the rough sets based concepts which are defined not only in DISs (Deterministic Information Systems) but also in NISs (Non-deterministic Information Systems). NISs were proposed for dealing with information incompleteness in DISs. In this paper, two modalities, i.e., the certainty and the possibility, are defined for each concept, such as the definability of a set, the consistency of an object, data dependency, rule generation, reduction of attributes, and criteria of rules (support, accuracy and coverage). Then, algorithms for computing the two modalities are investigated. An important problem is how to compute the two modalities, which depend upon all derived DISs. A simple method, in which the two modalities are sequentially computed over all derived DISs, is not suitable, because the number of all derived DISs increases in exponential order. This problem is uniformly solved by means of applying either inf and sup information or possible equivalence relations. An information analysis tool for NISs is also presented.

1  Introduction

Rough set theory offers a new mathematical approach to vagueness and uncertainty, and the rough sets based concepts have been recognized to be very useful [1,2,3,4]. This theory usually handles tables with deterministic information, which we call Deterministic Information Systems (DISs). Many applications of this theory to data mining, rule generation, machine learning and knowledge discovery have been investigated [5–11]. Non-deterministic Information Systems (NISs) and Incomplete Information Systems have been proposed for handling information incompleteness in DISs, such as null values, unknown values and missing values [12–16]. For any NIS, we usually suppose that there exists a DIS with the unknown real information in the set of all derived DISs. Let DIS_real denote this deterministic information system derived from the NIS. Of course, it is impossible to know DIS_real itself without additional information. However, if a formula α holds in every derived DIS from a NIS, then α also holds in DIS_real; such a formula α is not influenced by the information incompleteness in the NIS.


If a formula α holds in some derived DISs from a NIS, there exists a possibility that α holds in DIS_real. We call the former the certainty (of the formula α for DIS_real) and the latter the possibility, respectively. In NISs, these two modalities for DIS_real have been employed, and several works on logic in NISs have been studied [12,14,15,17]. Very little work deals with algorithms for handling NISs on computers. In [15,16], Lipski presented a question-answering system besides an axiomatization of logic. In [18,19], Grzymala-Busse surveyed the unknown attribute values and studied learning from examples with unknown attribute values. In [20,21,22], Kryszkiewicz investigated rules in incomplete information systems. These are the most important works on handling information incompleteness in DISs on computers. This paper follows these two modalities for DIS_real and focuses on the following issues:
(1) The definability of a set in NISs and an algorithm for handling it on computers.
(2) The consistency of an object in NISs and an algorithm for handling it on computers.
(3) Data dependency in NISs and an algorithm for handling it on computers.
(4) Rules in NISs and an algorithm for handling them on computers.
(5) Reduction of attributes in NISs and an algorithm for handling it on computers.
An important problem is how to compute the two modalities, which depend upon all derived DISs from a NIS. A simple method, in which every definition is sequentially computed in all derived DISs from a NIS, is not suitable, because the number of derived DISs from a NIS increases in exponential order. In the subsequent sections, this problem is uniformly solved by means of applying either inf and sup information or possible equivalence relations. In the Preliminary section, definitions in DISs and rough sets based concepts are surveyed. Then, algorithms for the five issues are examined in sequence. Tool programs for these issues have also been implemented; they are presented in the appendices.

2  Preliminary

This section surveys some definitions in DISs and connects these definitions with equivalence relations.

2.1  Some Definitions in DISs

A Deterministic Information System (DIS) is a quadruplet (OB, AT, {VAL_A | A ∈ AT}, f), where OB is a finite set whose elements are called objects, AT is a finite set whose elements are called attributes, VAL_A is a finite set whose elements are called attribute values, and f is a mapping f : OB × AT → ∪_{A∈AT} VAL_A called a classification function. For ATR = {A1, ..., An} ⊆ AT, we call (f(x, A1), ..., f(x, An)) a tuple (for ATR) of x ∈ OB. If f(x, A) = f(y, A) holds for every A ∈ ATR ⊆ AT, we say there is a relation between x and y for ATR. This relation is an equivalence relation over OB.


Let eq(ATR) denote this equivalence relation, and let [x]_ATR ∈ eq(ATR) denote the equivalence class {y ∈ OB | f(y, A) = f(x, A) for every A ∈ ATR}. Now, let us present some rough sets based concepts defined in DISs [1,3].

(D-i) The Definability of a Set: If a set X ⊆ OB is the union of some equivalence classes in eq(ATR), we say X is definable (for ATR) in the DIS. Otherwise, we say X is rough (for ATR) in the DIS.

(D-ii) The Consistency of an Object: Let us consider two disjoint sets CON ⊆ AT, which we call condition attributes, and DEC ⊆ AT, which we call decision attributes. An object x ∈ OB is consistent (with any other object y ∈ OB in the relation from CON to DEC) if, for every y ∈ OB, f(x, A) = f(y, A) for every A ∈ CON implies f(x, A) = f(y, A) for every A ∈ DEC.

(D-iii) Dependencies among Attributes: We call the ratio deg(CON, DEC) = |{x ∈ OB : x is consistent in the relation from CON to DEC}| / |OB| the degree of dependency from CON to DEC. Clearly, deg(CON, DEC) = 1 holds if and only if every object x ∈ OB is consistent.

(D-iv) Rules and Criteria (Support, Accuracy and Coverage): For any object x ∈ OB, let imp(x, CON, DEC) denote a formula called an implication: ∧_{A∈CON} [A, f(x, A)] ⇒ ∧_{A∈DEC} [A, f(x, A)], where a formula [A, f(x, A)] means that f(x, A) is the value of the attribute A. This is called a descriptor in [15,22]. In most work on rule generation, a rule is defined as an implication τ: imp(x, CON, DEC) satisfying some constraints. A constraint such that deg(CON, DEC) = 1 holds from CON to DEC has been proposed in [1]. Another familiar constraint is defined by the following three values: support(τ) = |[x]_CON ∩ [x]_DEC| / |OB|, accuracy(τ) = |[x]_CON ∩ [x]_DEC| / |[x]_CON| and coverage(τ) = |[x]_CON ∩ [x]_DEC| / |[x]_DEC| [9].

(D-v) Reduction of Condition Attributes in Rules: Let us consider an implication imp(x, CON, DEC) such that x is consistent in the relation from CON to DEC. An attribute A ∈ CON is dispensable in CON if x is consistent in the relation from CON − {A} to DEC.

These are the definitions of the rough sets based concepts in DISs. Several tools for DISs have been realized according to these definitions [5,6,7,8,9,10,11].
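The definitions D-i to D-v all rest on the equivalence relation eq(ATR). As an added illustration (the toy DIS below is not from the paper), the equivalence classes [x]_ATR can be computed as follows in Python.

# Sketch: equivalence classes of eq(ATR) in a small deterministic information system.
def equivalence_classes(objects, f, ATR):
    """Group objects by their tuples (f(x, A1), ..., f(x, An)) for ATR = {A1, ..., An}."""
    classes = {}
    for x in objects:
        key = tuple(f[x][A] for A in ATR)
        classes.setdefault(key, set()).add(x)
    return list(classes.values())

# Toy DIS with attributes A and B (illustrative values).
f = {1: {"A": 1, "B": 2}, 2: {"A": 1, "B": 2}, 3: {"A": 2, "B": 2}}
print(equivalence_classes([1, 2, 3], f, ["A", "B"]))   # [{1, 2}, {3}]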

2.2  Definitions from D-i to D-v and Equivalence Relations over OB

Rough set theory makes use of equivalence relations for solving problems, and each definition from D-i to D-v can be handled by means of equivalence relations. As for the definability of a set X ⊆ OB, X is definable (for ATR) in a DIS if ∪_{x∈K} [x]_ATR = X holds for some set K ⊆ X ⊆ OB. From this definition it is possible to derive the following necessary and sufficient condition: a set X is definable if and only if ∪_{x∈X} [x]_ATR = X holds. Now, let us show the most important proposition, which connects the two equivalence classes [x]_CON and [x]_DEC with the consistency of x.


Proposition 1 [1]. For any DIS, (1) and (2) below are equivalent.
(1) An object x ∈ OB is consistent in the relation from CON to DEC.
(2) [x]_CON ⊆ [x]_DEC.

According to Proposition 1, the degree of dependency from CON to DEC is equal to |{x ∈ OB : [x]_CON ⊆ [x]_DEC}| / |OB|. The criteria support, accuracy and coverage are likewise defined by the equivalence classes [x]_CON and [x]_DEC. As for the reduction of attributes in rules, let us consider an implication imp(x, CON, DEC) such that x is consistent in the relation from CON to DEC. Here, an attribute A ∈ CON is dispensable if [x]_{CON−{A}} ⊆ [x]_DEC holds. In this way, the definitions from D-i to D-v are uniformly computed by means of applying equivalence relations in DISs.
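Proposition 1 turns the consistency test into a set inclusion between equivalence classes. Building on the sketch above, the following fragment (again with made-up data) computes the degree of dependency deg(CON, DEC) and the criteria support, accuracy and coverage of the implication imp(x, CON, DEC).

# Sketch: consistency via [x]_CON ⊆ [x]_DEC (Proposition 1), dependency degree and rule criteria.
def eq_class(x, objects, f, ATR):
    return {y for y in objects if all(f[y][A] == f[x][A] for A in ATR)}

def degree_of_dependency(objects, f, CON, DEC):
    consistent = [x for x in objects
                  if eq_class(x, objects, f, CON) <= eq_class(x, objects, f, DEC)]
    return len(consistent) / len(objects)

def criteria(x, objects, f, CON, DEC):
    c, d = eq_class(x, objects, f, CON), eq_class(x, objects, f, DEC)
    both = c & d
    # returns (support, accuracy, coverage) of imp(x, CON, DEC)
    return len(both) / len(objects), len(both) / len(c), len(both) / len(d)

# Toy decision table (illustrative values).
f = {1: {"A": 1, "dec": 1}, 2: {"A": 1, "dec": 2}, 3: {"A": 2, "dec": 2}}
objs = [1, 2, 3]
print(degree_of_dependency(objs, f, ["A"], ["dec"]))   # 1/3: only object 3 is consistent
print(criteria(3, objs, f, ["A"], ["dec"]))            # (0.333..., 1.0, 0.5)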

3  A Framework of Rough Non-deterministic Information Analysis

This section gives definitions in NISs and two modalities due to the information incompleteness in NISs. Then, a framework of rough non-deterministic information analysis is proposed.

3.1  A Proposal of Rough Non-deterministic Information Analysis

A Non-deterministic Information System (NIS) is also a quadruplet (OB, AT, {VAL_A | A ∈ AT}, g), where g : OB × AT → P(∪_{A∈AT} VAL_A) (the power set of ∪_{A∈AT} VAL_A). Every set g(x, A) is interpreted as follows: the real value is in this set, but it is not known which one [13,15,21]. In particular, if the real value is not known at all, g(x, A) is equal to VAL_A. This is called the null value interpretation [12].

Definition 1. Let us consider a NIS = (OB, AT, {VAL_A | A ∈ AT}, g), a set ATR ⊆ AT and a mapping h : OB × ATR → ∪_{A∈ATR} VAL_A such that h(x, A) ∈ g(x, A). We call the DIS = (OB, ATR, {VAL_A | A ∈ ATR}, h) a derived DIS (for ATR) from the NIS.

Example 1. Let us consider NIS_1 in Table 1, which was produced automatically by means of a random number program. There are 2176782336 (= 2^12 × 3^12) derived DISs for ATR = {A, B, C, D, E, F}. For ATR = {A, B, C}, there are 10368 (= 2^7 × 3^4) derived DISs.

Definition 2. Let us consider a NIS. Due to the interpretation of g(x, A), there exists a derived DIS whose attribute values are the unknown real values. Let DIS_real denote this derived DIS with the unknown real attribute values.

Of course, it is impossible to know DIS_real without additional information. However, some information about DIS_real may still be derived. Let us consider the relation from CON = {A, B} to DEC = {C} and object 2 in NIS_1. The tuple of object 2 is either (2,2,2) or (4,2,2). In both cases, object 2 is consistent. Thus, it is possible to conclude that object 2 is consistent in DIS_real, too.


Table 1. A Table of NIS_1

OB   A          B          C        D          E          F
1    {3}        {1,3,4}    {3}      {2}        {5}        {5}
2    {2,4}      {2}        {2}      {3,4}      {1,3,4}    {4}
3    {1,2}      {2,4,5}    {2}      {3}        {4,5}      {5}
4    {1,5}      {5}        {2,4}    {2}        {1,4,5}    {5}
5    {3,4}      {4}        {3}      {1,2,3}    {1}        {2,5}
6    {3,5}      {4}        {1}      {2,3,5}    {5}        {2,3,4}
7    {1,5}      {4}        {5}      {1,4}      {3,5}      {1}
8    {4}        {2,4,5}    {2}      {1,2,3}    {2}        {1,2,5}
9    {2}        {5}        {3}      {5}        {4}        {2}
10   {2,3,5}    {1}        {2}      {3}        {1}        {1,2,3}

(Certainty). If a formula α holds in every derived DIS from a NIS, α also holds in DIS^real. In this case, we say α certainly holds in DIS^real.
(Possibility). If a formula α holds in some derived DISs from a NIS, there exists a possibility that α holds in DIS^real. In this case, we say α possibly holds in DIS^real.
According to these two modalities for DIS^real, it is possible to extend the definitions D-i to D-v in DISs to definitions in NISs. In the subsequent sections, we sequentially give definitions N-i to N-v in NISs. From now on, we call information analysis that depends upon definitions N-i to N-v and other extended definitions in NISs Rough Non-deterministic Information Analysis (RNIA).
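For illustration only, the two modalities can be checked naively by enumerating derived DISs. The Python sketch below (not the authors' Prolog/C tools) assumes a NIS is a dict mapping each object to a dict of attribute-value sets, e.g. nis = {1: {'A': {3}, 'B': {1, 3, 4}}, ...}; it is feasible only for very small NISs, which is exactly the problem addressed in Section 3.3.

    from itertools import product

    def number_of_derived_diss(nis, attrs):
        """The product of |g(x, A)| over all objects and attributes in ATR."""
        n = 1
        for x in nis:
            for a in attrs:
                n *= len(nis[x][a])
        return n

    def derived_diss(nis, attrs):
        """Generate every derived DIS (for ATR) from the NIS."""
        objs = sorted(nis)
        cells = [(x, a) for x in objs for a in attrs]
        for values in product(*(sorted(nis[x][a]) for x, a in cells)):
            dis = {x: {} for x in objs}
            for (x, a), v in zip(cells, values):
                dis[x][a] = v
            yield dis

    def modality(nis, attrs, holds):
        """'certain' if the predicate holds in every derived DIS,
        'possible' if it holds in at least one, 'never' otherwise."""
        results = [holds(dis) for dis in derived_diss(nis, attrs)]
        if all(results):
            return 'certain'
        return 'possible' if any(results) else 'never'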

3.2 Incomplete Information Systems and NISs

Incomplete information systems in [21,22] and NISs may seem to be the same, but there are some distinct differences. Example 2 clarifies the difference between incomplete information systems and NISs.

Example 2. Let us consider the incomplete information system in Table 2.

Table 2. A Table of an Incomplete Information System

OB   A   B
1    *   2
2    3   3

Table 3. A Table of a NIS

OB   A       B
1    {1,2}   2
2    3       3

Here, let us suppose VAL_A = {1, 2, 3}, CON = {A} and DEC = {B}. The attribute value of object 1 is not definite, and the symbol ∗ is employed for describing it. In this case, the null value interpretation is applied to this ∗, and 3 ∈ VAL_A may occur instead of ∗. Therefore, object 2 is not consistent in this case. According to the definition in [22], the formula [A,3] ⇒ [B,3] is a possible rule. Now, let us consider the NIS in Table 3. The attribute value of object 1 is not definite, either. However, in this NIS, object 2 is consistent in every derived DIS. So, the formula [A,3] ⇒ [B,3] is a certain rule according to the definition in [22]. Thus, the meaning of the formula [A,3] ⇒ [B,3] in Table 2 is different from that in Table 3. In incomplete information systems, each indefinite value is uniformly identified with the unknown value ∗. However, in NISs, each indefinite value is identified with a subset of VAL_A (A ∈ AT). Clearly, NISs are more informative than incomplete information systems.

3.3 The Core Problem for RNIA and the Purpose of This Work

Definitions N-i to N-v, which are given sequentially in the following sections, depend upon every derived DIS from a NIS. Therefore, it is necessary to compute definitions D-i to D-v in every derived DIS. The number of all derived DISs, which is the product Π_{x∈OB,A∈ATR}|g(x,A)| for ATR ⊆ AT, grows exponentially. Even though each definition from D-i to D-iv can be solved in time polynomial in the input data size [23], each definition from N-i to N-iv depends upon all derived DISs. The complexity of finding a minimal reduct in a DIS is also proved to be NP-hard [3]. Namely, for handling NISs with a large number of derived DISs, the execution time may become prohibitive without effective algorithms. This is the core problem for RNIA. This paper proposes the application of inf and sup information and possible equivalence relations, which are defined in the next subsection, to solving the above core problem. In Section 2.2, the connection between definitions D-i to D-v and equivalence relations was shown. Analogously, we consider the connection between definitions N-i to N-v and possible equivalence relations [24,25].

3.4 Basic Definitions for RNIA

Now, we give some basic definitions, which appear throughout this paper.

Definition 3. Let us consider a derived DIS (for ATR) from a NIS. We call an equivalence relation eq(ATR) in the DIS a possible equivalence relation (pe-relation) in the NIS. We also call every element of eq(ATR) a possible equivalence class (pe-class) in the NIS.


For ATR = {C} in NIS_1, there exist two derived DISs and two pe-relations, i.e., {{1,5,9}, {2,3,4,8,10}, {6}, {7}} and {{1,5,9}, {2,3,8,10}, {4}, {6}, {7}}. Every element of these two pe-relations is a pe-class for ATR = {C} in NIS_1.

Definition 4. Let us consider a NIS and a set ATR = {A_1, ..., A_n} ⊆ AT. For any x ∈ OB, let PT(x, ATR) denote the Cartesian product g(x, A_1) × ... × g(x, A_n). We call every element of PT(x, ATR) a possible tuple (for ATR) from x. For a possible tuple ζ = (ζ_1, ..., ζ_n) ∈ PT(x, ATR), let [ATR, ζ] denote the formula ∧_{1≤i≤n}[A_i, ζ_i]. Furthermore, for disjoint sets CON, DEC ⊆ AT and two possible tuples ζ = (ζ_1, ..., ζ_n) ∈ PT(x, CON) and η = (η_1, ..., η_m) ∈ PT(x, DEC), let (ζ, η) denote the possible tuple (ζ_1, ..., ζ_n, η_1, ..., η_m) ∈ PT(x, CON ∪ DEC).

Definition 5. Let us consider a NIS and a set ATR ⊆ AT. For any ζ ∈ PT(x, ATR), let DD(x, ζ, ATR) denote the set {ϕ | ϕ is a derived DIS for ATR such that the tuple of x in ϕ is ζ}. Furthermore, in this DD(x, ζ, ATR), we define (1) and (2) below.
(1) inf(x, ζ, ATR) = {y ∈ OB | PT(y, ATR) = {ζ}},
(2) sup(x, ζ, ATR) = {y ∈ OB | ζ ∈ PT(y, ATR)}.

For object 1 and ATR = {A, B} in NIS_1, PT(1, {A, B}) = {(3,1), (3,3), (3,4)} holds. The possible tuple (3,1) ∈ PT(1, {A, B}) appears in 1/3 of the derived DISs for ATR = {A, B}. The number of elements of DD(1, (3,1), {A, B}) is 1728 (= 2^6 × 3^3). In this set DD(1, (3,1), {A, B}), inf(1, (3,1), {A, B}) = {1} and sup(1, (3,1), {A, B}) = {1, 10} hold.

The inf and sup sets of Definition 5 are the key information for RNIA, and each algorithm in the following depends upon these two sets. The set sup is semantically equal to a set defined by the similarity relation SIM in [20,21]. In [20,21], some theorems are presented based on the relation SIM, and our theoretical results are closely related to those theorems. However, the set inf leads to new properties, which hold only in NISs. Now, let us consider the relation between a pe-class [x]_ATR and the two sets inf and sup. In every DIS, PT(x, ATR) is a singleton set, so [x]_ATR = inf(x, ζ, ATR) = sup(x, ζ, ATR) holds. However, in a NIS, [x]_ATR depends upon the derived DIS, and {x} ⊆ inf(x, ζ, ATR) ⊆ [x]_ATR ⊆ sup(x, ζ, ATR) holds. Proposition 2 below connects a pe-class [x]_ATR with inf(x, ζ, ATR) and sup(x, ζ, ATR).

Proposition 2 [25]. For a NIS, an object x, ATR ⊆ AT and ζ ∈ PT(x, ATR), conditions (1) and (2) in the following are equivalent.
(1) X is an equivalence class [x]_ATR in some ϕ ∈ DD(x, ζ, ATR).
(2) inf(x, ζ, ATR) ⊆ X ⊆ sup(x, ζ, ATR).
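Definition 5 translates directly into set computations. The sketch below continues the illustrative Python representation used above (a NIS as a dict of attribute-value sets); note that inf_set reads Definition 5 inside DD(x, ζ, ATR), so x itself always belongs to it, matching the inclusion {x} ⊆ inf(x, ζ, ATR) stated above. The helper names are assumptions of this sketch.

    from itertools import product

    def possible_tuples(nis, attrs, x):
        """PT(x, ATR): the Cartesian product of the value sets of x."""
        return set(product(*(sorted(nis[x][a]) for a in attrs)))

    def inf_set(nis, attrs, x, zeta):
        """inf(x, zeta, ATR), read inside DD(x, zeta, ATR): x itself plus
        every object whose only possible tuple for ATR is zeta."""
        return {x} | {y for y in nis
                      if possible_tuples(nis, attrs, y) == {zeta}}

    def sup_set(nis, attrs, x, zeta):
        """sup(x, zeta, ATR): every object for which zeta is possible."""
        return {y for y in nis if zeta in possible_tuples(nis, attrs, y)}

    # With nis1 holding Table 1 in this representation:
    # possible_tuples(nis1, ('A', 'B'), 1) == {(3, 1), (3, 3), (3, 4)},
    # inf_set(nis1, ('A', 'B'), 1, (3, 1)) == {1},
    # sup_set(nis1, ('A', 'B'), 1, (3, 1)) == {1, 10}.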

4 Algorithms and Tool Programs for the Definability of a Set in NISs

This section proposes algorithms and tool programs for the deﬁnability of a set. It is possible to obtain distinct pe-relations as a side eﬀect of an algorithm. An algorithm for merging pe-relations is also proposed.

4.1 An Algorithm for Checking the Definability of a Set in NISs

The definability of a set in NISs is defined, and an algorithm is proposed.

Definition 6 (N-i. The Definability of a Set). We say X ⊆ OB is certainly definable for ATR ⊆ AT in DIS^real if X is definable (for ATR) in every derived DIS. We say X ⊆ OB is possibly definable for ATR ⊆ AT in DIS^real if X is definable (for ATR) in some derived DISs.

In a DIS, it is enough to check the formula ∪_{x∈X}[x]_ATR = X for the definability of X ⊆ OB. However, in a NIS, [x]_ATR depends upon the derived DIS, and inf(x, ζ, ATR) ⊆ [x]_ATR ⊆ sup(x, ζ, ATR) holds. Algorithm 1 below checks the formula ∪_{x∈X}[x]_ATR = X according to these inclusion relations, and finds a subset of a pe-relation which makes the set X definable.

Algorithm 1.
Input: A NIS, a set ATR ⊆ AT and a set X ⊆ OB.
Output: The definability of the set X for ATR.
(1) X* = X, eq = ∅, count = 0 and total = Π_{x∈X,A∈ATR}|g(x,A)|.
(2) For any x ∈ X*, find [x]_ATR satisfying constraints (CL-1) and (CL-2).
(CL-1) [x]_ATR ⊆ X*, (CL-2) eq ∪ {[x]_ATR} is a subset of a pe-relation.
(2-1) If there is such a set [x]_ATR, set eq = eq ∪ {[x]_ATR} and X* = X* − [x]_ATR. If X* ≠ ∅, go to (2). If X* = ∅, X is definable in a derived DIS; set count = count + 1 and backtrack.
(2-2) If there is no such [x]_ATR, backtrack.
(3) After finishing the search, X is certainly definable for ATR in DIS^real if count = total, and X is possibly definable for ATR in DIS^real if count ≥ 1.

Algorithm 1 tries to find a set of pe-classes which satisfy constraints (CL-1) and (CL-2). Whenever X* = ∅ holds in Algorithm 1, a subset of a pe-relation is stored in the variable eq. At the same time, a derived DIS (restricted to the set X) from the NIS is also detected [24,25]. Because X = ∪_{K∈eq}K holds for this eq, X is definable in this detected DIS. In order to count the cases in which X* = ∅, the variable count is employed. At the end of the execution, if count is equal to the number of derived DISs (restricted to the set X), it is possible to conclude that X is certainly definable. The constraints (CL-1) and (CL-2) keep this search correct. For example, in Table 1, inf(1, (3), {A}) = {1} and sup(1, (3), {A}) = {1, 5, 6, 10} hold. So, {1} ⊆ [1]_{A} ⊆ {1, 5, 6, 10} holds. Let us suppose [1]_{A} = {1, 5, 10}. Since 6 ∉ [1]_{A} holds in this case, the tuple from object 6 is not (3). In a branch with [1]_{A} = {1, 5, 10}, the tuple from object 6 is implicitly fixed to (5) ∈ PT(6, {A}) = {(3), (5)}. The details of (CL-1) and (CL-2) and an illustrative example based on a previous version of Algorithm 1 are presented in [25]. Algorithm 1 is a solution for handling definition N-i, and it is extended to Algorithm 2 in the subsequent sections. A real execution of a tool, which simulates Algorithm 1, is shown in Appendix 1.
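For comparison, Definition 6 itself can be checked by brute force with the helpers sketched earlier (derived_diss from Section 3.1 and is_definable from Section 2.2). This naive version is only usable for very small NISs and is not the backtracking search of Algorithm 1; it is included purely as a reference point.

    def definability(nis, attrs, X):
        """N-i by brute force: test D-i in every derived DIS for ATR."""
        X = set(X)
        verdicts = [is_definable(dis, attrs, X)
                    for dis in derived_diss(nis, attrs)]
        if all(verdicts):
            return 'certainly definable'
        return 'possibly definable' if any(verdicts) else 'not definable'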

4.2 The Definability of a Set and Pe-relations in NISs

In Algorithm 1, let X be OB. Since every pe-relation is an equivalence relation over OB, OB is definable in every derived DIS. Thus, OB is certainly definable in DIS^real. In Algorithm 1, a pe-relation is stored in the variable eq whenever X* = ∅ is derived. In this way, it is possible to obtain all pe-relations. However, in this case the number of search branches with X* = ∅ is equal to the number of all derived DISs. Therefore, it is hard to apply Algorithm 1 directly to NISs with a large number of derived DISs. We solve this problem by means of applying Proposition 3 below, which shows us a way to merge equivalence relations.

Proposition 3 [1]. Let eq(A) and eq(B) be equivalence relations for A, B ⊆ AT in a DIS. The equivalence relation eq(A ∪ B) is {M ⊆ OB | M = [x]_A ∩ [x]_B for [x]_A ∈ eq(A) and [x]_B ∈ eq(B), x ∈ OB}.
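An illustrative Python sketch of this merging operation, which the revised algorithm of Section 4.4 applies repeatedly; the representation of an equivalence relation as a list of its classes (disjoint sets of objects) and the helper names are assumptions of the sketch, not part of the authors' tools.

    def merge_eq(eq_a, eq_b):
        """Proposition 3: the classes of eq(A ∪ B) are the non-empty
        pairwise intersections of the classes of eq(A) and eq(B)."""
        return [ca & cb for ca in eq_a for cb in eq_b if ca & cb]

    def merge_pe_relations(pe_rel_a, pe_rel_b):
        """Merge two families of pe-relations, keeping only distinct results
        (the property exploited in Section 4.3)."""
        seen, merged = set(), []
        for pe_a in pe_rel_a:
            for pe_b in pe_rel_b:
                pe = merge_eq(pe_a, pe_b)
                key = frozenset(frozenset(c) for c in pe)
                if key not in seen:
                    seen.add(key)
                    merged.append(pe)
        return merged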

4.3 A Property of Pe-relations in NISs

Before proposing another algorithm for producing pe-relations, we clarify a property of pe-relations. Because some pe-relations in distinct derived DISs may be the same, the number of distinct pe-relations is generally smaller than the number of derived DISs. Let us consider Table 4, which shows the relation between the numbers of derived DISs and distinct pe-relations. This result is computed by the tool programs in the subsequent sections. For ATR = {A, B, C}, there are 10368 (= 2^7 × 3^4) derived DISs; however, in reality there are only 10 distinct pe-relations. For a larger attribute set ATR, every object is discerned from more of the other objects, i.e., every [x]_ATR tends to become {x}. Therefore, every pe-relation tends to become the unique equivalence relation {{1}, {2}, ..., {10}}. For ATR = {A, B, C, D, E, F}, there exists in reality only 1 distinct pe-relation, namely {{1}, {2}, ..., {10}}.

Table 4. The numbers of derived DISs and distinct pe-relations in NIS_1

ATR             {A,B}   {A,B,C}   {A,B,C,D}   {A,B,C,D,E}   {A,B,C,D,E,F}
derived DISs    5184    10368     1119744     40310784      2176782336
pe-relations    107     10        6           2             1

We examined several NISs, and we experimentally conclude that the number of distinct possible equivalence relations is generally much smaller than the number of all derived DISs. We make use of this property for computing definitions N-i to N-v.

4.4 A Revised Algorithm for Producing Pe-relations

Algorithm 1 produces pe-relations as a side effect of the search, but it is not suitable for NISs with a large number of derived DISs. This section revises Algorithm 1 by means of applying Proposition 3.


Algorithm 2.
Input: A NIS and a set ATR ⊆ AT.
Output: The set of distinct pe-relations for ATR: pe_rel(ATR).
(1) Produce the set of pe-relations pe_rel({A}) for every A ∈ ATR.
(2) Set temp = {} and pe_rel(ATR) = {{{1, 2, 3, ..., |OB|}}}.
(3) Repeat (4) for pe_rel(ATR) and pe_rel({K}) (K ∈ ATR − temp) until temp = ATR.
(4) For each pair of pe_i ∈ pe_rel(ATR) and pe_j ∈ pe_rel({K}), apply Proposition 3 and produce pe_{i,j} = {M ⊆ OB | M = [x]_i ∩ [x]_j for [x]_i ∈ pe_i and [x]_j ∈ pe_j, x ∈ OB}. Let pe_rel(ATR) be {pe_{i,j} | pe_i ∈ pe_rel(ATR), pe_j ∈ pe_rel({K})}, and set temp = temp ∪ {K}.

In step (1), Algorithm 1 is applied to producing pe_rel({A}) for every A ∈ ATR. In steps (3) and (4), Proposition 3 is repeatedly applied to merging two sets of pe-relations. For NIS_1, let us consider the case of ATR = {A, B, C, D} in Algorithm 2. After finishing step (1), Table 5 is obtained. Table 5 shows the numbers of derived DISs and distinct pe-relations for every attribute.

Table 5. The numbers of derived DISs and distinct pe-relations in every attribute

Attribute       A     B    C    D     E    F
derived DISs    192   27   2    108   36   54
pe-relations    176   27   2    96    36   54

For ATR = {A, B, C, D}, Algorithm 2 sequentially produces pe_rel({A, B}), pe_rel({A, B, C}) and pe_rel({A, B, C, D}). For producing pe_rel({A, B}), it is necessary to handle 4752 (= 176 × 27) combinations of pe-relations, and it turns out that |pe_rel({A, B})| = 107 in Table 4. After this step, the number of combinations is reduced due to the property of pe-relations. For producing pe_rel({A, B, C}), it is enough to handle 214 (= 107 × 2) combinations for 10368 derived DISs, and |pe_rel({A, B, C})| = 10 in Table 4 is obtained. For producing pe_rel({A, B, C, D}), it is enough to handle 960 (= 10 × 96) combinations for 1119744 derived DISs. Generally, Algorithm 1 depends upon the number Π_{x∈OB,A∈ATR}|g(x,A)|, which is the number of derived DISs. However, Algorithm 2 at most depends upon the number (|ATR| − 1) × (Π_{x∈OB}|g(x,A)|)^2 for an attribute A such that Π_{x∈OB}|g(x,B)| ≤ Π_{x∈OB}|g(x,A)| holds for any B ∈ ATR. Roughly speaking, the product of the factors Π_{x∈OB}|g(x,A)| over all attributes in the bound for Algorithm 1 is replaced by the square of the largest such factor in the bound for Algorithm 2. Thus, in order to handle NISs with a large ATR and a large number of derived DISs, Algorithm 2 will be more efficient than Algorithm 1. In reality, the result in Table 4 was calculated by Algorithm 2. It is hard to apply Algorithm 1 to calculating pe-relations for ATR = {A, B, C, D, E} or ATR = {A, B, C, D, E, F}.


As for the implementation of Algorithm 2, the data structure for pe-relations and the program depending upon this structure follow the definitions in [23,25]. A real execution of a tool, which simulates Algorithm 2, is shown in Appendix 2.

4.5 Another Solution of the Definability of a Set

Algorithm 1 solves the definability of a set in NISs, and it is also possible to apply pe-relations to this problem. After obtaining the distinct pe-relations, we only have to check ∪_{x∈X}[x] = X for every pe-relation.

Example 3. Let us consider NIS_1. For ATR = {A, B, C, D, E}, there are two distinct pe-relations pe_1 = {{1}, {2}, ..., {10}} and pe_2 = {{1}, {2, 3}, {4}, ..., {10}}. The set {1, 2} is definable in pe_1, but it is not definable in pe_2. Therefore, the set {1, 2} is possibly definable in DIS^real. Since the set {1, 2, 3} is definable in every pe-relation, this set is certainly definable in DIS^real. In this way, the computational load, which would otherwise depend upon all derived DISs, is reduced.
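The check of Example 3 becomes a simple loop once the distinct pe-relations are available; a sketch, assuming pe_relations is the list of distinct pe-relations produced, for instance, by Algorithm 2 (each pe-relation again a list of classes):

    def definability_from_pe(pe_relations, X):
        """Definability of X checked against every distinct pe-relation."""
        X = set(X)
        verdicts = [set().union(*(c for c in pe if c <= X)) == X
                    for pe in pe_relations]
        if all(verdicts):
            return 'certainly definable'
        return 'possibly definable' if any(verdicts) else 'not definable'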

5 The Necessary and Sufficient Condition for Checking the Consistency of an Object

This section examines the necessary and suﬃcient condition for checking the consistency of an object. Definition 7. (N-ii. The Consistency of an Object) Let us consider two disjoint sets CON, DEC ⊆ AT in a N IS. We say x ∈ OB is certainly consistent (in the relation from CON to DEC) in DIS real , if x is consistent (in the relation from CON to DEC) in every derived DIS from N IS. We say x is possibly consistent in DIS real , if x is consistent in some derived DISs from N IS. According to pe-relations and Proposition 1, it is easy to check the consistency of x. Let us consider two sets of pe-relations pe rel(CON ) and pe rel(DEC). An object x is certainly consistent in DIS real , if and only if [x]CON ⊆ [x]DEC ([x]CON ∈ pei and [x]DEC ∈ pej ) for any pei ∈ pe rel(CON ) and any pej ∈ pe rel(DEC). An object x is possibly consistent in DIS real , if and only if [x]CON ⊆ [x]DEC ([x]CON ∈ pei and [x]DEC ∈ pej ) for some pei ∈ pe rel(CON ) and some pej ∈ pe rel(DEC). However, it is also possible to check the consistency of an object by means of applying inf and sup information in Deﬁnition 5. Theorem 4. For a N IS and an object x, let CON be condition attributes and let DEC be decision attributes. (1) x is certainly consistent in DIS real if and only if sup(x, ζ, CON ) ⊆ inf (x, η, DEC) holds for any ζ ∈ P T (x, CON ) and any η ∈ P T (x, DEC). (2) x is possibly consistent in DIS real if and only if inf (x, ζ, CON ) ⊆ sup(x, η, DEC) holds for a pair of ζ ∈ P T (x, CON ) and η ∈ P T (x, DEC).


Proof. Let us consider the pe-classes [x]_CON and [x]_DEC in ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). Then, inf(x, ζ, CON) ⊆ [x]_CON ⊆ sup(x, ζ, CON) and inf(x, η, DEC) ⊆ [x]_DEC ⊆ sup(x, η, DEC) hold according to Proposition 2.
(1) Let us suppose sup(x, ζ, CON) ⊆ inf(x, η, DEC) holds. Then, [x]_CON ⊆ sup(x, ζ, CON) ⊆ inf(x, η, DEC) ⊆ [x]_DEC, and [x]_CON ⊆ [x]_DEC is derived. According to Proposition 1, object x is consistent in any ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). This holds for any ζ ∈ PT(x, CON) and any η ∈ PT(x, DEC). Thus, x is certainly consistent in DIS^real. Conversely, let us suppose sup(x, ζ, CON) ⊈ inf(x, η, DEC) holds for a pair of ζ and η. According to Proposition 2, [x]_CON = sup(x, ζ, CON) and [x]_DEC = inf(x, η, DEC) hold in some ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). Since [x]_CON ⊈ [x]_DEC holds in this ϕ, x is not certainly consistent. By contraposition, the converse is also proved.
(2) Let us suppose inf(x, ζ, CON) ⊆ sup(x, η, DEC) holds for a pair of ζ ∈ PT(x, CON) and η ∈ PT(x, DEC). According to Proposition 2, [x]_CON = inf(x, ζ, CON) and [x]_DEC = sup(x, η, DEC) hold in some ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). Namely, x is consistent in this ϕ. Conversely, let us suppose inf(x, ζ, CON) ⊈ sup(x, η, DEC) holds for every pair. Since inf(x, ζ, CON) ⊆ [x]_CON and [x]_DEC ⊆ sup(x, η, DEC) hold for any [x]_CON and [x]_DEC, [x]_CON ⊈ [x]_DEC is derived. Namely, x is not possibly consistent. By contraposition, the converse is also proved.

Theorem 4, which is one of the most important results in this paper, is an extension of Proposition 1 and of results in [20,21]. In Proposition 1, [x]_CON and [x]_DEC are unique. However, in NISs these pe-classes may not be unique, and in order to check the consistency of objects it is necessary to consider possible tuples and derived DISs. Algorithms 1 and 2 produce pe-relations according to the inf and sup information of Definition 5. Theorem 4 also characterizes the consistency of an object by means of inf and sup information; therefore the inf and sup information of Definition 5 is the most essential information.
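Theorem 4 gives an immediate test on top of the illustrative helpers possible_tuples, inf_set and sup_set sketched in Section 3.4; again this is only a reading of the theorem, not the authors' implementation.

    def certainly_consistent(nis, con, dec, x):
        """Theorem 4 (1): sup(x, zeta, CON) ⊆ inf(x, eta, DEC) for every
        pair of possible tuples zeta, eta."""
        return all(sup_set(nis, con, x, z) <= inf_set(nis, dec, x, e)
                   for z in possible_tuples(nis, con, x)
                   for e in possible_tuples(nis, dec, x))

    def possibly_consistent(nis, con, dec, x):
        """Theorem 4 (2): inf(x, zeta, CON) ⊆ sup(x, eta, DEC) for some
        pair of possible tuples zeta, eta."""
        return any(inf_set(nis, con, x, z) <= sup_set(nis, dec, x, e)
                   for z in possible_tuples(nis, con, x)
                   for e in possible_tuples(nis, dec, x))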

6 An Algorithm and Tool Programs for Data Dependency in NISs

The formal definition of data dependency in NISs has not been established yet. This section extends definition D-iii to N-iii below, and examines an algorithm and tool programs for data dependency in NISs.

Definition 8 [26] (N-iii. Data Dependencies among Attributes). Let us consider any NIS, condition attributes CON, decision attributes DEC and all derived DIS_1, ..., DIS_m from the NIS. For two threshold values val_1 and val_2 (0 ≤ val_1, val_2 ≤ 1), if conditions (1) and (2) hold, then we say DEC depends on CON in the NIS.
(1) |{DIS_i | deg(CON, DEC) = 1 in DIS_i (1 ≤ i ≤ m)}| / m ≥ val_1.
(2) min_i {deg(CON, DEC) in DIS_i} ≥ val_2.


In Definition 8, condition (1) requires that most derived DISs are consistent, i.e., that every object is consistent in most derived DISs. Condition (2) specifies the minimal value of the degree of dependency. If both conditions are satisfied, it is expected that deg(CON, DEC) in DIS^real will also be high. Definition N-iii is easily computed from pe_rel(CON) and pe_rel(DEC). For each pair of pe_i ∈ pe_rel(CON) and pe_j ∈ pe_rel(DEC), the degree of dependency is |{x ∈ OB | [x]_CON ⊆ [x]_DEC for [x]_CON ∈ pe_i, [x]_DEC ∈ pe_j}| / |OB|. Namely, all possible degrees of dependency are obtained by means of calculating all combinations of pairs. For example, let us consider CON = {A, B, C, D, E} and DEC = {F} in NIS_1. Since |pe_rel({A, B, C, D, E})| = 2 and |pe_rel({F})| = 54, it is possible to obtain all degrees by means of examining 108 (= 2 × 54) combinations. This calculation is equivalent to the calculation depending upon the 2176782336 derived DISs. A real execution handling data dependency and the consistency of objects is shown in Appendix 3.
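A sketch of this combination of pairs, with each pe-relation again represented as a list of classes. Note that it only returns the range of degrees needed for condition (2) of Definition 8; the ratio in condition (1) additionally needs the number of derived DISs behind each pe-relation, which the real tools keep track of and which is omitted here.

    def class_containing(pe, x):
        return next(c for c in pe if x in c)

    def degree(objects, pe_con, pe_dec):
        """Degree of dependency for one pair of pe-relations."""
        consistent = sum(class_containing(pe_con, x) <= class_containing(pe_dec, x)
                         for x in objects)
        return consistent / len(objects)

    def dependency_range(objects, pe_rel_con, pe_rel_dec):
        """Minimum and maximum degree over all pairs of pe-relations."""
        degrees = [degree(objects, pc, pd)
                   for pc in pe_rel_con for pd in pe_rel_dec]
        return min(degrees), max(degrees)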

7 An Algorithm and Tool Programs for Rules in NISs

This section investigates an algorithm and tool programs [27] for rules in NISs.

7.1 Certain Rules and Possible Rules in NISs

Possible implications in N ISs are proposed, and certain rules and possible rules are deﬁned by possible implications satisfying some constraints. Definition 9. For any N IS, let CON be condition attributes and let DEC be decision attributes. For any x ∈ OB, let P I(x, CON, DEC) denote a set {[CON, ζ] ⇒ [DEC, η]|ζ ∈ P T (x, CON ), η ∈ P T (x, DEC)}. We call an element of P I(x, CON, DEC) a possible implication (in the relation from CON to DEC) from x. We call a possible implication, which satisﬁes some constraints, a rule in N IS. It is necessary to remark that a possible implication τ : [CON, ζ] ⇒ [DEC, η] from x appears in every ϕ ∈ DD(x, (ζ, η), CON ∪ DEC). This set DD(x, (ζ, η), CON ∪ DEC) is a subset of all derived DISs for AT R=CON ∪ DEC. In N IS1 , P T (1, {A, B})={(3, 1), (3, 3), (3, 4)}, P T (1, {C})={(3)} and P I(1, {A, B}, {C}) consists of three possible implications [A, 3] ∧ [B, 1] ⇒ [C, 3], [A, 3] ∧ [B, 3] ⇒ [C, 3] and [A, 3] ∧ [B, 4] ⇒ [C, 3]. The ﬁrst possible implication appears in every ϕ ∈ DD(1, (3, 1, 3), {A, B, C}). This set DD(1, (3, 1, 3), {A, B, C}) consists of 1/3 of derived DISs for {A, B, C}. Definition 10. Let us consider a N IS, condition attributes CON and decision attributes DEC. If P I(x, CON, DEC) is a singleton set {τ } (τ : [CON, ζ] ⇒ [DEC, η]), we say τ (from x) is def inite. Otherwise we say τ (from x) is indef inite. If a set {ϕ ∈ DD(x, (ζ, η), CON ∪ DEC)| x is consistent in ϕ} is equal to DD(x, (ζ, η), CON ∪ DEC), we say τ is globally consistent (GC). If this set is equal to ∅, we say τ is globally inconsistent (GI). Otherwise we say τ is marginal (M A). According to two cases, i.e., ‘D(ef inite) or I(ndef inite)’ and ‘GC or M A or GI’, we deﬁne six classes, D-GC, D-M A, D-GI, I-GC, I-M A, I-GI, for possible implications.


If a possible implication from x belongs to D-GC, I-GC, D-MA or I-MA, then x is consistent in some derived DISs. If a possible implication from x belongs to D-GC, then x is consistent in every derived DIS. Thus, we give Definition 11 below.

Definition 11 (N-iv. Certain and Possible Rules). For a NIS, let CON be condition attributes and let DEC be decision attributes. We say τ ∈ PI(x, CON, DEC) is a possible rule in DIS^real if τ belongs to the D-GC, I-GC, D-MA or I-MA class. In particular, we say τ is a certain rule in DIS^real if τ belongs to the D-GC class.

Theorem 5 below characterizes certain and possible rules according to inf and sup information. Theorem 5 is also related to results in [20,21], but there exist some differences, which we have shown in Example 2.

Theorem 5 [27]. For a NIS, let CON be condition attributes and let DEC be decision attributes. For τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC), the following holds.
(1) τ is a possible rule if and only if inf(x, ζ, CON) ⊆ sup(x, η, DEC) holds.
(2) τ is a certain rule if and only if PI(x, CON, DEC) = {τ} and sup(x, ζ, CON) ⊆ inf(x, η, DEC) hold.

Proposition 6. For any NIS, let ATR ⊆ AT be {A_1, ..., A_n}, and let a possible tuple ζ ∈ PT(x, ATR) be (ζ_1, ..., ζ_n). Then, the following holds.
(1) inf(x, ζ, ATR) = ∩_i inf(x, (ζ_i), {A_i}).
(2) sup(x, ζ, ATR) = ∩_i sup(x, (ζ_i), {A_i}).
Proof of (1): For any y ∈ inf(x, ζ, ATR), PT(y, ATR) = {(ζ_1, ..., ζ_n)} holds by the definition of inf. Namely, PT(y, {A_i}) = {(ζ_i)} holds for every i, and y ∈ inf(x, (ζ_i), {A_i}) for every i. Namely, y ∈ ∩_i inf(x, (ζ_i), {A_i}). The converse clearly holds.

Proposition 6 shows us a way to manage the inf and sup information of Definition 5. Namely, we first prepare inf and sup information for every x ∈ OB, A_i ∈ AT and (ζ_{i,j}) ∈ PT(x, {A_i}). Then, we produce inf and sup information for larger attribute sets by repeating the set intersection operation. For the obtained inf(x, ζ, CON), sup(x, ζ, CON), inf(x, η, DEC) and sup(x, η, DEC), Theorem 5 is applied to checking the certainty or the possibility of τ: [CON, ζ] ⇒ [DEC, η].
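A sketch of this procedure: inf and sup for a whole tuple are assembled from the per-attribute sets via Proposition 6, and Theorem 5 is then applied. The helpers possible_tuples, inf_set and sup_set are those of the earlier illustration; the new names here are likewise assumptions, and CON and DEC are assumed non-empty.

    def inf_tuple(nis, attrs, x, zeta):
        """Proposition 6 (1): intersect the per-attribute inf sets."""
        return set.intersection(*[inf_set(nis, (a,), x, (v,))
                                  for a, v in zip(attrs, zeta)])

    def sup_tuple(nis, attrs, x, zeta):
        """Proposition 6 (2): intersect the per-attribute sup sets."""
        return set.intersection(*[sup_set(nis, (a,), x, (v,))
                                  for a, v in zip(attrs, zeta)])

    def is_possible_rule(nis, con, dec, x, zeta, eta):
        """Theorem 5 (1)."""
        return inf_tuple(nis, con, x, zeta) <= sup_tuple(nis, dec, x, eta)

    def is_certain_rule(nis, con, dec, x, zeta, eta):
        """Theorem 5 (2): the implication must also be definite."""
        definite = (len(possible_tuples(nis, con, x)) == 1
                    and len(possible_tuples(nis, dec, x)) == 1)
        return definite and (sup_tuple(nis, con, x, zeta)
                             <= inf_tuple(nis, dec, x, eta))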

7.2 The Minimum and Maximum of Three Criterion Values

This section proposes the minimum and maximum of the three criterion values for possible implications, and investigates an algorithm to calculate them.

Definition 12. For a NIS, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC) and DD(x, (ζ, η), CON ∪ DEC). Let minsup(τ) denote min_{ϕ∈DD(x,(ζ,η),CON∪DEC)}{support(τ) in ϕ}, and let maxsup(τ) denote max_{ϕ∈DD(x,(ζ,η),CON∪DEC)}{support(τ) in ϕ}. As for accuracy and coverage, minacc(τ), maxacc(τ), mincov(τ) and maxcov(τ) are similarly defined.

Let us suppose DIS^real ∈ DD(x, (ζ, η), CON ∪ DEC). According to Definition 12, clearly minsup(τ) ≤ support(τ) in DIS^real ≤ maxsup(τ), minacc(τ) ≤


accuracy(τ) in DIS^real ≤ maxacc(τ) and mincov(τ) ≤ coverage(τ) in DIS^real ≤ maxcov(τ) hold. For calculating each definition directly, it is necessary to examine every ϕ ∈ DD(x, (ζ, η), CON ∪ DEC), and this calculation depends upon |DD(x, (ζ, η), CON ∪ DEC)|. However, these minimum and maximum values can also be calculated by means of applying inf and sup information, again.

Theorem 7. For a NIS, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC). The following holds.
(1) minsup(τ) = |inf(x, ζ, CON) ∩ inf(x, η, DEC)| / |OB|.
(2) maxsup(τ) = |sup(x, ζ, CON) ∩ sup(x, η, DEC)| / |OB|.

Theorem 8. For a NIS, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC). Let INACC denote the set [sup(x, ζ, CON) − inf(x, ζ, CON)] ∩ sup(x, η, DEC), and let OUTACC denote the set [sup(x, ζ, CON) − inf(x, ζ, CON)] − inf(x, η, DEC). Then, the following holds.
(1) minacc(τ) = |inf(x, ζ, CON) ∩ inf(x, η, DEC)| / (|inf(x, ζ, CON)| + |OUTACC|).
(2) maxacc(τ) = (|inf(x, ζ, CON) ∩ sup(x, η, DEC)| + |INACC|) / (|inf(x, ζ, CON)| + |INACC|).

Proof of (1). According to Proposition 2, inf(x, ζ, CON) ⊆ [x]_CON ⊆ sup(x, ζ, CON) holds. Therefore, the denominator is of the form |inf(x, ζ, CON)| + |K1| (K1 ⊆ [sup(x, ζ, CON) − inf(x, ζ, CON)]). Since PI(y, CON, DEC) = {τ} for any y ∈ inf(x, ζ, CON) ∩ inf(x, η, DEC), the numerator is of the form |inf(x, ζ, CON) ∩ inf(x, η, DEC)| + |K2| + |K3| (K2 ⊆ inf(x, ζ, CON) ∩ [sup(x, η, DEC) − inf(x, η, DEC)] and K3 ⊆ K1). Thus, accuracy(τ) is of the form (|inf(x, ζ, CON) ∩ inf(x, η, DEC)| + |K2| + |K3|) / (|inf(x, ζ, CON)| + |K1|). In order to obtain minacc(τ), we exhibit a ϕ_1 ∈ DD(x, (ζ, η), CON ∪ DEC) in which K2 = K3 = ∅ and |K1| is maximal. This is justified by the inequality b/(a + (k1 − k3)) ≤ (b + k2 + k3)/(a + k1) for any 0 ≤ b ≤ a (a ≠ 0), any 0 ≤ k3 ≤ k1 and any 0 ≤ k2. Since sup(x, ζ, CON) − inf(x, ζ, CON) is equal to the union of the disjoint sets ([sup(x, ζ, CON) − inf(x, ζ, CON)] − inf(x, η, DEC)) and ([sup(x, ζ, CON) − inf(x, ζ, CON)] ∩ inf(x, η, DEC)), let us consider these two disjoint sets. The first set is OUTACC. For any y ∈ OUTACC, there exists a possible implication τ*: [CON, ζ] ⇒ [DEC, η*] ∈ PI(y, CON, DEC) (η* ≠ η) by the definition of inf and sup. For any y ∈ [sup(x, ζ, CON) − inf(x, ζ, CON)] ∩ inf(x, η, DEC), PT(y, DEC) = {η} holds, and there exists a possible implication τ**: [CON, ζ*] ⇒ [DEC, η] ∈ PI(y, CON, DEC) (ζ* ≠ ζ). In the ϕ_1 ∈ DD(x, (ζ, η), CON ∪ DEC) with these τ* and τ**, the denominator is |inf(x, ζ, CON)| + |OUTACC| and the numerator is |inf(x, ζ, CON) ∩ inf(x, η, DEC)|.

Theorem 9. For a NIS, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC). Let INCOV denote the set [sup(x, η, DEC) − inf(x, η, DEC)] ∩ sup(x, ζ, CON), and let OUTCOV denote the set [sup(x, η, DEC) − inf(x, η, DEC)] − inf(x, ζ, CON). Then, the following holds.
(1) mincov(τ) = |inf(x, ζ, CON) ∩ inf(x, η, DEC)| / (|inf(x, η, DEC)| + |OUTCOV|).
(2) maxcov(τ) = (|sup(x, ζ, CON) ∩ inf(x, η, DEC)| + |INCOV|) / (|inf(x, η, DEC)| + |INCOV|).
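Theorems 7–9 reduce the six bounds to set cardinalities; a sketch on top of the inf_tuple/sup_tuple helpers from the Section 7.1 illustration (x always belongs to inf(x, ζ, CON) and inf(x, η, DEC), so no denominator is zero).

    def criterion_bounds(nis, con, dec, x, zeta, eta):
        n = len(nis)
        inf_c, sup_c = inf_tuple(nis, con, x, zeta), sup_tuple(nis, con, x, zeta)
        inf_d, sup_d = inf_tuple(nis, dec, x, eta), sup_tuple(nis, dec, x, eta)
        inacc = (sup_c - inf_c) & sup_d          # Theorem 8
        outacc = (sup_c - inf_c) - inf_d
        incov = (sup_d - inf_d) & sup_c          # Theorem 9
        outcov = (sup_d - inf_d) - inf_c
        return {
            'minsup': len(inf_c & inf_d) / n,
            'maxsup': len(sup_c & sup_d) / n,
            'minacc': len(inf_c & inf_d) / (len(inf_c) + len(outacc)),
            'maxacc': (len(inf_c & sup_d) + len(inacc)) / (len(inf_c) + len(inacc)),
            'mincov': len(inf_c & inf_d) / (len(inf_d) + len(outcov)),
            'maxcov': (len(sup_c & inf_d) + len(incov)) / (len(inf_d) + len(incov)),
        }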

8 An Algorithm for Reduction of Attributes in NISs

This section gives an algorithm for reducing the condition attributes in rules.

Definition 13 (N-v. Reduction of Condition Attributes in Rules). Let us consider a certain rule τ: [CON, ζ] ⇒ [DEC, η] ∈ PI(x, CON, DEC). We say K ∈ CON is certainly dispensable from τ in DIS^real if τ': [CON − {K}, ζ'] ⇒ [DEC, η] is a certain rule. We say K ∈ CON is possibly dispensable from τ in DIS^real if τ': [CON − {K}, ζ'] ⇒ [DEC, η] is a possible rule.

Let us consider the possible implication τ: [A,3] ∧ [C,3] ∧ [D,2] ∧ [E,5] ⇒ [F,5] in NIS_1. This τ is definite, and τ belongs to the D-GC class, i.e., τ is a certain rule. For ATR = {A, C, E}, inf(1, (3,3,5), {A,C,E}) = inf(1, (3), {A}) ∩ inf(1, (3), {C}) ∩ inf(1, (5), {E}) = {1} ∩ {1,5,9} ∩ {1,6} = {1} and sup(1, (3,3,5), {A,C,E}) = sup(1, (3), {A}) ∩ sup(1, (3), {C}) ∩ sup(1, (5), {E}) = {1,5,6,10} ∩ {1,5,9} ∩ {1,3,4,6,7} = {1} hold according to Proposition 6. For ATR = {F}, inf(1, (5), {F}) = {1,3,4} and sup(1, (5), {F}) = {1,3,4,5,8}. Because sup(1, (3,3,5), {A,C,E}) = {1} ⊆ {1,3,4} = inf(1, (5), {F}) holds, τ': [A,3] ∧ [C,3] ∧ [E,5] ⇒ [F,5] is also a certain rule by Theorem 5. Thus, the attribute D is certainly dispensable from τ. In this way, it is possible to examine the reduction of attributes. In this case too, the inf and sup information of Definition 5 is essential.

An important problem on reduction in DISs is to find minimal sets of condition attributes. Several works deal with reduction for finding minimal reducts. In [3], this problem is proved to be NP-hard, which means that computing reducts is a non-trivial task. For solving this problem, a discernibility function is proposed in [3], and this function is extended to a discernibility function in incomplete information systems [21,22]. In [19], an algorithm for finding a minimal complex is presented. In NISs, it is also important to deal with this problem of minimal reducts.

Definition 14. For any NIS and any disjoint CON, DEC ⊆ AT, let us consider a possible implication τ: [CON, ζ] ⇒ [DEC, η] which belongs to the D-GC, I-GC, D-MA or I-MA class. Furthermore, let Φ be the set {ϕ ∈ DD(x, (ζ, η), CON ∪ DEC) | x is consistent in ϕ}. If there is no proper subset CON* ⊂ CON such that {ϕ ∈ DD(x, (ζ, η), CON ∪ DEC) | x is consistent (in the relation from CON* to DEC) in ϕ} is equal to the set Φ, we say τ is minimal (in this class).

Problem 1. For any NIS, let DEC be decision attributes and let η be a tuple of decision attribute values for DEC. Then, find all minimal certain and all minimal possible rules of the form [CON, ζ] ⇒ [DEC, η]. As additional information, calculate the minimum and maximum values of support, accuracy and coverage for every rule, too.

For solving Problem 1, we introduced a total order, defined by the significance of attributes, over (AT − DEC), and we consider rules based on this order. Under this assumption, we have realized a tool for solving Problem 1, which is shown in Appendix 4. For example, let us suppose {A, B, C, D, E}


be an ordered set, and let [A, ζ_A] ∧ [B, ζ_B] ∧ [C, ζ_C] ∧ [D, ζ_D] ⇒ [F, η_F] and [B, ζ_B] ∧ [E, ζ_E] ⇒ [F, η_F] be certain rules. The latter seems simpler, but we choose the former rule according to the order of significance. In this case, each attribute A_i ∈ (AT − DEC) is picked up sequentially based on this order, and the necessity of the descriptor [A_i, ζ_{i,j}] is checked. Then, Proposition 6 and Theorem 5 are applied. Of course, the introduction of a total order over attributes is too strong a simplification of the problem. Therefore, as a next step, it is necessary to solve the problem of reduction in NISs without using any total order.
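A sketch of the dispensability test of Definition 13, reusing is_certain_rule and is_possible_rule from the Section 7.1 illustration: drop one condition attribute (and the corresponding component of ζ) and re-apply Theorem 5. It assumes CON − {K} remains non-empty.

    def _drop(con, zeta, attr):
        reduced = [(a, v) for a, v in zip(con, zeta) if a != attr]
        return tuple(a for a, _ in reduced), tuple(v for _, v in reduced)

    def certainly_dispensable(nis, con, dec, x, zeta, eta, attr):
        rcon, rzeta = _drop(con, zeta, attr)
        return is_certain_rule(nis, rcon, dec, x, rzeta, eta)

    def possibly_dispensable(nis, con, dec, x, zeta, eta, attr):
        rcon, rzeta = _drop(con, zeta, attr)
        return is_possible_rule(nis, rcon, dec, x, rzeta, eta)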

9 Concluding Remarks

A framework of RNIA (rough non-deterministic information analysis) has been proposed, and an overview of algorithms has been presented. Throughout this paper, rough sets based concepts in NISs and the application of either inf and sup information or equivalence relations have been studied. In particular, inf and sup in Definition 5 are the key information for RNIA. This paper also presented some tool programs for RNIA. The authors are grateful to Professor J.W. Grzymala-Busse and the anonymous referees.

References

1. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Dordrecht (1991)
2. Pawlak, Z.: New Look on Bayes' Theorem - The Rough Set Outlook. Bulletin of Int'l. Rough Set Society 5 (2001) 1–8
3. Komorowski, J., Pawlak, Z., Polkowski, L., Skowron, A.: Rough Sets: A Tutorial. Rough Fuzzy Hybridization. Springer (1999) 3–98
4. Nakamura, A., Tsumoto, S., Tanaka, H., Kobayashi, S.: Rough Set Theory and Its Applications. Journal of Japanese Society for AI 11 (1996) 209–215
5. Polkowski, L., Skowron, A. (eds.): Rough Sets in Knowledge Discovery 1. Studies in Fuzziness and Soft Computing, Vol. 18. Physica-Verlag (1998)
6. Polkowski, L., Skowron, A. (eds.): Rough Sets in Knowledge Discovery 2. Studies in Fuzziness and Soft Computing, Vol. 19. Physica-Verlag (1998)
7. Grzymala-Busse, J.: A New Version of the Rule Induction System LERS. Fundamenta Informaticae 31 (1997) 27–39
8. Ziarko, W.: Variable Precision Rough Set Model. Journal of Computer and System Sciences 46 (1993) 39–59
9. Tsumoto, S.: Knowledge Discovery in Clinical Databases and Evaluation of Discovered Knowledge in Outpatient Clinic. Information Sciences 124 (2000) 125–137
10. Zhong, N., Dong, J., Fujitsu, S., Ohsuga, S.: Soft Techniques to Rule Discovery in Data. Transactions of Information Processing Society of Japan 39 (1998) 2581–2592
11. Rough Set Software. Bulletin of Int'l. Rough Set Society 2 (1998) 15–46
12. Codd, E.: A Relational Model of Data for Large Shared Data Banks. Communications of the ACM 13 (1970) 377–387


13. Orlowska, E., Pawlak, Z.: Representation of Nondeterministic Information. Theoretical Computer Science 29 (1984) 27–39
14. Orlowska, E.: What You Always Wanted to Know about Rough Sets. Incomplete Information: Rough Set Analysis. Studies in Fuzziness and Soft Computing, Vol. 13. Physica-Verlag (1998) 1–20
15. Lipski, W.: On Semantic Issues Connected with Incomplete Information Databases. ACM Transactions on Database Systems 4 (1979) 262–296
16. Lipski, W.: On Databases with Incomplete Information. Journal of the ACM 28 (1981) 41–70
17. Nakamura, A.: A Rough Logic based on Incomplete Information and Its Application. Int'l. Journal of Approximate Reasoning 15 (1996) 367–378
18. Grzymala-Busse, J.: On the Unknown Attribute Values in Learning from Examples. Lecture Notes in AI, Vol. 542. Springer-Verlag (1991) 368–377
19. Grzymala-Busse, J., Werbrouck, P.: On the Best Search Method in the LEM1 and LEM2 Algorithms. Incomplete Information: Rough Set Analysis. Studies in Fuzziness and Soft Computing, Vol. 13. Physica-Verlag (1998) 75–91
20. Kryszkiewicz, M.: Properties of Incomplete Information Systems in the Framework of Rough Sets. Rough Sets in Knowledge Discovery 1. Studies in Fuzziness and Soft Computing, Vol. 18. Physica-Verlag (1998) 442–450
21. Kryszkiewicz, M.: Rough Set Approach to Incomplete Information Systems. Information Sciences 112 (1998) 39–49
22. Kryszkiewicz, M.: Rules in Incomplete Information Systems. Information Sciences 113 (1999) 271–292
23. Sakai, H.: Effective Procedures for Data Dependencies in Information Systems. Rough Set Theory and Granular Computing. Studies in Fuzziness and Soft Computing, Vol. 125. Springer (2003) 167–176
24. Sakai, H., Okuma, A.: An Algorithm for Finding Equivalence Relations from Tables with Non-deterministic Information. Lecture Notes in AI, Vol. 1711. Springer-Verlag (1999) 64–72
25. Sakai, H.: Effective Procedures for Handling Possible Equivalence Relations in Non-deterministic Information Systems. Fundamenta Informaticae 48 (2001) 343–362
26. Sakai, H., Okuma, A.: An Algorithm for Checking Dependencies of Attributes in a Table with Non-deterministic Information: A Rough Sets based Approach. Lecture Notes in AI, Vol. 1886. Springer-Verlag (2000) 219–229
27. Sakai, H.: A Framework of Rough Sets based Rule Generation in Non-deterministic Information Systems. Lecture Notes in AI, Vol. 2871. Springer-Verlag (2003) 143–151

Appendixes

Throughout the appendixes, every input to the Unix system and the programs is underlined, and every attribute is identified with its ordinal number. For example, attributes A and C are identified with 1 and 3, respectively. These tool programs are implemented on a workstation with a 450 MHz UltraSPARC CPU.


Appendix 1. % more nis1.pl · · · (A1-1) object(10,6). data(1,[3,[1,3,4],3,2,5,5]). data(2,[[2,4],2,2,[3,4],[1,3,4],4]). data(3,[[1,2],[2,4,5],2,3,[4,5],5]). data(4,[[1,5],5,[2,4],2,[1,4,5],5]). data(5,[[3,4],4,3,[1,2,3],1,[2,5]]). data(6,[[3,5],4,1,[2,3,5],5,[2,3,4]]). data(7,[[1,5],4,5,[1,4],[3,5],1]). data(8,[4,[2,4,5],2,[1,2,3],2,[1,2,5]]). data(9,[2,5,3,5,4,2]). data(10,[[2,3,5],1,2,3,1,[1,2,3]]). % more attrib1.pl · · · (A1-2) condition([1,2,3]). decision([6]). % prolog · · · (A1-3) K-Prolog Compiler version 4.11 (C). ?-consult(define.pl). yes ?-translate1. · · · (A1-4) Data File Name: ’nis1.pl’. Attribute File Name: ’attrib1.pl’. EXEC TIME=0.073(sec) yes ?-class(con,[4,5,6]). · · · (A1-5) [1] Pe-classes: [4],[5],[6] Positive Selection Tuple from 4: [1,5,2] * Tuple from 5: [3,4,3] * Tuple from 6: [3,4,1] * Negative Selection Tuple from 1: [3,4,3] * Tuple from 3: [1,5,2] * [2] Pe-classes: [4],[5],[6] : : : [16] Pe-classes: [4],[5],[6] Positive Selection Tuple from 4: [5,5,4] * Tuple from 5: [4,4,3] * Tuple from 6: [5,4,1] * Negative Selection Certainly Definable EXEC TIME=0.058(sec) yes


In (A1-1), data N IS1 is displayed. In (A1-2), condition attributes {A, B, C} and decision attributes {F } are displayed. In (A1-3), prolog interpreter is invoked. In (A1-4), inf and sup information is produced according to attribute ﬁle. In (A1-5), the deﬁnability of a set {4, 5, 6} for AT R={A, B, C} is examined. In the ﬁrst response, tuples from 4, 5 and 6 are ﬁxed to (1,5,2), (3,4,3) and (3,4,1). At the same time, tuples (3,4,3) from object 1 and (1,5,2) from object 3 are implicitly rejected. There are 16 responses, and a set {4, 5, 6} is proved to be certainly deﬁnable. Appendix 2. ?-translate2. · · · (A2-1) Data File Name: ’nis1.pl’. EXEC TIME=0.189(sec) yes ?-pe. · · · (A2-2) [1] Derived DISs: 192 Distinct Pe-relations: 176 [2] Derived DISs: 27 Distinct Pe-relations: 27 : : : [6] Derived DISs: 54 Distinct Pe-relations: 54 EXEC TIME=1.413(sec) yes % more 3.rs · · · (A2-3) object(10). attrib(3). cond(1,3,1,3). pos(1,3,1). cond(2,3,1,2). pos(2,3,1). : : : inf([7,3,1],[7,3,1],[[7],[1]]). sup([7,3,1],[7,3,1],[[7],[1]]). % more 3.pe · · · (A2-4) 10 1 3 2 2 1 2 2 2 1 6 7 2 1 2 5 3 4 8 9 0 0 10 0 0 1 1 2 2 4 1 6 7 2 1 2 5 3 8 0 9 0 0 10 0 0 1 % merge · · · (A2-5) EXEC TIME=0.580(sec) % more 12345.pe · · · (A2-6) 10 5 1 2 3 4 5 40310784 2 1 2 3 4 5 6 7 8 9 10 0 0 0 0 0 0 0 0 0 0 40030848 1 2 2 4 5 6 7 8 9 10 0 3 0 0 0 0 0 0 0 0 279936 In (A2-1), inf and sup information is produced for each attribute. In (A2-2), the deﬁnability of a set OB is examined for each attribute. As a side eﬀect, every pe-relation is obtained. In (A2-3), inf and sup information for the attribute C


Table 6. Definitions of NISs

NIS     |OB|   |AT|   Derived DISs
NIS_2   30     5      7558272 (= 2^7 × 3^10)
NIS_3   50     5      120932352 (= 2^11 × 3^10)
NIS_4   100    5      1451188224 (= 2^13 × 3^11)

is displayed, and the contents of pe rel({C}) are displayed in (A2-4). In (A2-5), program merge is invoked for merging ﬁve pe-relations pe rel({A}), pe rel({B}), · · ·, pe rel({E}). The produced pe rel({A, B, C, D, E}) are displayed in (A2-6). Here, we show each execution time for the following N ISs, which are automatically produced by means of applying a random number program. Appendix 3. % depend · · · (A3-1) File Name for Condition: 1.pe File Name for Decision: 6.pe CRITERION 1 Derived DISs: 10368 Derived Consistent DISs: 0 Degree of Consistent DISs: 0.000 CRITERION 2 Minimum Degree of Dependency: 0.000 Maximum Degree of Dependency: 0.600 EXEC TIME=0.030(sec) % depratio · · · (A3-2) File Name for Condition: 12345.pe File Name for Decision: 6.pe CRITERION 1 Derived DISs: 2176782336 Derived Consistent DISs: 2161665792 Degree of Consistent DISs: 0.993 CRITERION 2 Minimum Degree of Dependency: 0.800 Maximum Degree of Dependency: 1.000 Consistency Ratio Object 1: 1.000(=2176782336/2176782336) Object 2: 0.993(=2161665792/2176782336) Object 3: 0.993(=2161665792/2176782336) : : : Object 10: 1.000(=2176782336/2176782336) EXEC TIME=0.020(sec) In (A3-1), the dependency from {A} to {F } is examined. Here, two pe-relations pe rel({A}) and pe rel({F }) are applied. There is no consistent derived DIS. Furthermore, the maximum value of the dependency is 0.6. Therefore, it will be diﬃcult to recognize the dependency from {A} to {F }. In (A3-2), the depen-


Table 7. Each execution time (sec) of translate2, pe and merge for {A, B, C}. N1 denotes the number of derived DISs, and N2 denotes the number of distinct pe-relations

NIS     translate2   pe       merge   N1      N2
NIS_2   0.308        1.415    0.690   5832    120
NIS_3   0.548        8.157    0.110   5184    2
NIS_4   1.032        16.950   2.270   20736   8

Table 8. Each execution time (sec) of depend and depratio from {A, B, C} to {E}. N3 denotes the number of derived DISs for {A, B, C, E}, and N4 denotes the number of combined pairs pe_i ∈ pe_rel({A, B, C}) and pe_j ∈ pe_rel({E})

NIS     depend   depratio   N3        N4
NIS_2   0.020    0.080      104976    2160
NIS_3   0.010    0.060      279936    108
NIS_4   0.070    0.130      4478976   1728

dency from {A, B, C, D, E} to {F } is examined. This time, it will be possible to recognize the dependency from {A, B, C, D, E} to {F }. Appendix 4. % more attrib2.pl · · · (A4-1) decision([6]). decval([5]). order([1,2,3,4,5]). ?-translate3. · · · (A4-2) Data File Name: ’nis1.pl’. Attribute File Name: ’attrib2.pl’. EXEC TIME=0.066(sec) yes ?-certain. · · · (A4-3) DECLIST: [A Certain Rule from Object 1] [1,3]&[3,3]&[5,5]=>[6,5] [746496/746496,DGC] [(0.1,0.1),(1.0,1.0),(0.2,0.333)] [A Certain Rule from Object 3] [A Certain Rule from Object 4] EXEC TIME=0.026(sec) yes ?-possible. · · · (A4-4) DECLIST: [Possible Rules from Object 1] === One Attribute === [1,3]=>[6,5] [10368/10368,DMA] [(0.1,0.2),(0.25,1.0),(0.2,0.5)]


[2,3]=>[6,5] [486/1458,IGC] [(0.1,0.1),(1.0,1.0),(0.2,0.333)] [4,2]=>[6,5] [5832/5832,DMA] [(0.2,0.4),(0.4,1.0),(0.4,0.8)] [Possible Rules from Object 3] === One Attribute === [1,1]=>[6,5] [5184/10368,IMA] : : : [Possible Rules from Object 8] === One Attribute === [1,4]=>[6,5] [3456/10368,IMA] : : : [(0.3,0.4),(0.6,1.0),(0.6,0.8)] [5,2]=>[6,5] [648/1944,IGC] [(0.1,0.1),(1.0,1.0),(0.2,0.25)] EXEC TIME=0.118(sec) yes In order to handle rules, it is necessary to prepare a ﬁle like in (A4-1). In (A4-2), inf and sup information is produced according to attrib2.pl. Program certain extracts possible implications belonging to D-GC class in (A4-3). As an additional information, minsup, maxsup, minacc, maxacc, mincov and maxcov are sequentially displayed. Program possible extracts possible implications belonging to I-GC or M A classes in (A4-4). Table 9 shows each execution time for three N ISs. Here, the order is sequentially A, B, C and D for the decision attribute {E}, and the decision attribute value is 1. This execution time depends upon the number of such object x that 1 ∈ P T (x, {E}). Table 9. Each execution time(sec) of translate3, possible and certain. N 5 denotes the number of such object x that 1 ∈ P T (x, {E}) N IS translate3 possible certain N IS2 0.178 0.115 0.054 0.173 0.086 0.039 N IS3 0.612 0.599 0.391 N IS4

N5 7 4 9

A Partition Model of Granular Computing

Yiyu Yao

Department of Computer Science, University of Regina
Regina, Saskatchewan, Canada S4S 0A2
[email protected]
http://www.cs.uregina.ca/~yyao

Abstract. There are two objectives of this chapter. One objective is to examine the basic principles and issues of granular computing. We focus on the tasks of granulation and computing with granules. From semantic and algorithmic perspectives, we study the construction, interpretation, and representation of granules, as well as principles and operations of computing and reasoning with granules. The other objective is to study a partition model of granular computing in a set-theoretic setting. The model is based on the assumption that a finite universe is granulated through a family of pairwise disjoint subsets. A hierarchy of granulations is modeled by the notion of the partition lattice. The model is developed by combining, reformulating, and reinterpreting notions and results from several related fields, including theories of granularity, abstraction and generalization (artificial intelligence), partition models of databases, coarsening and refining operations (evidential theory), set approximations (rough set theory), and the quotient space theory for problem solving.

1 Introduction

The basic ideas of granular computing, i.e., problem solving with different granularities, have been explored in many fields, such as artificial intelligence, interval analysis, quantization, rough set theory, Dempster-Shafer theory of belief functions, divide and conquer, cluster analysis, machine learning, databases, and many others [73]. There is a renewed and fast growing interest in granular computing [21, 30, 32, 33, 41, 43, 48, 50, 51, 58, 60, 70, 77]. The term “granular computing (GrC)” was first suggested by T.Y. Lin [74]. Although it may be difficult to have a precise and uncontroversial definition, we can describe granular computing from several perspectives. We may define granular computing by examining its major components and topics. Granular computing is a label of theories, methodologies, techniques, and tools that make use of granules, i.e., groups, classes, or clusters of a universe, in the process of problem solving [60]. That is, granular computing is used as an umbrella term to cover these topics that have been studied in various fields in isolation. By examining existing studies in a unified framework of granular computing and extracting their commonalities, one may be able to develop a general theory for problem solving. Alternatively, we may define granular computing by

identifying its unique way of problem solving. Granular computing is a way of thinking that relies on our ability to perceive the real world under various grain sizes, to abstract and consider only those things that serve our present interest, and to switch among different granularities. By focusing on different levels of granularities, one can obtain various levels of knowledge, as well as inherent knowledge structure. Granular computing is essential to human problem solving, and hence has a very significant impact on the design and implementation of intelligent systems.

The ideas of granular computing have been investigated in artificial intelligence through the notions of granularity and abstraction. Hobbs proposed a theory of granularity based on the observation that “[w]e look at the world under various grain sizes and abstract from it only those things that serve our present interests” [18]. Furthermore, “[o]ur ability to conceptualize the world at different granularities and to switch among these granularities is fundamental to our intelligence and flexibility. It enables us to map the complexities of the world around us into simpler theories that are computationally tractable to reason in” [18]. Giunchiglia and Walsh proposed a theory of abstraction [14]. Abstraction can be thought of as “the process which allows people to consider what is relevant and to forget a lot of irrelevant details which would get in the way of what they are trying to do”. They showed that the theory of abstraction captures and generalizes most previous work in the area. The notions of granularity and abstraction are used in many subfields of artificial intelligence. The granulation of time and space leads naturally to temporal and spatial granularities. They play an important role in temporal and spatial reasoning [3, 4, 12, 19, 54]. Based on granularity and abstraction, many authors studied some fundamental topics of artificial intelligence, such as, for example, knowledge representation [14, 75], theorem proving [14], search [75, 76], planning [24], natural language understanding [35], intelligent tutoring systems [36], machine learning [44], and data mining [16].

Granular computing recently received much attention from the computational intelligence community. The topic of fuzzy information granulation was first proposed and discussed by Zadeh in 1979 and further developed in the paper published in 1997 [71, 73]. In particular, Zadeh proposed a general framework of granular computing based on fuzzy set theory [73]. Granules are constructed and defined based on the concept of generalized constraints. Relationships between granules are represented in terms of fuzzy graphs or fuzzy if-then rules. The associated computation method is known as computing with words (CW) [72]. Although the formulation is different from the studies in artificial intelligence, the motivations and basic ideas are the same. Zadeh identified three basic concepts that underlie human cognition, namely, granulation, organization, and causation [73]. “Granulation involves decomposition of whole into parts, organization involves integration of parts into whole, and causation involves association of causes and effects.” [73] Yager and Filev argued that “human beings have been developed a granular view of the world” and “. . . objects with which mankind perceives, measures, conceptualizes and reasons are granular” [58]. Therefore, as


pointed out by Zadeh, “[t]he theory of fuzzy information granulation (TFIG) is inspired by the ways in which humans granulate information and reason with it.” [73] The necessity of information granulation and the simplicity derived from information granulation in problem solving are perhaps some of the practical reasons for the popularity of granular computing. In many situations, when a problem involves incomplete, uncertain, or vague information, it may be difficult to differentiate distinct elements and one is forced to consider granules [38–40]. In some situations, although detailed information may be available, it may be sufficient to use granules in order to have an efficient and practical solution. In fact, very precise solutions may not be required at all for many practical problems. It may also happen that the acquisition of precise information is too costly, and coarse-grained information reduces cost [73]. These considerations suggest a basic guiding principle of fuzzy logic: “Exploit the tolerance for imprecision, uncertainty and partial truth to achieve tractability, robustness, low solution cost and better rapport with reality” [73]. This principle offers a more practical philosophy for real world problem solving. Instead of searching for the optimal solution, one may search for good approximate solutions. One only needs to examine the problem at a finer granulation level with more detailed information when there is a need or benefit for doing so [60].

The popularity of granular computing is also due to the theory of rough sets [38, 39]. As a concrete theory of granular computing, the rough set model enables us to precisely define and analyze many notions of granular computing. The results provide an in-depth understanding of granular computing.

The objectives of this chapter are two-fold, based on investigations at two levels. Sections 2 and 3 focus on a high and abstract level development of granular computing, and Section 4 deals with a low and concrete level development by concentrating on a partition model of granular computing. The main results are summarized as follows.

In Section 2, we discuss in general terms the basic principles and issues of granular computing based on related studies, such as the theory of granularity, the theory of abstraction, and their applications. The tasks of granulation and computing with granules are examined and related to existing studies. We study the construction, interpretation, and representation of granules, as well as principles and operations of computing and reasoning with granules. In Section 3, we argue that granular computing is a way of thinking. This way of thinking is demonstrated based on three problem solving domains, i.e., concept formation, top-down programming, and top-down theorem proving. In Section 4, we study a partition model of granular computing in a set-theoretic setting. The model is based on the assumption that a finite universe is granulated through a family of pairwise disjoint subsets. A hierarchy of granulations is modeled by the notion of the partition lattice. Results from rough sets [38], quotient space theory [75, 76], belief functions [46], databases [27], data mining [31, 34], and power algebra [6] are reformulated, re-interpreted, refined, extended and combined for granular computing. We introduce two basic


operations called zooming-in and zooming-out operators. Zooming-in allows us to expand an element of the quotient universe into a subset of the universe, and hence reveals more detailed information. Zooming-out allows us to move to the quotient universe by ignoring some details. Computations in both universes can be connected through zooming operations.

2 Basic Issues of Granular Computing

Granular computing may be studied based on two related issues, namely granulation and computation [60]. The former deals with the construction, interpretation, and representation of granules, and the latter deals with computing and reasoning with granules. They can be further divided with respect to algorithmic and semantic aspects [60]. The algorithmic study concerns the procedures for constructing granules and related computation, and the semantic study concerns the interpretation and physical meaningfulness of various algorithms. Studies of both aspects are necessary and important. The results from the semantic study may provide not only interpretations and justifications for a particular granular computing model, but also guidelines that prevent possible misuses of the model. The results from the algorithmic study may lead to efficient and effective granular computing methods and tools.

2.1 Granulations

Granulation of a universe involves dividing the universe into subsets or grouping individual objects into clusters. A granule may be viewed as a subset of the universe, which may be either fuzzy or crisp. A family of granules containing every object in the universe is called a granulation of the universe, which provides a coarse-grained view of the universe. A granulation may consist of a family of either disjoint or overlapping granules. There are many granulations of the same universe. Different views of the universe can be linked together, and a hierarchy of granulations can be established. The notion of granulation can be studied in many different contexts. The granulation of the universe, particularly the semantics of granulation, is domain and application dependent. Nevertheless, one can still identify some domain independent issues [75]. Some such issues are described in more detail below.

Granulation Criteria. A granulation criterion deals with the semantic interpretation of granules and addresses the question of why two objects are put into the same granule. It is domain specific and relies on the available knowledge. In many situations, objects are grouped together based on their relationships, such as indistinguishability, similarity, proximity, or functionality [73]. One needs to build models to provide both semantic and operational interpretations of these notions. A model enables us to define formally and precisely the various notions involved, and to study systematically the meanings and rationalities of granulation criteria.


Granulation Structures. It is necessary to study the granulation structures derivable from various granulations of the universe. Two structures can be observed: the structure of individual granules and the structure of a granulation. Consider the case of crisp granulation. One can immediately define an order relation between granules based on set inclusion. In general, a large granule may contain small granules, and small granules may be combined to form a large granule. The order relation can be extended to different granulations. This leads to multi-level granulations in a natural hierarchical structure. Various hierarchical granulation structures have been studied by many authors [22, 36, 54, 75, 76].

Granulation Methods. From the algorithmic aspect, a granulation method addresses the problem of how to put two objects into the same granule. It is necessary to develop algorithms for constructing granules and granulations efficiently based on a granulation criterion. The construction process can be modeled as either top-down or bottom-up. In a top-down process, the universe is decomposed into a family of subsets; each subset can be further decomposed into smaller subsets. In a bottom-up process, a subset of objects can be grouped into a granule, and smaller granules can be further grouped into larger granules. Both processes lead naturally to a hierarchical organization of granules and granulations [22, 61].

Representation/Description of Granules. Another semantics-related issue is the interpretation of the results of a granulation method. Once constructed, it is necessary to describe, name and label granules using certain languages. This can be done in several ways. One may assign a name to a granule such that an element in the granule is an instance of the named category, as is done in classification [22]. One may also construct a certain type of center as the representative of a granule, as is done in information retrieval [45, 56]. Alternatively, one may select some typical objects from a granule as its representative. For example, in many search engines, the search results are clustered into granules, and a few titles and some terms can be used as the description of a granule [8, 17].

Quantitative Characteristics of Granules and Granulations. One can associate quantitative measures with granules and granulations to capture their features. Consider again the case of crisp granulation. The cardinality of a granule, or the Hartley information measure, can be used as a measure of the size or uncertainty of a granule [64]. The Shannon entropy measure can be used as a measure of the granularity of a partition [64].

These issues can be understood by examining a concrete example of granulation known as cluster analysis [2]. This can be done by simply changing granulation into clustering and granules into clusters. Clustering structures may be hierarchical or non-hierarchical, exclusive or overlapping. Typically, a similarity or distance function is used to define the relationships between objects. Clustering criteria may be defined based on the similarity or distance function and the required cluster structures. For example, one would expect strong similarities between objects in the same cluster, and weak similarities between objects


in different clusters. Many clustering methods have been proposed and studied, including the families of hierarchical agglomerative, hierarchical divisive, iterative partitioning, density search, factor analytic, clumping, and graph theoretic methods [1]. Cluster analysis can be used as an exploratory tool to interpret data and find regularities in data [2]. This requires the active participation of experts to interpret the results of clustering methods and judge their significance. A good representation of clusters and their quantitative characterizations may make the task of exploration much easier.
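
To make the quantitative characteristics mentioned above concrete, the following small Python sketch (not part of the original chapter; the toy universe, the function names, and the assumption that block probabilities are proportional to block sizes are our own) computes the Hartley measure of a crisp granule and the Shannon entropy of a partition:

```python
from math import log2

def hartley(granule):
    """Hartley measure of a granule: log2 of its cardinality."""
    return log2(len(granule))

def entropy(partition, universe):
    """Shannon entropy of a partition, with block probabilities |B|/|U|."""
    n = len(universe)
    return -sum((len(b) / n) * log2(len(b) / n) for b in partition)

U = {1, 2, 3, 4, 5, 6, 7, 8}
pi_coarse = [{1, 2, 3, 4}, {5, 6, 7, 8}]
pi_fine = [{1, 2}, {3, 4}, {5, 6}, {7, 8}]

print(hartley({1, 2, 3, 4}))   # 2.0 bits of uncertainty within the granule
print(entropy(pi_coarse, U))   # 1.0 -- coarser partition, lower granularity
print(entropy(pi_fine, U))     # 2.0 -- finer partition, higher granularity
```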

2.2 Computing and Reasoning with Granules

Computing and reasoning with granules depend on the previously discussed notion of granulations. They can be similarly studied from both the semantic and the algorithmic perspectives. One needs to design and interpret various methods based on the interpretation of granules and the relationships between granules, as well as to define and interpret the operations of granular computing. The two-level structure, the granule level and the granulation level, provides the inherent relationships that can be explored in problem solving. The granulated view summarizes available information and knowledge about the universe. As a basic task of granular computing, one can examine and explore further relationships between granules at a lower level, and relationships between granulations at a higher level. The relationships include closeness, dependency, and association of granules and granulations [43]. Such relationships may not hold fully, and certain measures can be employed to quantify the degree to which the relationships hold [64]. This makes it possible to extract, analyze and organize information and knowledge through relationships between granules and between granulations [62, 63]. The problem of computing and reasoning with granules is domain and application dependent. Some general domain independent principles and issues are listed below.

Mappings between Different Levels of Granulations. In the granulation hierarchy, the connections between different levels of granulations can be described by mappings. Giunchiglia and Walsh view an abstraction as a mapping between a pair of formal systems in the development of a theory of abstraction [14]. One system is referred to as the ground space, and the other system is referred to as the abstract space. At each level of granulation, a problem is represented with respect to the granularity of the level. The mapping links different representations of the same problem at different levels of detail. In general, one can classify and study different types of granulations by focusing on the properties of the mappings [14].

Granularity Conversion. A basic task of granular computing is to change views with respect to different levels of granularity. As we move from one level of detail to another, we need to convert the representation of a problem accordingly [12, 14]. A move to a more detailed view may reveal information that otherwise cannot be seen, and a move to a simpler view can improve the


high level understanding by omitting irrelevant details of the problem [12, 14, 18, 19, 73, 75, 76]. The change between grain-sized views may be metaphorically stated as the change between the forest and the trees.

Property Preservation. Granulation allows different representations of the same problem at different levels of detail. It is naturally expected that the same problem must be consistently represented [12]. A granulation and its related computing methods are meaningful only if they preserve certain desired properties [14, 30, 75]. For example, Zhang and Zhang studied the “false-preserving” property, which states that if a coarse-grained space has no solution for a problem then the original fine-grained space has no solution [75, 76]. Such a property can be exploited to improve the efficiency of problem solving: a failure in the coarse-grained space eliminates the need for a more detailed study in the fine-grained space. One may require that the structure of a solution in a coarse-grained space is similar to the solution in a fine-grained space. Such a property is used in top-down problem solving techniques. More specifically, one starts with a sketched solution and successively refines it into a full solution. In the context of hierarchical planning, one may impose similar properties, such as the upward solution property, the downward solution property, the monotonicity property, etc. [24].

Operators. The relationships between granules at different levels and the conversion of granularity can be precisely defined by operators [12, 36]. They serve as the basic building blocks of granular computing. There are at least two types of operators that can be defined. One type deals with the shift from a fine granularity to a coarse granularity. A characteristic of such an operator is that it will discard certain details, which makes distinct objects no longer differentiable. Depending on the context, many interpretations and definitions are available, such as abstraction, simplification, generalization, coarsening, zooming-out, etc. [14, 18, 19, 36, 46, 66, 75]. The other type deals with the change from a coarse granularity to a fine granularity. A characteristic of such an operator is that it will provide more details, so that a group of objects can be further classified. They can be defined and interpreted differently, such as articulation, specification, expanding, refining, zooming-in, etc. [14, 18, 19, 36, 46, 66, 75]. Other types of operators may also be defined. For example, with granulation, one may not be able to exactly characterize an arbitrary subset of a fine-grained universe in a coarse-grained universe. This leads to the introduction of approximation operators in rough set theory [39, 59].

The notion of granulation describes our ability to perceive the real world under various grain sizes, and to abstract and consider only those things that serve our present interest. Granular computing methods describe our ability to switch among different granularities in problem solving. Detailed and domain specific methods can be developed by elaborating these issues with explicit reference to an application. For example, concrete domain specific conversion methods and operators can be defined. In spite of the differences between various methods, they are all governed by the same underlying principles of granular computing.
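
The following Python sketch illustrates how the false-preserving property can be used in practice. It is only an illustration under our own assumptions: the toy search problem, the block-level necessary condition, and all names are hypothetical and are not taken from the cited works.

```python
def coarse_to_fine_search(blocks, predicate, necessary):
    """Search block by block; 'necessary' is a block-level test that must
    hold whenever the block contains a solution (false-preserving: if it
    fails for every block, the fine-grained universe has no solution)."""
    for block in blocks:
        if not necessary(block):      # no solution can hide in this block
            continue                  # prune: skip the detailed search
        for x in block:               # refine only the promising blocks
            if predicate(x):
                return x
    return None

# Toy problem: find x with x * x == 1369 among 0..9999, granulated into
# blocks of 100 consecutive integers.  For this monotone predicate a block
# can contain a solution only if 1369 lies between the squares of its ends.
blocks = [range(i, i + 100) for i in range(0, 10000, 100)]
hit = coarse_to_fine_search(
    blocks,
    predicate=lambda x: x * x == 1369,
    necessary=lambda b: b[0] ** 2 <= 1369 <= b[-1] ** 2,
)
print(hit)  # 37
```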

3 Granular Computing as a Way of Thinking

The underlying ideas of granular computing have been used either explicitly or implicitly for solving a wide diversity of problems. Their effectiveness and merits may be difficult to study and analyze by formal proofs alone. They may be judged based on the powerful and yet imprecise and subjective tools of our experience, intuition, reflections and observations [28]. As pointed out by Leron [28], a good way of activating these tools is to carry out some case studies. For such a purpose, the general ideas, principles, and methodologies of granular computing are further examined with respect to several different fields in the rest of this section. It should be noted that analytical and experimental results on the effectiveness of granular computing in specific domains, though not discussed in this chapter, are available [20, 24, 75].

3.1 Concept Formation

From a philosophical point of view, granular computing can be understood as a way of thinking in terms of the notion of concepts that underlies human knowledge. Every concept is understood as a unit of thought consisting of two parts, the intension and the extension of the concept [9, 52, 53, 55, 57]. The intension (comprehension) of a concept consists of all properties or attributes that are valid for all those objects to which the concept applies. The extension of a concept is the set of objects or entities which are instances of the concept. All objects in the extension have the same properties that characterize the concept. In other words, the intension of a concept is an abstract description of the common features or properties shared by elements in the extension, and the extension consists of concrete examples of the concept. A concept is thus described jointly by its intension and extension. This formulation enables us to study concepts in a logic setting in terms of intensions and also in a set-theoretic setting in terms of extensions. The descriptions of granules characterize concepts from the intension point of view, while granules themselves characterize concepts from the extension point of view. Through the connections between extensions of concepts, one may establish relationships between concepts [62, 63]. In characterizing human knowledge, one needs to consider two topics, namely, context and hierarchy [42, 47]. Knowledge is contextual and hierarchical. A context in which concepts are formed provides a meaningful interpretation of the concepts. Knowledge is organized in a tower or a partial ordering. The base-level, or first-level, concepts are the most fundamental concepts, and higher-level concepts depend on lower-level concepts. To some extent, granulation and the inherent hierarchical granulation structures naturally reflect the way in which human knowledge is organized. The construction, interpretation, and description of granules and granulations are of fundamental importance in the understanding, representation, organization and synthesis of data, information, and knowledge.

3.2 Top-Down Programming

Top-down programming is an effective technique for dealing with the complex problem of programming; it is based on the notions of structured programming and stepwise refinement [26]. The principles and characteristics of the top-down design and stepwise refinement, as discussed by Ledgard, Gueras and Nagin [26], provide a convincing demonstration that granular computing is a way of thinking. According to Ledgard, Gueras and Nagin [26], the top-down programming approach has the following characteristics:

Design in Levels. A level consists of a set of modules. At higher levels, only a brief description of a module is provided. The details of the module are to be refined, divided into smaller modules, and developed in lower levels.

Initial Language Independence. The high-level representations at initial levels focus on expressions that are relevant to the problem solution, without explicit reference to machine and language dependent features.

Postponement of Details to Lower Levels. The initial levels concern critical broad issues and the structure of the problem solution. The details, such as the choice of specific algorithms and data structures, are postponed to the lower, implementation levels.

Formalization of Each Level. Before proceeding to a lower level, one needs to obtain a formal and precise description of the current level. This will ensure a full understanding of the structure of the current sketched solution.

Verification of Each Level. The sketched solution at each level must be verified, so that errors pertinent to the current level will be detected.

Successive Refinements. Top-down programming is a successive refinement process. Starting from the top level, each level is refined, formalized, and verified until one obtains a full program.

In terms of granular computing, program modules correspond to granules, and the levels of the top-down programming correspond to different granularities. One can immediately see that those characteristics also hold for granular computing in general.

3.3 Top-Down Theorem Proving

Another demonstration of granular computing as a way of thinking is the approach of top-down theorem proving, which is used by computer systems and human experts. The PROLOG interpreter basically employs a top-down, depth-first search strategy to solve problems through theorem proving [5]. It has also been suggested that the top-down approach is effective for developing, communicating and writing mathematical proofs [13, 14, 25, 28]. PROLOG is a logic programming language widely used in artificial intelligence. It is based on first-order predicate logic and models problem solving as theorem proving [5]. A PROLOG program consists of a set of facts and rules


in the form of Horn clauses that describe the objects and relations in a problem domain. The PROLOG interpreter answers a query, referred to as a goal, by finding out whether the query is a logical consequence of the facts and rules of a PROLOG program. The inference is performed in a top-down, left-to-right, depth-first manner. A query is a sequence of one or more goals. At the top level, the leftmost goal is reduced to a sequence of subgoals to be tried by using a clause whose head unifies with the leftmost goal. The PROLOG interpreter then continues by trying to reduce the leftmost goal of the new sequence of goals. Eventually the leftmost goal is satisfied by a fact, and the second leftmost goal is tried in the same manner. Backtracking is used when the interpreter fails to find a unification that solves a goal, so that other clauses can be tried. A proof found by the PROLOG interpreter can be expressed naturally in a hierarchical structure, with the proofs of subgoals as the children of a goal. In the process of reducing a goal to a sequence of subgoals, one obtains more details of the proof. The strategy can be applied to general theorem proving. This may be carried out by abstracting the goal, proving its abstracted version and then using the structure of the resulting proof to help construct the proof of the original goal [14].

By observing the systematic way of top-down programming, some authors suggest that a similar approach can be used in developing, teaching and communicating mathematical proofs [13, 28]. Leron proposed a structured method for presenting mathematical proofs [28]. The main objective is to increase the comprehensibility of mathematical presentations and, at the same time, retain their rigor. The traditional linear fashion presents a proof step-by-step from hypotheses to conclusion. In contrast, the structured method arranges the proof in levels and proceeds in a top-down manner. Like the top-down, step-wise refinement programming approach, a level consists of short autonomous modules, each embodying one major idea of the proof to be further developed in the subsequent levels. The top level is a very general description of the main line of the proof. The second level elaborates on the generalities of the top level by supplying proofs of unsubstantiated statements, details of general descriptions, and so on. For some more complicated tasks, the second level only gives brief descriptions and the details are postponed to the lower levels. The process continues by supplying more details of the higher levels until a complete proof is reached. Such a proof development procedure is similar to the strategy used by the PROLOG interpreter. A complicated proof task is successively divided into smaller and easier subtasks. The inherent structures of those tasks not only improve the comprehensibility of the proof, but also increase our understanding of the problem.

Lamport proposed a proof style, a refinement of natural deduction, for developing and writing structured proofs [25]. It is also based on hierarchical structuring, and divides proofs into levels. By using a numbering scheme to label various parts of a proof, one can explicitly show the structure of the proof. Furthermore, such a structure can be conveniently expressed using a computer-based hypertext system. One can concentrate on a particular level in the structure


and suppress lower level details. In principle, the top-down design and stepwise refinement strategy of programming can be applied in developing proofs to eliminate possible errors.

3.4 Granular Computing Approach to Problem Solving

In their book on research methods, Graziano and Raulin make a clear separation of research process and content [11]. They state, “... the basic processes and the systematic way of studying problems are common elements of science, regardless of each discipline’s particular subject matter. It is the process and not the content that distinguishes science from other ways of knowing, and it is the content – the particular phenomena and fact of interest – that distinguishes one scientific discipline from another.” [11] From the discussion of the previous examples, we can make a similar separation of the granular computing process and content (i.e., domains of applications). The systematic way of granular computing is generally applicable to different domains, and can be studied based on the basic issues and principles discussed in the last section. In general, the granular computing approach can be divided into top-down and bottom-up modes. They present two directions of switching between levels of granularity. Concept formation can be viewed as a combination of top-down and bottom-up. One can combine specific concepts to produce a general concept in a bottom-up manner, and divide a concept into more specific subconcepts in a top-down manner. Top-down programming and top-down theorem proving are typical examples of top-down approaches. Independent of the modes, step-wise (successive) refinement plays an important role. One needs to fully understand all notions of a particular level before moving up or down to another level. From the case studies, we can abstract some common features by ignoring irrelevant formulation details. It is easy to arrive at the conclusion that granular computing is a way of thinking and a philosophy for problem solving. At an abstract level, it captures and reflects our ability to solve a problem by focusing on different levels of detail, and to move easily between levels at various stages. The principles of granular computing are the same and applicable to many domains.

4 A Partition Model

A partition model is developed by focusing on the basic issues of granular computing. The partition model has been studied extensively in rough set theory [39].

4.1 Granulation by Partition and Partition Lattice

A simple granulation of the universe can be defined based on an equivalence relation or a partition. Let U denote a finite and non-empty set called the universe. Let E ⊆ U × U denote an equivalence relation on U, where × denotes the Cartesian product of sets. That is, E is reflexive, symmetric, and transitive.


The equivalence relation E divides the set U into a family of disjoint subsets called the partition of the universe induced by E and denoted by πE = U/E. The subsets in a partition are also called blocks. Conversely, given a partition π of the universe, one can uniquely deﬁne an equivalence relation Eπ : xEπ y ⇐⇒ x and y are in the same block of the partition π.

(1)

Due to the one-to-one relationship between equivalence relations and partitions, one may use them interchangeably. One can define an order relation on the set of all partitions of U, or equivalently the set of all equivalence relations on U. A partition π1 is a refinement of another partition π2, or equivalently, π2 is a coarsening of π1, denoted by π1 ⪯ π2, if every block of π1 is contained in some block of π2. In terms of equivalence relations, we have Eπ1 ⊆ Eπ2. The refinement relation ⪯ is a partial order, namely, it is reflexive, antisymmetric, and transitive. It defines a partition lattice Π(U). Given two partitions π1 and π2, their meet, π1 ∧ π2, is the largest partition that is a refinement of both π1 and π2; their join, π1 ∨ π2, is the smallest partition that is a coarsening of both π1 and π2. The blocks of the meet are all non-empty intersections of a block from π1 and a block from π2. The blocks of the join are the smallest subsets which are exactly a union of blocks from π1 and blocks from π2. In terms of equivalence relations, for two equivalence relations R1 and R2, their meet is defined by R1 ∩ R2, and their join is defined by (R1 ∪ R2)*, the transitive closure of the relation R1 ∪ R2. The lattice Π(U) contains all possible partition-based granulations of the universe. The refinement partial order on partitions provides a natural hierarchy of granulations. The partition model of granular computing is based on the partition lattice or subsystems of the partition lattice.
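
These lattice operations can be computed directly. The following Python sketch is our own illustration (blocks are represented as frozensets): the meet is formed by intersecting blocks, the join by repeatedly merging overlapping blocks, which corresponds to the transitive closure of the union of the two relations.

```python
def meet(pi1, pi2):
    """Meet of two partitions: all non-empty intersections of their blocks."""
    return {b1 & b2 for b1 in pi1 for b2 in pi2 if b1 & b2}

def join(pi1, pi2):
    """Join of two partitions: merge overlapping blocks until none overlap
    (the transitive closure of the union of the two equivalence relations)."""
    blocks = [set(b) for b in pi1 | pi2]
    merged = True
    while merged:
        merged = False
        for i in range(len(blocks)):
            for j in range(i + 1, len(blocks)):
                if blocks[i] & blocks[j]:
                    blocks[i] |= blocks.pop(j)
                    merged = True
                    break
            if merged:
                break
    return {frozenset(b) for b in blocks}

def refines(pi1, pi2):
    """pi1 refines pi2 if every block of pi1 lies inside a block of pi2."""
    return all(any(b1 <= b2 for b2 in pi2) for b1 in pi1)

pi1 = {frozenset({1, 2}), frozenset({3, 4}), frozenset({5})}
pi2 = {frozenset({1, 2, 3}), frozenset({4, 5})}
print(meet(pi1, pi2))   # blocks {1,2}, {3}, {4}, {5}
print(join(pi1, pi2))   # single block {1,2,3,4,5}
print(refines(meet(pi1, pi2), pi1), refines(pi1, join(pi1, pi2)))  # True True
```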

4.2 Partition Lattice in an Information Table

Information tables provide a simple and convenient way to represent a set of objects by a ﬁnite set of attributes [39, 70]. Formally, an information table is deﬁned as the following tuple: (U, At, {Va | a ∈ At}, {Ia | a ∈ At}),

(2)

where U is a finite set of objects called the universe, At is a finite set of attributes or features, Va is a set of values for each attribute a ∈ At, and Ia : U −→ Va is an information function for each attribute a ∈ At. A database is an example of an information table. Information tables give a specific and concrete interpretation of the equivalence relations used in granulation. With respect to an attribute a ∈ At, an object x ∈ U takes only one value from the domain Va of a. Let a(x) = Ia(x) denote the value of x on a. By extending to a subset of attributes A ⊆ At, A(x) denotes the value of x on the attributes in A, which can be viewed as a vector with each a(x), a ∈ A, as one of its components. For an attribute a ∈ At, an equivalence relation Ea is given by: for x, y ∈ U, xEa y ⇐⇒ a(x) = a(y). (3)


Two objects are considered to be indiscernible, in the view of a single attribute a, if and only if they have exactly the same value. For a subset of attributes A ⊆ At, an equivalence relation EA is defined by:

xEA y ⇐⇒ A(x) = A(y) ⇐⇒ (∀a ∈ A) a(x) = a(y), that is, EA = ⋂_{a∈A} Ea.   (4)

With respect to all attributes in A, x and y are indiscernible if and only if they have the same value for every attribute in A. The empty set ∅ produces the coarsest relation, i.e., E∅ = U × U. If the entire attribute set is used, one obtains the finest relation EAt. Moreover, if no two objects have the same description, EAt becomes the identity relation. The algebra ({EA}A⊆At, ∩) is a lower semilattice with the zero element EAt [37]. The family of partitions Π(At(U)) = {πEA | A ⊆ At} has been studied in databases [27]. In fact, Π(At(U)) is a lattice in its own right. While the meet of Π(At(U)) is the same as the meet of Π(U), their joins are different [27]. The lattice Π(At(U)) can be used to develop a partition model of databases. A useful result of the constructive definition of the equivalence relation is that one can associate a precise description with each equivalence class. This is done through the introduction of a decision logic language DL in an information table [39, 43, 65]. In the language DL, an atomic formula is given by a = v, where a ∈ At and v ∈ Va. If φ and ψ are formulas, then so are ¬φ, φ ∧ ψ, φ ∨ ψ, φ → ψ, and φ ≡ ψ. The semantics of the language DL can be defined in Tarski’s style through the notions of a model and satisfiability. The model is an information table, which provides an interpretation for symbols and formulas of DL. The satisfiability of a formula φ by an object x, written x |= φ, is given by the following conditions:

(1) x |= a = v iff a(x) = v,
(2) x |= ¬φ iff not x |= φ,
(3) x |= φ ∧ ψ iff x |= φ and x |= ψ,
(4) x |= φ ∨ ψ iff x |= φ or x |= ψ,
(5) x |= φ → ψ iff x |= ¬φ ∨ ψ,
(6) x |= φ ≡ ψ iff x |= φ → ψ and x |= ψ → φ.

If φ is a formula, the set m(φ) defined by:

m(φ) = {x ∈ U | x |= φ},   (5)

is called the meaning of the formula φ. An equivalence class of EA can be described by a formula of the form ⋀_{a∈A} a = va, where va ∈ Va. Furthermore, [x]EA = m(⋀_{a∈A} a = a(x)), where a(x) is the value of x on attribute a.
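
The following sketch illustrates, on a hypothetical information table of our own, how a subset of attributes A induces the partition of equivalence classes of EA and how each class can be labelled by a conjunctive formula of the decision logic language:

```python
from collections import defaultdict

# A toy information table: objects described by a finite set of attributes.
table = {
    "o1": {"colour": "red",  "shape": "round",  "size": "small"},
    "o2": {"colour": "red",  "shape": "round",  "size": "large"},
    "o3": {"colour": "blue", "shape": "square", "size": "small"},
    "o4": {"colour": "red",  "shape": "square", "size": "small"},
}

def partition_by(table, attrs):
    """Partition induced by E_A: objects with equal values on attrs share a block."""
    blocks = defaultdict(set)
    for obj, row in table.items():
        key = tuple(row[a] for a in attrs)   # A(x), the value vector on A
        blocks[key].add(obj)
    return dict(blocks)

def describe(key, attrs):
    """Decision-logic description of a block: the conjunction a1 = v1 ∧ a2 = v2 ∧ ..."""
    return " ∧ ".join(f"{a} = {v}" for a, v in zip(attrs, key))

A = ["colour", "shape"]
for key, block in partition_by(table, A).items():
    print(describe(key, A), "->", sorted(block))
# colour = red ∧ shape = round   -> ['o1', 'o2']
# colour = blue ∧ shape = square -> ['o3']
# colour = red ∧ shape = square  -> ['o4']
```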

4.3 Mappings between Two Universes

Given an equivalence relation E on U, we obtain a coarse-grained universe U/E called the quotient set of U. The relation E can be conveniently represented by a mapping from U to 2^U, where 2^U is the power set of U. The mapping [·]E : U −→ 2^U is given by: [x]E = {y ∈ U | xEy}.

(6)

The equivalence class [x]E containing x plays dual roles. It is a subset of U and an element of U/E. That is, in U, [x]E is a subset of objects, and in U/E, [x]E is considered to be a whole instead of many individuals [61]. In cluster analysis, one typically associates a name with a cluster such that elements of the cluster are instances of the named category or concept [22]. Lin [29], following Dubois and Prade [10], explicitly used [x]E for representing a subset of U and Name([x]E) for representing an element of U/E. In the subsequent discussion, we use this convention. With a partition or an equivalence relation, we have two views of the same universe, a coarse-grained view U/E and a detailed view U. Their relationship can be defined by a pair of mappings, r : U/E −→ U and c : U −→ U/E. More specifically, we have: r(Name([x]E)) = [x]E, c(x) = Name([x]E).

(7)

A concept, represented as a subset of a universe, is described differently under different views. As we move from one view to the other, we change our perceptions and representations of the same concept. In order to achieve this, we define zooming-in and zooming-out operators based on the pair of mappings [66].

4.4 Zooming-in Operator for Refinement

Formally, zooming-in can be defined by an operator ω : 2^{U/E} −→ 2^U. Shafer referred to the zooming-in operation as refining [46]. For a singleton subset {Xi} ∈ 2^{U/E}, we define [10]:

ω({Xi}) = [x]E, where Xi = Name([x]E).   (8)

For an arbitrary subset X ⊆ U/E, we have:

ω(X) = ⋃_{Xi∈X} ω({Xi}).   (9)

By zooming-in on a subset X ⊆ U/E, we obtain a unique subset ω(X) ⊆ U . The set ω(X) ⊆ U is called the reﬁnement of X.


The zooming-in operation has the following properties [46]:

(zi1) ω(∅) = ∅,
(zi2) ω(U/E) = U,
(zi3) ω(X^c) = (ω(X))^c,
(zi4) ω(X ∩ Y) = ω(X) ∩ ω(Y),
(zi5) ω(X ∪ Y) = ω(X) ∪ ω(Y),
(zi6) X ⊆ Y ⇐⇒ ω(X) ⊆ ω(Y),

where c denotes the set complement operator, the set-theoretic operators on the left hand side apply to elements of 2^{U/E}, and the same operators on the right hand side apply to elements of 2^U. From these properties, it can be seen that any relationship between subsets observed under the coarse-grained view also holds under the detailed view, and vice versa. For example, in addition to (zi6), we have X ∩ Y = ∅ if and only if ω(X) ∩ ω(Y) = ∅, and X ∪ Y = U/E if and only if ω(X) ∪ ω(Y) = U. Therefore, conclusions drawn based on the coarse-grained elements in U/E can be carried over to the universe U.

4.5 Zooming-out Operators for Approximation

The change of views from U to U/E is called a zooming-out operation. By zooming-out, a subset of the universe is considered as a whole rather than as many individuals. This leads to a loss of information. Zooming-out on a subset A ⊆ U may induce an inexact representation in the coarse-grained universe U/E. The theory of rough sets focuses on the zooming-out operation. For a subset A ⊆ U, we have a pair of lower and upper approximations in the coarse-grained universe [7, 10, 59]:

\underline{apr}(A) = {Name([x]E) | x ∈ U, [x]E ⊆ A},
\overline{apr}(A) = {Name([x]E) | x ∈ U, [x]E ∩ A ≠ ∅}.   (10)

The expression of the lower and upper approximations as subsets of U/E, rather than as subsets of U, has only been considered by a few researchers in the rough set community [7, 10, 30, 59, 69]. On the other hand, such notions have been considered in other contexts. Shafer [46] introduced these notions in the study of belief functions and called them the inner and outer reductions of A ⊆ U in U/E. The connections between the notions introduced by Pawlak in rough set theory and those introduced by Shafer in belief function theory have been pointed out by Dubois and Prade [10]. The expression of approximations in terms of elements of U/E clearly shows the representation of A in the coarse-grained universe U/E. By zooming-out, we only obtain an approximate representation. The lower and upper approximations satisfy the following properties [46, 69]:

(zo1) \underline{apr}(∅) = \overline{apr}(∅) = ∅,
(zo2) \underline{apr}(U) = \overline{apr}(U) = U/E,
(zo3) \underline{apr}(A) = (\overline{apr}(A^c))^c, \overline{apr}(A) = (\underline{apr}(A^c))^c,
(zo4) \underline{apr}(A ∩ B) = \underline{apr}(A) ∩ \underline{apr}(B), \overline{apr}(A ∩ B) ⊆ \overline{apr}(A) ∩ \overline{apr}(B),
(zo5) \underline{apr}(A) ∪ \underline{apr}(B) ⊆ \underline{apr}(A ∪ B), \overline{apr}(A ∪ B) = \overline{apr}(A) ∪ \overline{apr}(B),
(zo6) A ⊆ B =⇒ [\underline{apr}(A) ⊆ \underline{apr}(B), \overline{apr}(A) ⊆ \overline{apr}(B)],
(zo7) \underline{apr}(A) ⊆ \overline{apr}(A).


According to properties (zo4)-(zo6), relationships between subsets of U may not be carried over to U/E through the zooming-out operation. It may happen that A ∩ B ≠ ∅, but \underline{apr}(A ∩ B) = ∅, or A ∪ B ≠ U, but \overline{apr}(A ∪ B) = U/E. Similarly, we may have A ≠ B, but \underline{apr}(A) = \underline{apr}(B) and \overline{apr}(A) = \overline{apr}(B). Nevertheless, we can draw the following inferences:

(i1) \underline{apr}(A) ∩ \underline{apr}(B) ≠ ∅ =⇒ A ∩ B ≠ ∅,
(i2) \overline{apr}(A) ∩ \overline{apr}(B) = ∅ =⇒ A ∩ B = ∅,
(i3) \underline{apr}(A) ∪ \underline{apr}(B) = U/E =⇒ A ∪ B = U,
(i4) \overline{apr}(A) ∪ \overline{apr}(B) ≠ U/E =⇒ A ∪ B ≠ U.

If \underline{apr}(A) ∩ \underline{apr}(B) ≠ ∅, by property (zo4) we know that \underline{apr}(A ∩ B) ≠ ∅. We say that A and B have a non-empty overlap, and hence are related, in U/E. By (i1), A and B must have a non-empty overlap, and hence are related, in U. Similar explanations can be associated with the other inference rules. The approximation of a set can be easily extended to the approximation of a partition, also called a classification [39]. Let π = {X1, . . . , Xn} be a partition of the universe U. Its approximations are a pair of families of sets, the family of lower approximations \underline{apr}(π) = {\underline{apr}(X1), . . . , \underline{apr}(Xn)} and the family of upper approximations \overline{apr}(π) = {\overline{apr}(X1), . . . , \overline{apr}(Xn)}.
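
A small Python illustration of the zooming operators and approximations defined above is given below; the toy universe, the naming scheme for blocks, and the function names are our own choices, not part of the original text:

```python
# Quotient universe U/E: block names mapped to the blocks they refine to.
names = {"X0": frozenset({1, 2}), "X1": frozenset({3, 4}), "X2": frozenset({5, 6})}

def zoom_in(X, names):
    """ω(X): expand a set of block names into the union of their blocks."""
    return set().union(*(names[x] for x in X))

def lower(A, names):
    """Lower approximation: names of blocks entirely contained in A."""
    return {x for x, block in names.items() if block <= A}

def upper(A, names):
    """Upper approximation: names of blocks that intersect A."""
    return {x for x, block in names.items() if block & A}

A = {1, 2, 3}
print(lower(A, names))                    # {'X0'}
print(upper(A, names))                    # {'X0', 'X1'}
print(zoom_in(lower(A, names), names))    # {1, 2}          (zio1: inside A)
print(zoom_in(upper(A, names), names))    # {1, 2, 3, 4}    (zio1: contains A)
print(lower(zoom_in({"X0", "X2"}, names), names) == {"X0", "X2"})  # True (zio2)
```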

4.6 Classical Rough Set Approximations by a Combination of Zooming-out and Zooming-in

Traditionally, the lower and upper approximations of a set are also subsets of the same universe. One can easily obtain the classical definition by performing a combination of zooming-out and zooming-in operators as follows [66]:

ω(\underline{apr}(A)) = ⋃_{Xi ∈ \underline{apr}(A)} ω({Xi}) = ⋃ {[x]E | x ∈ U, [x]E ⊆ A},
ω(\overline{apr}(A)) = ⋃_{Xi ∈ \overline{apr}(A)} ω({Xi}) = ⋃ {[x]E | x ∈ U, [x]E ∩ A ≠ ∅}.   (11)


For a subset X ⊆ U/E we can zoom-in and obtain a subset ω(X) ⊆ U, and then zoom-out to obtain a pair of subsets \underline{apr}(ω(X)) and \overline{apr}(ω(X)). The compositions of the zooming-in and zooming-out operations have the following properties [46]: for X ⊆ U/E and A ⊆ U,

(zio1) ω(\underline{apr}(A)) ⊆ A ⊆ ω(\overline{apr}(A)),
(zio2) \underline{apr}(ω(X)) = \overline{apr}(ω(X)) = X.

w(X) ⊆ A ⇐⇒ X ⊆ apr(A), A ⊆ ω(X) ⇐⇒ apr(A) ⊆ X.

Property (1) can be understood as follows. Any subset X ⊆ U/E, whose reﬁnement is a subset of A, is a subset of the lower approximation of A. Only a subset of the lower approximation of A has a reﬁnement that is a subset of A. It follows that apr(A) is the largest subset of U/E whose reﬁnement is contained in A, and apr(A) is the smallest subset of U/E whose reﬁnement containing A. 4.7

Consistent Computations in the Two Universes

Computation in the original universe is normally based on elements of U . When zooming-out to the coarse-grained universe U/E, we need to ﬁnd the consistent computational methods. The zooming-in operator can be used for achieving this purpose. Suppose f : U −→ is a real-valued function on U . One can lift the function f to U/E by performing set-based computations [67]. The lifted function f + is a set-valued function that maps an element of U/E to a subset of real numbers. More speciﬁcally, for an element Xi ∈ U/E, the value of function is given by: f + (Xi ) = {f (x) | x ∈ ω({Xi })}.

(12)

The function f + can be changed into a single-valued function f0+ in a number of ways. For example, Zhang and Zhang [75] suggested the following methods: f0+ (Xi ) = min f + (Xi ) = min{f (x) | x ∈ ω({Xi })}, f0+ (Xi ) = max f + (Xi ) = max{f (x) | x ∈ ω({Xi })}, f0+ (Xi ) = averagef + (Xi ) = average{f (x) | x ∈ ω({Xi })}.

(13)

The minimum, maximum, and average deﬁnitions may be regarded as the most permissive, the most optimistic, and the balanced view in moving functions from U to U/E. More methods can be found in the book by Zhang and Zhang [75]. For a binary operation ◦ on U , a binary operation ◦+ on U/E is deﬁned by [6, 67]: Xi ◦+ Xj = {xi ◦ xj | xi ∈ ω({Xi }), xj ∈ ω({Xj })}, (14)

A Partition Model of Granular Computing

249

In general, one may lift any operation p on U to an operation p+ on U/E, called the power operation of p. Suppose p : U n −→ U (n ≥ 1) is an n-ary operation on U . Its power operation p+ : (U/E)n −→ 2U is deﬁned by [6]: p+ (X0 , . . . , Xn−1 ) = {p(x0 , . . . , xn−1 ) | xi ∈ ω({Xi }) for i = 0, . . . , n − 1}, (15) for any X0 , . . . , Xn−1 ∈ U/E. This provides a universal-algebraic construction approach. For any algebra (U, p1 , . . . , pk ) with base set U and operations + p1 , . . . , pk , its quotient algebra is given by (U/E, p+ 1 , . . . , pk ). + The power operation p may carry some properties of p. For example, for a binary operation p : U 2 −→ U , if p is commutative and associative, p+ is commutative and associative, respectively. If e is an identity for some operation p, the set {e} is an identity for p+ . Many properties of p are not carried over by p+ . For instance, if a binary operation p is idempotent, i.e., p(x, x) = x, p+ may not be idempotent. If a binary operation g is distributive over p, g + may not be distributive over p+ . In some situations, we need to carry information from the quotient set U/E to U . This can be done through the zooming-out operators. A simple example is used to illustrate the basic idea. Suppose µ : 2U/E −→ [0, 1] is a set function on U/E. If µ satisﬁes the conditions: (i) (ii) (iii)

µ(∅) = 0, µ(U/E) = 1, X ⊆ Y =⇒ µ(X) ≤ µ(Y ),

µ is called a fuzzy measure [23]. Examples of fuzzy measures are probability functions, possibility and necessity functions, and belief and plausibility functions. Information about subsets in U can be obtained from µ on U/E and the zooming-out operation. For a subset A ⊆ U , we can deﬁne a pair of inner and outer fuzzy measures [68]: µ(A) = µ(apr(A)), µ(A) = µ(apr(A)).

(16)

They are fuzzy measures. If µ is a probability function, µ and µ are a pair of belief and plausibility functions [15, 49, 46, 68]. If µ is a belief function, µ is a belief function, and if µ is a plausibility function, µ is a plausibility [68].

5 Conclusion

Granular computing, as a way of thinking, has been explored in many fields. It captures and reflects our ability to perceive the world at different granularities and to change granularities in problem solving. In this chapter, the same approach is used to study granular computing itself at two levels. In the first part of the chapter, we consider the fundamental issues of granular computing in general


terms. The objective is to present a domain-independent way of thinking without details of any speciﬁc formulation. The second part of the chapter concretizes the high level investigations by considering a partition model of granular computing. To a large extent, the model is based on the theory of rough sets. However, results from other theories, such as the quotient space theory, belief functions, databases, and power algebras, are incorporated. In the development of diﬀerent research ﬁelds, each ﬁeld may develop its theories and methodologies in isolation. However, one may ﬁnd that these theories and methodologies share the same or similar underlying principles and only diﬀer in their formulation. It is evident that granular computing may be a basic principle that guides many problem solving methods. The results of rough set theory have drawn our attention to granular computing. On the other hand, the study of rough set theory in the wide context of granular computing may result in an in-depth understanding of rough set theory.

References 1. Aldenderfer, M.S., Blashﬁeld, R.K.: Cluster Analysis. Sage Publications, The International Professional Publishers, London (1984) 2. Anderberg, M.R.: Cluster Analysis for Applications. Academic Press, New York (1973) 3. Bettini, C., Montanari, A. (Eds.): Spatial and Temporal Granularity: Papers from the AAAI Workshop. Technical Report WS-00-08. The AAAI Press, Menlo Park, CA. (2000) 4. Bettini, C., Montanari, A.: Research issues and trends in spatial and temporal granularities. Annals of Mathematics and Artiﬁcial Intelligence 36 (2002) 1-4 5. Bratko, I.: PROLOG: Programming for Artiﬁcial Intelligence, Second edition. Addison-Wesley, New York (1990) 6. Brink, C.: Power structures. Algebra Universalis 30 (1993) 177-216 7. Bryniarski, E.: A calculus of rough sets of the ﬁrst order. Bulletin of the Polish Academy of Sciences, Mathematics 37 (1989) 71-77 8. de Loupy, C., Bellot, P., El-B`eze, M., Marteau, P.F.: Query expansion and classiﬁcation of retrieved documents. Proceedings of the Seventh Text REtrieval Conference (TREC-7) (1998) 382-389 9. Demri, S, Orlowska, E.: Logical analysis of indiscernibility. In: Incomplete Information: Rough Set Analysis, Orlowska, E. (Ed.). Physica-Verlag, Heidelberg (1998) 347-380 10. Dubois, D., Prade, P.: Fuzzy rough sets and rough fuzzy sets. International Journal of General Systems 17 (1990) 191-209 11. Graziano, A.M., Raulin, M.L.: Research Methods: A Process of Inquiry, 4th edition. Allyn and Bacon, Boston (2000) 12. Euzenat, J.: Granularity in relational formalisms - with application to time and space representation. Computational Intelligence 17 (2001) 703-737 13. Friske, M.: Teaching proofs: a lesson from software engineering. American Mathematical Monthly 92 (1995) 142-144 14. Giunchglia, F., Walsh, T.: A theory of abstraction. Artiﬁcial Intelligence 56 (1992) 323-390


15. Grzymala-Busse, J.W.: Rough-set and Dempster-Shafer approaches to knowledge acquisition under uncertainty – a comparison. Manuscript. Department of Computer Science, University of Kansas (1987) 16. Han, J., Cai, Y., Cercone, N.: Data-driven discovery of quantitative rules in data bases. IEEE Transactions on Knowledge and Data Engineering 5 (1993) 29-40 17. Hearst, M.A., Pedersen, J.O.: Reexamining the cluster hypothesis: Scatter/Gather on retrieval results. Proceedings of SIGIR’96 (1996) 76-84 18. Hobbs, J.R.: Granularity. Proceedings of the Ninth Internation Joint Conference on Artiﬁcial Intelligence (1985) 432-435 19. Hornsby, K.: Temporal zooming. Transactions in GIS 5 (2001) 255-272 20. Imielinski, T.: Domain abstraction and limited reasoning. Proceedings of the 10th International Joint Conference on Artiﬁcial Intelligence (1987) 997-1003 21. Inuiguchi, M., Hirano, S., Tsumoto, S. (Eds.): Rough Set Theory and Granular Computing. Springer, Berlin (2003) 22. Jardine, N., Sibson, R.: Mathematical Taxonomy. Wiley, New York (1971) 23. Klir, G.J., Folger, T.A.: Fuzzy Sets, Uncertainty, and Information. Prentice Hall, Englewood Cliﬀs (1988) 24. Knoblock, C.A.: Generating Abstraction Hierarchies: an Automated Approach to Reducing Search in Planning. Kluwer Academic Publishers, Boston (1993) 25. Lamport, L.: How to write a proof. American Mathematical Monthly 102 (1995) 600-608 26. Ledgard, H.F., Gueras, J.F., Nagin, P.A.: PASCAL with Style: Programming Proverbs. Hayden Book Company, Inc., Rechelle Park, New Jersey (1979) 27. Lee, T.T.: An information-theoretic analysis of relational databases – part I: data dependencies and information metric. IEEE Transactions on Software Engineering SE-13 (1987) 1049-1061 28. Leron, U.: Structuring mathematical proofs. American Mathematical Monthly 90 (1983) 174-185 29. Lin, T.Y.: Topological and fuzzy rough sets. In: Intelligent Decision Support: Handbook of Applications and Advances of the Rough Sets Theory, Slowinski, R. (Ed.). Kluwer Academic Publishers, Boston (1992) 287-304 30. Lin, T.Y.: Granular computing on binary relations I: data mining and neighborhood systems, II: rough set representations and belief functions. In: Rough Sets in Knowledge Discovery 1. Polkowski, L., Skowron, A. (Eds.). Physica-Verlag, Heidelberg (1998) 107-140 31. Lin, T.Y.: Generating concept hierarchies/networks: mining additional semantics in relational data. Advances in Knowledge Discovery and Data Mining, Proceedings of the 5th Paciﬁc-Asia Conference, Lecture Notes on Artiﬁcial Intelligence 2035 (2001) 174-185 32. Lin, T.Y.: Granular computing. Rough Sets, Fuzzy Sets, Data Mining, and Granular Computing, Proceedings of the 9th International Conference, Lecture Notes in Artiﬁcial Intelligence 2639 (2003) 16-24. 33. Lin. T.Y., Yao, Y.Y., Zadeh, L.A. (Eds.): Rough Sets, Granular Computing and Data Mining. Physica-Verlag, Heidelberg (2002) 34. Lin, T.Y., Zhong, N., Dong, J., Ohsuga, S.: Frameworks for mining binary relations in data. Rough sets and Current Trends in Computing, Proceedings of the 1st International Conference, Lecture Notes in Artiﬁcial Intelligence 1424 (1998) 387393 35. Mani, I.: A theory of granularity and its application to problems of polysemy and underspeciﬁcation of meaning. Principles of Knowledge Representation and Reasoning, Proceedings of the Sixth International Conference (1998) 245-255


36. McCalla, G., Greer, J., Barrie, J., Pospisil, P.: Granularity hierarchies. Computers and Mathematics with Applications 23 (1992) 363-375 37. Orlowska, E.: Logic of indiscernibility relations. Bulletin of the Polish Academy of Sciences, Mathematics 33 (1985) 475-485 38. Pawlak, Z.: Rough sets. International Journal of Computer and Information Sciences 11 (1982) 341-356. 39. Pawlak, Z.: Rough Sets: Theoretical Aspects of Reasoning about Data. Kluwer Academic Publishers, Boston (1991) 40. Pawlak, Z.: Granularity of knowledge, indiscernibility and rough sets. Proceedings of 1998 IEEE International Conference on Fuzzy Systems (1998) 106-110 41. Pedrycz, W.: Granular Computing: An Emerging Paradigm. Springer-Verlag, Berlin (2001) 42. Peikoﬀ, L.: Objectivism: the Philosophy of Ayn Rand. Dutton, New York (1991) 43. Polkowski, L., Skowron, A.: Towards adaptive calculus of granules. Proceedings of 1998 IEEE International Conference on Fuzzy Systems (1998) 111-116 44. Saitta, L., Zucker, J.-D.: Semantic abstraction for concept representation and learning. Proceedings of the Symposium on Abstraction, Reformulation and Approximation (1998) 103-120 http://www.cs.vassar.edu/∼ellman/sara98/papers/. retrieved on December 14, 2003. 45. Salton, G., McGill, M.: Introduction to Modern Information Retrieval. McGraw Hill, New York (1983) 46. Shafer, G.: A Mathematical Theory of Evidence. Princeton University Press, Princeton (1976) 47. Simpson, S.G.: What is foundations of mathematics? (1996). http://www.math.psu.edu/simpson/hierarchy.html. retrieved November 21, 2003. 48. Skowron, A.: Toward intelligent systems: calculi of information granules. Bulletin of International Rough Set Society 5 (2001) 9-30 49. Skowron, A., Grzymala-Busse, J.: From rough set theory to evidence theory. In: Advances in the Dempster-Shafer Theory of Evidence, Yager, R.R., Fedrizzi, M., Kacprzyk, J. (Eds.). Wiley, New York (1994) 193-236 50. Skowron, A., Stepaniuk, J.: Information granules and approximation spaces. Proceedings of 7th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (1998) 354-361 51. Skowron, A., Stepaniuk, J.: Information granules: towards foundations of granular computing. International Journal of Intelligent Systems 16 (2001) 57-85 52. Smith, E.E.: Concepts and induction. In: Foundations of Cognitive Science, Posner, M.I. (Ed.). The MIT Press, Cambridge (1989) 501-526 53. Sowa, J.F.: Conceptual Structures, Information Processing in Mind and Machine. Addison-Wesley, Reading (1984) 54. Stell, J.G., Worboys, M.F.: Stratiﬁed map spaces: a formal basis for multiresolution spatial databases. Proceedings of the 8th International Symposium on Spatial Data Handling (1998) 180-189 55. van Mechelen, I., Hampton, J., Michalski, R.S., Theuns, P. (Eds.): Categories and Concepts: Theoretical Views and Inductive Data Analysis. Academic Press, New York (1993) 56. van Rijsbergen, C.J.: Information Retrieval. Butterworths, London (1979) 57. Wille, R.: Concept lattices and conceptual knowledge systems. Computers Mathematics with Applications 23 (1992) 493-515


58. Yager, R.R., Filev,D.: Operations for granular computing: mixing words with numbers. Proceedings of 1998 IEEE International Conference on Fuzzy Systems (1998) 123-128 59. Yao, Y.Y.: Two views of the theory of rough sets in ﬁnite universes. International Journal of Approximation Reasoning 15 (1996) 291-317 60. Yao, Y.Y.: Granular computing: basic issues and possible solutions. Proceedings of the 5th Joint Conference on Information Sciences (2000) 186-189 61. Yao, Y.Y.: Information granulation and rough set approximation. International Journal of Intelligent Systems 16 (2001) 87-104 62. Yao, Y.Y.: Modeling data mining with granular computing. Proceedings of the 25th Annual International Computer Software and Applications Conference (COMPSAC 2001) (2001) 638-643 63. Yao, Y.Y.: A step towards the foundations of data mining. In: Data Mining and Knowledge Discovery: Theory, Tools, and Technology V, Dasarathy, B.V. (Ed.). The International Society for Optical Engineering (2003) 254-263 64. Yao, Y.Y.: Probabilistic approaches to rough sets. Expert Systems 20 (2003) 287297 65. Yao, Y.Y., Liau, C.-J.: A generalized decision logic language for granular computing. Proceedings of FUZZ-IEEE’02 in the 2002 IEEE World Congress on Computational Intelligence, (2002) 1092-1097 66. Yao, Y.Y., Liau, C.-J., Zhong, N.: Granular computing based on rough sets, quotient space theory, and belief functions. Proceedings of ISMIS’03 (2003) 152-159 67. Yao, Y.Y., Noroozi, N.: A uniﬁed framework for set-based computations. Proceedings of the 3rd International Workshop on Rough Sets and Soft Computing. The Society for Computer Simulation (1995) 252-255 68. Yao, Y.Y., Wong, S.K.M.: Representation, propagation and combination of uncertain information. International Journal of General Systems 23 (1994) 59-83 69. Yao, Y.Y., Wong, S.K.M., Lin, T.Y.: A review of rough set models. In: Rough Sets and Data Mining: Analysis for Imprecise Data, Lin, T.Y., Cercone, N. (Eds.). Kluwer Academic Publishers, Boston (1997) 47-75 70. Yao, Y.Y., Zhong, N.: Granular computing using information tables. In: Data Mining, Rough Sets and Granular Computing, Lin, T.Y., Yao, Y.Y., Zadeh, L.A. (Eds.). Physica-Verlag, Heidelberg (2002) 102-124 71. Zadeh, L.A.: Fuzzy sets and information granularity. In: Advances in Fuzzy Set Theory and Applications, Gupta, N., Ragade, R., Yager, R. (Eds.). North-Holland, Amsterdam (1979) 3-18 72. Zadeh, L.A.: Fuzzy logic = computing with words. IEEE Transactions on Fuzzy Systems 4 (1996) 103-111 73. Zadeh, L.A.: Towards a theory of fuzzy information granulation and its centrality in human reasoning and fuzzy logic. Fuzzy Sets and Systems 19 (1997) 111-127 74. Zadeh, L.A.: Some reﬂections on soft computing, granular computing and their roles in the conception, design and utilization of information/intelligent systems. Soft Computing 2 (1998) 23-25 75. Zhang, B., Zhang, L.: Theory and Applications of Problem Solving, North-Holland, Amsterdam (1992) 76. Zhang, L., Zhang, B.: The quotient space theory of problem solving. Proceedings of International Conference on Rough Sets, Fuzzy Set, Data Mining and Granular Computing, Lecture Notes in Artriﬁcal Intelligence 2639 (2003) 11-15 77. Zhong, N., Skowron, A., Ohsuga S. (Eds.): New Directions in Rough Sets, Data Mining, and Granular-Soft Computing. Springer-Verlag, Berlin (1999)

Musical Phrase Representation and Recognition by Means of Neural Networks and Rough Sets

Andrzej Czyzewski, Marek Szczerba, and Bozena Kostek

Multimedia Systems Department, Gdansk University of Technology
Narutowicza 11/12, 80-952 Gdansk, Poland
{andcz,marek,bozenka}@sound.eti.pg.gda.pl
http://sound.eti.pg.gda.pl

Abstract. This paper discusses various musical phrase representations that can be used to classify musical phrases with considerable accuracy. Musical phrase analysis plays an important role in the music information retrieval domain. In the paper various representations of a musical phrase are described and analyzed. Experiments were also designed to facilitate pitch prediction within a musical phrase by means of entropy-coding of music. We used the concept of predictive data coding introduced by Shannon. Encoded music representations, stored in the database, are then used for automatic recognition of musical phrases by means of Neural Networks (NN) and rough sets (RS). A discussion of the obtained results is carried out and conclusions are included.

1 Introduction

The ability to analyze musical phrases in the context of automatic retrieval is still not a fully achieved objective [11]. It should, however, be stated that such an objective depends both on the quality of the musical phrase representation and on the inference engine utilized. A thorough analysis of musical phrase features would make it possible to search for a particular melody in musical databases, and might also reveal features that, for example, characterize music of the same epoch. Recognizing similarities between music of a particular epoch or a particular genre would enable searching the Internet according to music taxonomy. For the purpose of this study a collection of MIDI-encoded musical phrases was gathered, containing Bach’s fugues. A musical phrase can be stored in various formats, such as a mono- or polyphonic signal, MIDI code, or a musical score. Any of these formats may be accompanied by textual information. In the study presented, Bach’s fugues from the “Well Tempered Clavier” were played on a MIDI keyboard and then transferred to the computer hard disk through the MIDI card and the Cubase VST 3.5 program. The automatic recognition of musical phrase patterns required some preliminary stages, such as MIDI data conversion, parametrization of musical phrases, and discretization of parameter values in the case of rule-based decision systems [4], [23]. These tasks resulted in the creation of a musical phrase database containing feature vectors.


The experiments performed consisted in preparing various representations on the basis of the gathered musical phrases and then analyzing them in the context of automatic music information retrieval. Both Neural Networks (NNs) and the Rough Set (RS) method were used to this end. NNs were also used for feature quality evaluation; this issue will be explained later on. The decision systems were used both as a classifier and as a comparator.

2 Musical Phrase Description

In the experiments it was assumed that the musical phrases considered are single-voice only. This means that at any moment t one musical event is occurring in the phrase. A musical event is defined as a single sound of defined pitch, amplitude and duration [26]. A musical pause – absence of sound – is a musical event as well. For practical reasons a musical pause was assumed to be a musical event of pitch equal to the pitch of the preceding sound, but of amplitude equal to zero. A single-voice musical phrase fr can be expressed as a sequence of musical events: fr = {e1, e2, ..., en}

(1)

A musical event ei can be described as a pair of values denoting sound pitch hi (in the case of a pause, the pitch of the previous sound) and sound duration ti: ei = {hi, ti}

(2)
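For illustration, a minimal sketch of how a musical event and a single-voice phrase as defined in Eqs. (1)–(2) might be represented in code; the class and field names are our own and are not part of the original system.

from dataclasses import dataclass
from typing import List

@dataclass
class MusicalEvent:
    """A single musical event e_i = {h_i, t_i}: MIDI pitch and duration."""
    pitch: int        # h_i, MIDI note number (a pause repeats the previous pitch)
    duration: float   # t_i, e.g. in quarter-note units
    amplitude: float = 1.0   # 0.0 encodes a pause (absence of sound)

# A single-voice phrase fr = {e_1, e_2, ..., e_n} is an ordered list of events.
Phrase = List[MusicalEvent]

# Example: the opening of a hypothetical phrase (C4, D4, pause, E4).
phrase: Phrase = [
    MusicalEvent(60, 0.5),
    MusicalEvent(62, 0.5),
    MusicalEvent(62, 0.25, amplitude=0.0),  # pause keeps the preceding pitch
    MusicalEvent(64, 1.0),
]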

One can therefore express a musical phrase as a sequence of pitches being a function of time, fr(t). A sample illustration of the function fr(t) is presented in Fig. 1. The sound pitch is defined according to the MIDI standard, i.e. as the difference from the sound C0 measured in semitones [2].

Fig. 1. Sequence of pitches as a function of time. Sound pitch is expressed according to the MIDI standard.


One of the basic tools of composers and performers is the transformation of musical phrases according to rules specific to music perception and to aesthetic and cultural conventions and constraints [1]. Generally, listeners perceive a modified musical phrase as identical to the unmodified original phrase. Modifications of musical phrases involve sound pitch shifting (transposition), time changes (e.g. augmentation), changes of ornamentation, shifting the pitches of individual sounds, etc. [27]. A formal definition of such modifications may be presented using the example of a transposed musical phrase, expressed as follows:

fr_{mod}(t) = fr_{ref}(t) + c   (3)

where fr_{ref}(t) denotes the unmodified, original musical phrase, fr_{mod}(t) the modified musical phrase, and c is a component expressing, in semitones, the shift of the individual sounds of the phrase (for |c| = 12n there is an octave shift). A musical phrase with changed tempo can be expressed as follows:

fr_{mod}(t) = fr_{ref}(kt)   (4)

where k is the tempo change factor. The phrase tempo is slowed down for values of k < 1 and increased for values of k > 1. A transposed musical phrase with changed tempo can be expressed as follows:

fr_{mod}(t) = fr_{ref}(kt) + c   (5)

Tempo variations with respect to the score result mostly from inexactness in performance, which is often related to the performer's expressiveness [7], [8], [22]. Tempo changes can be expressed as a function ∆k(t). A musical phrase with varying tempo can be expressed as follows:

fr_{mod}(t) = fr_{ref}[t \cdot \Delta k(t)]   (6)

Modifications of the melodic content of a musical phrase are also often used. One can distinguish such modifications as: ornamentation, transposition, inversion, retrograde, scale change (major – minor), change of the pitch of individual sounds (e.g. harmonic adjustment), etc. In general, they can be described by a melodic modification function ψ(t). Therefore, a musical phrase with melodic content modifications can be expressed as follows:

fr_{mod}(t) = fr_{ref}(t) + \psi(t)   (7)

In consequence, a musical phrase modified by transposition, tempo change, tempo fluctuation and melodic content modification can be expressed as follows:

fr_{mod}(t) = fr_{ref}[kt + t \cdot \Delta k(t)] + \psi[kt + t \cdot \Delta k(t)] + c   (8)
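As a simple illustration of Eqs. (3)–(5), the sketch below applies transposition and a uniform tempo change to a phrase stored as (pitch, duration) pairs; the function names are ours, and the time-warp of Eqs. (6)–(8) is omitted.

def transpose(phrase, c):
    """Eq. (3): shift every pitch by c semitones (|c| = 12n gives an octave shift)."""
    return [(pitch + c, dur) for pitch, dur in phrase]

def change_tempo(phrase, k):
    """Eq. (4): uniform tempo change; k > 1 speeds the phrase up, so durations shrink."""
    return [(pitch, dur / k) for pitch, dur in phrase]

def modify(phrase, c=0, k=1.0):
    """Eq. (5): transposition combined with a uniform tempo change."""
    return transpose(change_tempo(phrase, k), c)

original = [(60, 0.5), (62, 0.5), (64, 1.0)]
modified = modify(original, c=7, k=2.0)   # up a fifth, twice as fast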

The formalism given above allows the research problem of automatic classification of musical phrases to be defined. Let fr_{mod} be a modified musical phrase being classified and let FR be a set of unmodified reference phrases:

FR = \{fr_1^{ref}, fr_2^{ref}, \ldots, fr_N^{ref}\}   (9)


The task of recognizing the musical phrase fr_{mod} can therefore be described as finding in the set FR such a phrase fr_n^{ref} for which the musical phrase modification formula is fulfilled. If the applied modifications are limited to transposition and uniform tempo change, the modification can be described using two constants: the transposition constant c and the tempo change constant k. In this case the task of classifying a musical phrase is limited to determining such values of the constants c and k that the formula is fulfilled. If the function ∆k(t) ≠ 0, then the classification algorithm should minimize the influence of the function ∆k(t) on the expression. Small values of the function ∆k(t) indicate slight changes resulting from articulation inexactness and moderate performer's expressiveness [6]. Such changes can easily be corrected by using time quantization. Larger values of the function ∆k(t) indicate major temporal fluctuations resulting chiefly from the performer's expressiveness. Such changes can be corrected using advanced methods of time quantization [8]. The function ψ(t) describes a wide range of musical phrase modifications characteristic of the composer as well as of the performer's style and technique. The values of the function ψ(t), which describe the character of these modifications only qualitatively, are difficult or impossible to determine in a hard-defined manner. This last problem is the main issue associated with the task of automatic classification of musical phrases.
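A simple, illustrative sketch (not the authors' algorithm) of how the constants c and k of Eq. (5) could be estimated when matching a modified phrase against a reference phrase with the same number of events: k is taken from the ratio of total durations and c from the median pitch difference; all names are hypothetical.

import statistics

def estimate_c_k(modified, reference):
    """Estimate transposition c and tempo factor k for event-aligned phrases
    given as lists of (pitch, duration) pairs of equal length."""
    k = sum(d for _, d in reference) / sum(d for _, d in modified)
    c = statistics.median(pm - pr for (pm, _), (pr, _) in zip(modified, reference))
    return round(c), k

def matches(modified, reference, tol=0):
    """Check whether 'modified' equals 'reference' under Eq. (5) with the estimated c, k."""
    if len(modified) != len(reference):
        return False
    c, k = estimate_c_k(modified, reference)
    return all(abs((pm - c) - pr) <= tol
               for (pm, _), (pr, _) in zip(modified, reference))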

3 Parametrization of Musical Phrases

A fundamental quality of decision systems is the ability to classify data that are not precisely defined or cannot be modeled mathematically. This quality allows intelligent decision algorithms to be used for automatically classifying musical phrases in conditions where the character of the ψ(t) and ∆k(t) functions is rather qualitative. Parametrization can be considered a part of feature selection, the latter process meaning finding a subset of features, from the original set of pattern features, that is optimal according to a defined criterion [25]. The data to be classified can be represented by a vector P of the form:

P = [p_1, p_2, \ldots, p_N]   (10)

The constant number N of elements of vector P requires the musical phrase fr to be represented by N parameters, independent of the number of notes in the phrase fr. Converting a musical phrase fr of the form {e_1, e_2, …, e_n} into an N-element vector of parameters allows only the distinctive features of the musical phrase fr to be represented. As shown above, transposition of a musical phrase and a uniform proportional tempo change can be represented as alterations of the values c and k. It would therefore be advantageous to design a method of musical phrase parametrization for which:

P(fr_{mod}) = P(fr_{ref})   (11)

where:

fr_{mod}(t) = fr_{ref}(kt) + c   (12)


Creating a numerical representation of musical structures to be used in automatic classification and prediction systems requires, among others, defining the following characteristics of musical phrases: the sequence length, the method of representing sound pitch, the methods of representing time-scale and frequency properties, and the methods of representing other musical properties by feature vectors. In addition, having defined various subsets of features, feature selection should be performed. Typically, this process consists in finding an optimal feature subset from the whole original feature set, one that guarantees the accomplishment of the processing goal while minimizing a defined feature selection criterion [25]. Feature relevance may be evaluated on the basis of open-loop or closed-loop methods. In the first approach separability criteria are used; to this end the Fisher criterion is often employed. The closed-loop methods select features using the performance of a predictor, i.e. the feedback from the predictor quality is used in the feature selection process [25]. On the other hand, here we deal with a situation in which the feature set contains several disjoint feature subsets. The feature selection defined for the purpose of this study consists in first eliminating the less effective methods of parametrization according to the processing goal, and then reducing the number of parameters to the optimal one. Both open- and closed-loop methods were used in the study performed.

Individual musical structures may show significant differences in the number of elements, i.e. sounds or other musical units. In an extreme case one can imagine that the classifier is fed with a whole melody or a whole musical piece. It is therefore necessary to limit the number of elements in the numerical representation vector. Sound pitch can be expressed as an absolute or a relative value. An absolute representation is characterized by an exact definition of the reference sound (e.g. the C1 sound). In the case of an absolute representation the number of possible values defining a given note in a sequence is equal to the number of possible sound pitch values restricted to the musical scale. A disadvantage of this representation is that transposition shifts the values of all sequence elements by a constant. In the case of a relative representation the reference point is updated all the time. The reference point may be, e.g., the previous sound, the sound at the previously accented part of the bar, or the sound at the beginning of the phrase. The number of possible values defining a given note in a sequence is equal to the number of possible intervals. An advantage of the relative representation is that musical structures are not changed by transposition, as well as the ability to limit the range of available intervals without limiting the available musical scales. Its disadvantage is sensitivity to small modifications of the structure that result in shifting the reference sound.

3.1 Parametric Representation

The research performed so far has resulted in the design of a number of parametric representations of musical phrases. Some of these methods were described in detail in the authors' earlier publications [13], [14], [15], [16], [17], [24]; therefore only their brief characteristics are given below. At an earlier stage of this study, both the Fisher criterion and the correlation coefficient were used for the evaluation of parameter quality [17].


Statistical Parametrization. The designed statistical parametrization approach is aimed at describing structural features of a musical phrase based on music theory [27]. The statistical parametrization introduced by the authors involves representing a musical phrase with five parameters [13], [15]:

– P1 – the difference between the weighted average sound pitch and the pitch of the lowest sound of the phrase, where T is the phrase duration, h_n denotes the pitch of the n-th sound, t_n is the duration of the n-th sound, and N is the number of sounds in the phrase:

P_1 = \frac{1}{T} \sum_{n=1}^{N} h_n t_n - \min_n (h_n)   (13)

– P2 – ambitus – the difference between the pitches of the highest and the lowest sounds of the phrase. Typically, the term ambitus denotes the range of pitches for a given voice in a part of music. It may also denote the pitch range that a musical instrument is capable of playing; however, in our experiments the first meaning is closer to the definition given below:

P_2 = \max_n (h_n) - \min_n (h_n)   (14)

– P3 – the average absolute difference of the pitches of subsequent sounds:

P_3 = \frac{1}{N-1} \sum_{n=1}^{N-1} \left| h_n - h_{n+1} \right|   (15)

– P4 – the duration of the longest sound of the phrase:

P_4 = \max_n (t_n)   (16)

– P5 – the average sound duration:

P_5 = \frac{1}{N} \sum_{n=1}^{N} t_n   (17)
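The five statistical parameters can be computed directly from the (pitch, duration) pairs; the following is a minimal sketch implementing Eqs. (13)–(17), with variable names of our own choosing.

def statistical_parameters(events):
    """events: list of (h_n, t_n) pairs; returns (P1, ..., P5) as in Eqs. (13)-(17)."""
    pitches = [h for h, _ in events]
    durations = [t for _, t in events]
    N = len(events)
    T = sum(durations)                                           # phrase duration
    p1 = sum(h * t for h, t in events) / T - min(pitches)        # Eq. (13)
    p2 = max(pitches) - min(pitches)                             # Eq. (14): ambitus
    p3 = sum(abs(pitches[n] - pitches[n + 1])
             for n in range(N - 1)) / (N - 1)                    # Eq. (15)
    p4 = max(durations)                                          # Eq. (16)
    p5 = T / N                                                   # Eq. (17)
    return p1, p2, p3, p4, p5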

Statistical parameters representing a musical phrase can be divided into two groups: parameters describing the melodic features of a musical phrase (P1, P2, P3) and parameters describing its rhythmical features (P4, P5).

Trigonometric Parametrization. Trigonometric parametrization involves representing the shape of a musical phrase with a vector of parameters P = [p_1, p_2, …, p_M] in the form of a series of cosines [15]:

fr^{*}(t) = p_1 \cos\left[\frac{\pi}{T}\left(t - \frac{1}{2}\right)\right] + p_2 \cos\left[2\frac{\pi}{T}\left(t - \frac{1}{2}\right)\right] + \ldots + p_M \cos\left[M\frac{\pi}{T}\left(t - \frac{1}{2}\right)\right]   (18)

where M is the number of trigonometric parameters representing the musical phrase.
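A small sketch of how the cosine-series representation of Eq. (18) can be evaluated for a given parameter vector; this is our own illustration, not the authors' implementation, and the parameter values are hypothetical.

import math

def cosine_series(params, t, T):
    """Evaluate fr*(t) of Eq. (18) for parameter vector 'params' and phrase length T."""
    return sum(p * math.cos(m * (math.pi / T) * (t - 0.5))
               for m, p in enumerate(params, start=1))

# Example: reconstruct an approximate pitch contour at integer sample positions.
params = [3.2, -1.1, 0.4]        # hypothetical p_1 .. p_3
contour = [cosine_series(params, t, T=16) for t in range(1, 17)]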


For the discrete time domain it is assumed that the sampling period is a common denominator of the durations of all rhythmic units of a musical phrase. The elements p_m of the trigonometric parameter vector P are calculated according to the following formula:

p_m = \sum_{k=1}^{l} h_k \cos\left[m\left(k - \frac{1}{2}\right)\frac{\pi}{l}\right]   (19)

where p_m is the m-th element of the feature vector, l = T / s_{nt} denotes the phrase length expressed as a multiple of the sampling period, s_{nt} is the shortest note duration, and h_k denotes the pitch of the sound in the k-th sample. According to the above assumption each rhythmic value, being a multiple of the sampling period, is transformed into a series of rhythmic values equal to the sampling period. This leads to a loss of information on the rhythmic structure of the phrase. Absolute changes of the sound pitch values and proportional time changes do not affect the values of the trigonometric parameters. The trigonometric parameters allow the shape of the musical phrase they represent to be reconstructed. The phrase shape is reconstructed using the vector K = [k_1, k_2, ..., k_N]. The elements of vector K are calculated according to the following formula:

k_n = \frac{1}{N} \sum_{m=1}^{M} 2 p_m \cos\left(\frac{m n \pi}{N}\right)   (20)

where M is the number of trigonometric parameters representing the musical phrase and p_m denotes the m-th element of the parameter vector. The values of the elements k_n express, in semitones, the difference between the current and the average sound pitch in the musical phrase being reconstructed.

Polynomial Parametrization. A single-voice musical phrase fr can be represented by a function fr(t) whose time domain is either discrete or continuous. In the discrete time domain the musical phrase fr can be represented as a set of points in the two-dimensional space of time and sound pitch, i.e. by points denoting the sound pitch at time t and by points denoting note onsets. If the tempo varies in time (the function ∆k(t) ≠ 0), or if the musical phrase includes additional sounds of duration inconsistent with the general rhythmic pattern (e.g. an ornament or augmentation), the sampling period can be determined by minimizing the quantization error defined by the formula:

\varepsilon(b) = \frac{1}{N-1} \sum_{i=1}^{N-1} \left| \frac{t_i - t_{i-1}}{b} - \mathrm{Round}\left(\frac{t_i - t_{i-1}}{b}\right) \right|   (21)

where b denotes the sampling period and Round is the rounding function. On the basis of the representation of a musical phrase in the discrete time domain, one can approximate the musical phrase by a polynomial of order M:

fr^{*}(t) = a_0 + a_1 t + a_2 t^2 + \ldots + a_M t^M   (22)

The coefficients a_0, a_1, …, a_M are found numerically by means of mean-square approximation, i.e. by minimizing the error ε of the form:

\varepsilon^2 = \int_0^T \left| fr^{*}(t) - fr(t) \right|^2 dt   – for the continuous case

\varepsilon^2 = \sum_{i=0}^{N} \left| fr_i^{*} - fr_i \right|^2   – for the discrete case   (23)

One can also express the error in semitones per sample, which facilitates the evaluation of the approximation, according to the formula:

\chi = \frac{1}{N} \sum_{i=1}^{N} \left| fr_i^{*} - fr_i \right|   (24)
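A short sketch of the polynomial approximation of Eqs. (22)–(24) for the discrete case, using NumPy's least-squares polynomial fit; the sampling scheme and variable names are our own assumptions.

import numpy as np

def polynomial_parameters(pitches, M):
    """Fit fr*(t) = a_0 + a_1 t + ... + a_M t^M (Eq. (22)) to sampled pitches
    by least squares (Eq. (23), discrete case) and report the mean absolute
    error chi in semitones per sample (Eq. (24))."""
    t = np.arange(len(pitches), dtype=float)
    fr = np.asarray(pitches, dtype=float)
    coeffs = np.polyfit(t, fr, deg=M)          # returns highest power first
    fr_star = np.polyval(coeffs, t)
    chi = np.mean(np.abs(fr_star - fr))        # Eq. (24)
    return coeffs[::-1], chi                   # a_0 ... a_M, chi

a, chi = polynomial_parameters([60, 62, 64, 65, 64, 62, 60, 59], M=3)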

3.2 Binary Representation

The binary representation is based on dividing the time window W into n equal time sections T, where n is consistent with the metric division and T corresponds to the smallest, basic rhythmic unit in the music material being represented. Each time section T is assigned a bit of information b_T in the vector of rhythmic units. The bit b_T takes the value of 1 if a sound begins in the given time section T. If the time section T covers a sound started in a previous section, or a pause, the rhythmic information bit b_T assumes the value of 0. An advantage of the binary representation of rhythmic structures is the fixed length of the sequence representation vector. On the other hand, its disadvantages are the large vector length in comparison with other representation methods and the possibility of errors resulting from time quantization.

On the basis of the methods of representing the values of individual musical parameters one can distinguish three types of representations: local, distributed and global ones. In the case of a local representation every musical unit e_n is represented by a vector of n bits, where n is the number of all possible values of the musical unit e_n. The current value of the musical unit e_n is represented by ascribing the value of 1 to the bit of the representation vector corresponding to this value. The other bits of the representation vector take the value of 0 (unipolar activation) or –1 (bipolar activation). This type of representation was used, e.g., by Hörnel [10] and Todd [28].

The system of representing musical sounds proposed by Hörnel and co-workers is an example of a parametric representation [9]. In this system each subsequent note p is represented by the following parameters: the consonance of note p with respect to its harmony, the relation of note p towards its successor and predecessor in the case of dissonance against the harmonic content, the direction of p (up or down to the next pitch), the distance of note p to the base note (if p is consonant), the octave, and tenuto – whether p is an extension of the previous note of the same pitch.
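Returning to the rhythm bit vector described at the beginning of this subsection, the following is a minimal sketch of its construction: onset times are quantized to the basic rhythmic unit T and each section receives a 1 only if a sound starts there; the helper names are ours.

def rhythm_bits(onsets, n_sections, section_len):
    """Return the n-bit rhythm vector b_T: 1 where a sound begins, 0 otherwise."""
    bits = [0] * n_sections
    for onset in onsets:
        index = int(round(onset / section_len))    # quantize onset to a section
        if 0 <= index < n_sections:
            bits[index] = 1
    return bits

# Example: onsets at 0, 1.0 and 1.5 beats in a window of 8 eighth-note sections.
print(rhythm_bits([0.0, 1.0, 1.5], n_sections=8, section_len=0.5))
# -> [1, 0, 1, 1, 0, 0, 0, 0]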


Table 1. Distributed representation of sound pitches according to Mozer.

Sound pitch   Mozer's distributed representation
C             –1 –1 –1 –1 –1 –1
C#            –1 –1 –1 –1 –1 +1
D             –1 –1 –1 –1 +1 +1
D#            –1 –1 –1 +1 +1 +1
E             –1 –1 +1 +1 +1 +1
F             –1 +1 +1 +1 +1 +1
F#            +1 +1 +1 +1 +1 +1
G             +1 +1 +1 +1 +1 –1
G#            +1 +1 +1 +1 –1 –1
A             +1 +1 +1 –1 –1 –1
A#            +1 +1 –1 –1 –1 –1
B             +1 –1 –1 –1 –1 –1

The presented method of coding does not employ a direct representation of sound pitch; the representation is distributed with respect to pitch, and the sound pitch is coded as a function of harmony. Such a distributed representation was used, among others, by Mozer [19]. In the case of a distributed representation the value of a musical unit e_n is encoded with m bits according to the formula:

m = \log_2 N   (24)

where N is the number of possible values of the musical unit e_n. An example of representing the sounds of the chromatic scale using a distributed representation is presented in Table 1. In the case of a global representation the value of a musical unit is represented by a real value. The above methods of representing the values of individual musical units imply their suitability for processing certain types of music material, for certain tasks, and for certain analysis tools, classifiers and predictors.

3.3 Prediction of Musical Events

Our experiments were aimed at designing a method of predicting and entropy-coding music. We used the concept of predictive data coding presented by Shannon and later employed for investigating the entropy coding of English text by Moradi, Grzymala-Busse and Roberts [18]. The engineered method was used as a musical event predictor in order to enhance a system of pitch detection of musical sounds. The block scheme of the prediction coding system for music is presented in Fig. 2. The idea of entropy coding involves using two identical predictors in the data coding and decoding modules. The process of coding consists in determining the number of prediction attempts k required for the correct prediction of event e_{n+1}. Prediction is based on the parameters of the musical events collected in a data buffer. The number of prediction attempts k is sent to the decoder. The decoder module determines the value of event e_{n+1} by repeating k prediction attempts.

Musical Phrase Representation and Recognition by Means of Neural Networks

coder coder input data

decoder decoder

en-z,en-z+1,...,en buffer

en-z,en-z+1,...,en

predictor

eˆn+1

predictor

NO

eˆn+1 = en+1

j=k

k

buffer

eˆn+1

k = k+1

en+1

YES

263

i = i+1 NO

YES

k

en = eˆn

output data

Fig. 2. Block diagram of prediction coder and decoder.
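A hedged sketch of the coding loop of Fig. 2: the coder counts how many prediction attempts k the shared predictor needs before it guesses the next event correctly, and only k is transmitted; the predictor interface shown here is our own assumption, not the authors' implementation.

def encode(events, predictor, buffer_size=10):
    """For each event, emit the number of attempts k the predictor needs to guess it.
    'predictor(history, attempt)' must return its attempt-th best guess (attempt = 1, 2, ...)
    and is assumed to eventually enumerate every possible event."""
    codes = []
    for i, event in enumerate(events):
        history = events[max(0, i - buffer_size):i]
        k = 1
        while predictor(history, k) != event:
            k += 1
        codes.append(k)
    return codes

def decode(codes, predictor, buffer_size=10):
    """Rebuild the event sequence from the attempt counts using the identical predictor."""
    events = []
    for k in codes:
        history = events[max(0, len(events) - buffer_size):]
        events.append(predictor(history, k))
    return events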

Subsequent values of the samples – musical events – are then collected in a buffer. Two types of data buffers were implemented: a fixed-size buffer and a fading memory model. In the first case the buffer stores data on z musical events, and each event is represented by a separate vector. This means that z vectors representing z individual musical events are supplied to the predictor input. In the experiments carried out the value of z was set to 5, 10 and 20 samples (musical events). The fading memory model, on the other hand, involves storing the preceding values of the vector elements and summing them with the current ones according to the formula:

b_n = \sum_{k=1}^{n} e_{n-k} \, r^k   (25)

where r is the fading factor from the range (0, 1). When the fading memory model is used, a single vector of parameters of musical events is supplied to the predictor input. This means a z-fold reduction of the number of input parameters compared with a buffer of size z. For the purpose of investigating the music predictor, a set of musical data consisting of fugues from the Well-Tempered Clavier by J. S. Bach was used as the musical material. In the experiments performed a neural network-based predictor was employed. A series of experiments aimed at optimizing the predictor structure, the data buffer parameters and the prediction algorithm parameters was performed. In the training process we utilized all voices of the individual fugues except the uppermost ones. The highest voices were used for testing the predictor. Three methods of parametric representation of sound pitch were utilized: the binary method, a modified Hörnel representation and a modified Mozer representation. In all cases a relative representation was used, i.e. the differences between the pitches of subsequent sounds were coded. In the case of the binary representation individual musical intervals (differences between the pitches of subsequent sounds) are represented as 27-bit vectors. The utilized representation of sound pitch is presented in Table 2.


Table 2. Illustration of the binary representation of a musical interval (example – 2 semitones up).

Interval [in semitones]:  –octave –12 –11 –10 –9 –8 –7 –6 –5 –4 –3 –2 –1  0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +10 +11 +12 +octave
Bit value:                   0      0   0   0  0  0  0  0  0  0  0  0  0  0  0  1  0  0  0  0  0  0  0   0   0   0    0
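A small sketch of building the 27-bit relative interval vector of Table 2 (positions: –octave, –12 … +12, +octave); the clamping of intervals larger than an octave to the octave flags is our assumption.

def interval_vector(semitones):
    """Return the 27-bit vector for an interval given in semitones (cf. Table 2)."""
    bits = [0] * 27                # [-octave, -12 ... +12, +octave]
    if semitones < -12:
        bits[0] = 1                # more than an octave down
    elif semitones > 12:
        bits[26] = 1               # more than an octave up
    else:
        bits[semitones + 13] = 1   # -12 maps to index 1, 0 to index 13, +12 to index 25
    return bits

assert interval_vector(2).index(1) == 15   # example from Table 2: 2 semitones up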

The presented representation of intervals designed by Hörnel is a diatonic representation (corresponding to the seven-step musical scale). For the needs of our research we modified Hörnel's representation to allow for a chromatic (twelve-step) representation. Individual intervals are represented by means of 11 parameters. The method of representing sound pitch designed by Mozer characterizes pitch as an absolute value. Within the scope of our research we modified Mozer's representation to allow a relative representation of the interval size. The representation was complemented by adding direction and octave bits. An individual musical event is therefore coded by means of 8 parameters. A relative binary representation was designed for coding rhythmic values. Rhythmic values are coded by a parameter vector:

p^r = \{p_1^r, p_2^r, p_3^r, p_4^r, p_5^r\}   (26)

where the individual parameters p_1^r, …, p_5^r are defined piecewise as functions of the ratio of the durations of consecutive rhythmic events, e_{n-1}^r / e_n^r, with separate cases distinguished for, e.g., e_{n-1}^r / e_n^r ≥ 2 (Eqs. (27) and (28)).
