<http://www.cepis.org>
CEPIS, Council of European Professional Informatics
Societies, is a non-profit organisation seeking to improve
and promote high standards among informatics
professionals in recognition of the impact that informatics
has on employment, business and society.
CEPIS unites 36 professional informatics societies across 32 European countries, representing more than 200,000 ICT professionals.
CEPIS promotes
<http://www.eucip.com>
<http://www.ecdl.com>
<http://www.upgrade-cepis.org>
UPGRADE is the European Journal for the
Informatics Professional, published bimonthly
at <http://www.upgrade-cepis.org/>
UPGRADE is the anchor point for UPENET (UPGRADE European NETwork), the network of CEPIS member societies' publications, which currently includes the following:
• Mondo Digitale, digital journal from the Italian CEPIS society AICA
• Novática, journal from the Spanish CEPIS society ATI
• OCG Journal, journal from the Austrian CEPIS society OCG
• Pliroforiki, journal from the Cyprus CEPIS society CCS
• Pro Dialog, journal from the Polish CEPIS society PTI-PIPS
Publisher
UPGRADE is published on behalf of CEPIS (Council of European Professional
Informatics Societies, <http://www.cepis.org/>) by Novática
<http://www.ati.es/novatica/>, journal of the Spanish CEPIS society ATI
(Asociación de Técnicos de Informática, <http://www.ati.es/>)
UPGRADE monographs are also published in Spanish (full version printed; summary,
abstracts and some articles online) by Novática, and in Italian (summary, abstracts and
some articles online) by the Italian CEPIS society ALSI (Associazione nazionale
Laureati in Scienze dell’informazione e Informatica, <http://www.alsi.it>) and the
Italian IT portal Tecnoteca <http://www.tecnoteca.it/>
UPGRADE was created in October 2000 by CEPIS and was first published by
Novática and INFORMATIK/INFORMATIQUE, bimonthly journal of SVI/FSI (Swiss
Federation of Professional Informatics Societies, <http://www.svifsi.ch/>)
Editorial Team
Chief Editor: Rafael Fernández Calvo, Spain, <[email protected]>
Associate Editors:
François Louis Nicolet, Switzerland, <[email protected]>
Roberto Carniel, Italy, <[email protected]>
Zakaria Maamar, United Arab Emirates, <[email protected]>
Soraya Kouadri Mostéfaoui, Switzerland,
<[email protected]>
Editorial Board
Prof. Wolffried Stucky, CEPIS Past President
Prof. Nello Scarabottolo, CEPIS Vice President
Fernando Piera Gómez and
Rafael Fernández Calvo, ATI (Spain)
François Louis Nicolet, SI (Switzerland)
Roberto Carniel, ALSI – Tecnoteca (Italy)
UPENET Advisory Board
Franco Filippazzi (Mondo Digitale, Italy)
Rafael Fernández Calvo (Novática, Spain)
Veith Risak (OCG Journal, Austria)
Panicos Masouras (Pliroforiki, Cyprus)
Andrzej Marciniak (Pro Dialog, Poland)
English Editors: Mike Andersson, Richard Butchart, David Cash, Arthur Cook,
Tracey Darch, Laura Davies, Nick Dunn, Rodney Fennemore, Hilary Green,
Roger Harris, Michael Hird, Jim Holder, Alasdair MacLeod, Pat Moody, Adam
David Moss, Phil Parkin, Brian Robson
Cover page designed by Antonio Crespo Foix, © ATI 2005
Layout Design: François Louis Nicolet
Composition: Jorge Llácer-Gil de Ramales
Editorial correspondence: Rafael Fernández Calvo <[email protected]>
Advertising correspondence: <[email protected]>
UPGRADE Newslist available at
<http://www.upgrade-cepis.org/pages/editinfo.html#newslist>
Copyright
© Novática 2005 (for the monograph and the cover page)
© CEPIS 2005 (for the sections MOSAIC and UPENET)
All rights reserved. Abstracting is permitted with credit to the source. For copying,
reprint, or republication permission, contact the Editorial Team
The opinions expressed by the authors are their exclusive responsibility
ISSN 1684-5285
Monograph of next issue (August 2005):
"Normalisation & Standardisation
in IT Security"
(The full schedule of UPGRADE
is available at our website)
Vol. VI, issue No. 3, June 2005
Monograph: Libre Software as A Field of Study
(published jointly with Novática*, in cooperation with the
European project CALIBRE)
Guest Editors: Jesús M. González-Barahona and Stefan Koch
2 Presentation
Libre Software under The Microscope — Jesús M. González-Barahona
and Stefan Koch
5 CALIBRE at The Crest of European Open Source Software Wave —
Andrea Deverell and Par Agerfalk
6 Libre Software Movement: The Next Evolution of The IT Production
Organization? — Nicolas Jullien
13 Measuring Libre Software Using Debian 3.1 (Sarge) as A Case Study:
Preliminary Results — Juan-José Amor-Iglesias, Jesús M. González-Barahona, Gregorio Robles-Martínez, and Israel Herráiz-Tabernero
17 An Institutional Analysis Approach to Studying Libre Software
‘Commons’ — Charles M. Schweik
28 About Closed-door Free/Libre/Open Source (FLOSS) Projects: Lessons
from the Mozilla Firefox Developer Recruitment Approach — Sandeep
Krishnamurthy
33 Agility and Libre Software Development — Alberto Sillitti and Giancarlo
Succi
38 The Challenges of Using Open Source Software as A Reuse Strategy —
Christian Neumann and Christoph Breidert
MOSAIC
43 Computational Linguistics
Multilingual Approaches to Text Categorisation — Juan-José García-Adeva, Rafael A. Calvo, and Diego López de Ipiña
52 Software Engineering
A Two Parameter Software Reliability Growth Model with An Implicit Adjustment Factor for Better Software Failure Prediction — S.
Venkateswaran, K. Ekambavanan, and P. Vivekanandan
59 News & Events: Proposal of Directive on Software Patents Rejected
by The European Parliament
UPENET (UPGRADE European NETwork)
61 From Pliroforiki (CCS, Cyprus)
Informatics Law
Security, Surveillance and Monitoring of Electronic Communications
at The Workplace — Olga Georgiades-Van der Pol
66 From Mondo Digitale (AICA, Italy)
Evolutionary Computation
Evolutionary Algorithms: Concepts and Applications — Andrea G. B.
Tettamanzi
* This monograph will be also published in Spanish (full version printed; summary, abstracts, and some
articles online) by Novática, journal of the Spanish CEPIS society ATI (Asociación de Técnicos de
Informática) at <http://www.ati.es/novatica/>, and in Italian (online edition only, containing summary,
abstracts, and some articles) by the Italian CEPIS society ALSI (Associazione nazionale Laureati in Scienze
dell’informazione e Informatica) and the Italian IT portal Tecnoteca at <http://www.tecnoteca.it>.
Libre Software as A Field of Study
Presentation
Libre Software under The Microscope
Jesús M. González-Barahona and Stefan Koch
1 Foreword
Libre (free, open source) software has evolved during
the last decade from an obscure, marginal phenomenon into
a relatively well-known, widely available, extensively used
set of applications. Libre software solutions are even market leaders in some segments and are experiencing huge
growth in others. Products such as OpenOffice.org, Linux,
Apache, Firefox and many others are part of the daily experience of many users. Companies and public administrations alike are paying more and more attention to the benefits that libre software can provide when used extensively.
However, despite this increasing popularity, libre software is still poorly understood. Perhaps because of this, in
recent years the research community has started to focus
some attention on libre software itself: its development
models, the business models that surround it, the motivations
of the developers, etc. In this context, we (invited by UPGRADE and Novática, two journals that have shown for
years a serious interest in this field1) felt that the time was
ripe to put together this monograph on "Libre Software as A
Field of Study". Consequently, we issued a call for contributions, which led to a process in which each proposal was
reviewed by at least two experts in the field.
2 Definition
The term "Libre Software" is used in this introduction,
and in the title of this special issue, to refer to both "free
software" (according to the Free Software Foundation, FSF,
definition) and "open source software" (as defined by the
Open Source Initiative, OSI). "Libre" is a term well understood in romance languages (i.e. from Latin origin), such as
Spanish, French, Catalan, Portuguese and Italian, and understandable in many others. It avoids the ambiguity of
"free" in English, since "libre" means only "free as in free
speech", and the term is used in Europe in particular, although its first use can be traced to the United States2 .
Libre software is distributed under a license that complies with the "four freedoms", as stated by Richard Stallman
in "The Free Software Definition":
The freedom to run the program for any purpose (freedom 0).
The freedom to study how the program works and adapt
it to your needs (freedom 1). Access to the source code
is a precondition for this.
The freedom to redistribute copies so you can help your
neighbour (freedom 2).
The freedom to improve the program and release your
improvements to the public, so that the whole community benefits (freedom 3). Access to the source code is a
precondition for this.
Therefore, libre software is defined by what users can
do when they receive a copy of the software, and not by
how that software was developed, nor by whom, nor with
what intentions.
However, although there is nothing in the definition
about how the software has to be produced or marketed to
become "libre", the four freedoms enable some development and business models while making others difficult or
impossible. This is why we often talk about "libre software
development models" or "libre software business models".
Both terms are not to be understood as "models to be followed to qualify as libre software", but simply as models which are possible, perhaps common, in the world of libre software.
The Guest Editors
Jesús M. González-Barahona teaches and researches at the
Universidad Rey Juan Carlos, Madrid, Spain. He started working
in the promotion of libre software in the early 1990s. Since then
he has been involved in several activities in this area, such as
the organization of seminars and courses, and the participation
in working groups on libre software. He currently collaborates
in several libre software projects (including Debian), and
participates in or collaborates with associations related to libre
software. He writes in several media about topics related to libre software, and consults for companies on matters related to
their strategy regarding these issues. His research interests
include libre software engineering and, in particular, quantitative
measures of libre software development and distributed tools
for collaboration in libre software projects. He has been editor of the Free Software section of Novática since 1997 and has been guest
editor of several monographs of Novática and UPGRADE on
the subject. <[email protected]>
Stefan Koch is an Assistant Professor of Information Business at the Vienna University of Economics and Business Administration, Austria. He received an MBA in Management
Information Systems from Vienna University and Vienna
Technical University, and a Ph.D. from Vienna University of
Economics and Business Administration. His research interests
include libre software development, effort estimation for software projects, software process improvement, the evaluation of
benefits from information systems and ERP systems. He is also
the editor of the book “Free/Open Source Software
Development”, published by IGP in 2004. <[email protected]>.
3 Aspects of Study
Taking this definition as our framework, there has been
a great deal of research in recent years about development
and business models for libre software, about the
motivations of developers producing that software, and
about the implications (economic, legal, engineering) of this
new approach. In other words, libre software has become
in itself a subject for study; a new field in which different
research approaches are being tried in order to increase our
understanding of it. How is libre software actually being produced, what room for improvement is still left, which
produced, what room for improvement is still left, which
best practices can be identified, what implications does libre
software have for users and producers of software, how can
libre software development be improved, which ideas and
processes can be transferred to the production of proprietary software, what insights can be gained into open creation processes and user integration, etc. are just some of the
questions being addressed by this research. Some of them
are standard questions only now being put to the libre software world; others are quite specific and new.
4 Papers in This Monograph
This monograph contains seven papers that cover
several of the topics mentioned above and make use of a
great variety of empirical and theoretical approaches. The
first paper, by Andrea Deverell and Par Agerfalk, is about
the CALIBRE (Co-ordination Action for LIBRE Software)
project, funded by the European Commission to improve
European research in the field of libre software.
After this comes a paper entitled "Libre Software Movement: The Next Evolution of The IT Production Organization?", written by Nicolas Jullien, which discusses the dissemination of libre software. It argues from a historical perspective that libre software constitutes the next evolution
in industrial IT organization.
The next few papers deal with workings within libre
software projects. Juan-José Amor-Iglesias, Jesús M.
González-Barahona, Gregorio Robles-Martínez and Israel
Herráiz-Tabernero, in their paper "Measuring Libre Software Using Debian 3.1 (Sarge) as A Case Study: Preliminary Results", show empirical results from one of the most
popular and largest projects in existence, based on an analysis of source code. Charles M. Schweik tries to identify
design principles leading to a project’s success or failure;
in his paper "An Institutional Analysis Approach to Study-
ing Libre Software ‘Commons’" he presents a framework
for analysing the institutional design of commons settings
to be applied to libre software projects. Finally, Sandeep
Krishnamurthy, using Mozilla Firefox as an example, challenges the view that in libre software projects, anyone can
participate without hindrance. He coins the term "closed-door project" for projects with tight control, and explains
why such a strategy might be adopted in his paper "About
Closed-door Free/Libre/Open Source (FLOSS) Projects:
Lessons from the Mozilla Firefox Developer Recruitment
Approach".
The issue concludes with two papers which aim to put
libre software and its development in the context of ‘mainstream’ software engineering practices. Alberto Sillitti and
Giancarlo Succi in their paper "Agility and Libre Software
Development" evaluate the relationship and commonalities
between agile software development methodologies, in particular eXtreme Programming, and libre software development. Christian Neumann and Christoph Breidert present
a framework for comparing different reuse strategies in software development. In their paper titled "The Challenges of
Using Open Source Software as a Reuse Strategy" they give
special consideration to the required technical and economic evaluation.
Acknowledgments
As with any work, this monograph would not have been
possible without the help of several people. Naturally, the
most important work was carried out by the authors themselves, and the reviewers also devoted their time to help in
selecting and improving the submissions. In total, 16 authors contributed submissions, and 16 people provided valuable feedback and assistance by helping with the reviewing. Following the ideals of libre software development,
these reviewers are named here in order to give special recognition of their contribution: Olivier Berger, Cornelia
Boldyreff, Andrea Capiluppi, Jean Michel Dalle, Rishab
Ghosh, Stefan Haefliger, Michael Hahsler, George Kuk,
Björn Lundell, Martin Michlmayr, Hans Mitloehner,
Martin Schreier, Ioannis Stamelos, Ed Steinmueller,
Susanne Strahringer, and Thomas Wieland.
The cooperation of the team in the CALIBRE project
has also been very useful, both in providing ideas and in
collaborating with their effort. Finally, we would also like
to acknowledge the help, assistance and guidance of Rafael
Fernández Calvo, Chief Editor of UPGRADE and
Novática, during the entire process of preparing and assembling this special issue.
1 Novática, in addition to having a section dedicated to this field since 1997, has published three monographs on it – in 1997, 2001, and 2003 – jointly with UPGRADE in the last two cases (see <http://www.ati.es/novatica/indice.html> and <http://www.upgrade-cepis.org/pages/pastissues.html>).
2 For a brief study of the origins of the term "libre software", visit <http://sinetgy.org/jgb/articulos/libre-software-origin/libre-software-origin.html>.
Useful References on Libre Software as A Field of Study
In addition to the references included in the papers that
form part of this monograph, readers who wish to understand the
libre (free, open source) software phenomenon in greater detail
may be interested in consulting the following sources.
Books
· C. DiBona, S. Ockman, and M. Stone (eds.). Open Sources:
Voices from the Open Source Revolution. O'Reilly and Associates, Cambridge, Massachusetts, 1999. Available at <http://www.oreilly.com/catalog/opensources/book/toc.html>.
· J. Feller and B. Fitzgerald. Understanding Open Source Software Development. Addison-Wesley, London, 2002.
· J. Feller, B. Fitzgerald, S.A. Hissam, and K.R. Lakhani (eds.).
Perspectives on Free and Open Source Software. The MIT Press,
Boston, Massachusetts, 2005.
· J. García, A. Romeo, C. Prieto. La Pastilla Roja, 2003. ISBN:
84-932888-5-3. <http://www.lapastillaroja.net/>. (In Spanish.)
· S. Koch (ed.). Free/Open Source Software Development. Idea
Group Publishing, Hershey, PA, 2004.
· V. Matellán Olivera, J.M. González Barahona, P. de las Heras
Quirós, G. Robles Martínez (eds.). Sobre software libre. Compilación de ensayos sobre software libre. GSYC, Universidad Rey Juan Carlos, 2003. Available at <http://gsyc.escet.urjc.es/~grex/sobre-libre/>. (In Spanish.)
· E.S. Raymond. The Cathedral and the Bazaar: Musings on Linux
and Open Source by an Accidental Revolutionary. O’Reilly and
Associates, Sebastopol, California, 1999.
· R.M. Stallman. Free Software, Free Society: Selected Essays of
Richard M. Stallman. GNU Press, Boston, Massachusetts, 2002.
Also available at <http://www.gnu.org/philosophy/fsfs/rms-essays.pdf>.
Web Sites
· Opensource, a collection of publicly accessible papers about libre
software. <http://opensource.mit.edu>.
· Slashdot, the community site for the worldwide libre software
community. <http://slashdot.org>.
· Sourceforge, the largest hosting site for libre software projects.
<http://sourceforge.net>.
· Free Software Foundation. <http://fsf.org>.
· Open Source Initiative (OSI). <http://opensource.org>.
· BarraPunto, the community site for the Spanish libre software
community. <http://barrapunto.com>.
CALIBRE at The Crest of European Open Source Software Wave
Andrea Deverell and Par Agerfalk
This paper is copyrighted under the Creative Commons Attribution-NonCommercial-NoDerivs 2.5 license available at <http://creativecommons.org/licenses/by-nc-nd/2.5/>.
CALIBRE (Co-ordination Action for Libre Software)1,
a EUR 1.5 million EU-funded project which aims to revolutionise how European industry leverages software and
services, was officially launched on Friday 10 September
2004 in Ireland. CALIBRE comprises an interdisciplinary
consortium of 12 academic and industrial research teams
from Ireland, France, Italy, the Netherlands, Poland, Spain,
Sweden, the UK and China.
Libre software, more widely known as "open source
software" (OSS), is seen as a significant challenge to the
dominance of proprietary software vendors. The open source
phenomenon, which has produced such headline products
as the Linux operating system, involves the sharing of software source code with active encouragement to modify and
redistribute the code. Open source has led to the emergence of innovative new business models for software and
services, in which organisations have to compete on product and service attributes other than licensing prices. From
a broader business perspective, several innovative business
models and new business opportunities have emerged as a
result of the OSS phenomenon, and many organisations have
begun to capitalise on this. In terms of competitiveness, the
OSS phenomenon has created a new service market for commercial enterprises to exploit and there are several examples whereby these companies have innovatively forged
competitive advantage. Since purchase price and license fees
are not a factor, OSS companies have to compete predominantly in terms of customer service. Since OSS counters
the trend towards proprietary monopolies, the OSS model
inherently promotes competitiveness and an open market.
Also, by having access to source code, traditional barriers
to entry which militate against new entrants are lowered.
This provides a great opportunity for small and medium
sized enterprises to collaborate and compete in segments
traditionally dominated by multinationals.
Although much of the recent OSS debate has focused primarily on desktop applications (Open Office, Mozilla Firefox,
etc.), the origins and strengths of OSS have been in the platform-enabling tools and infrastructure components that underpin the Internet and Web services; software like GNU/Linux,
Apache, Bind, etc. This suggests that OSS may have a particularly important role to play in the secondary software sector;
i.e. in domains where software is used as a component in other
products, such as embedded software in the automotive sector,
consumer electronics, mobile systems, telecommunications,
and utilities (electricity, gas, oil, etc.). With a focus on the sec-
ondary software sector, different vertical issues, such as embedded software and safety critical applications, are brought
to the fore. The differences in how horizontal issues play out
across different vertical sectors can be dramatic. For example,
the nuances of the software development context in the banking sector are very different from those which apply in the
consumer electronics or telecommunications sectors. A vibrant
European secondary software sector provides fertile research
ground for studying the potential benefits of OSS from a commercial perspective.
Professor Brian Fitzgerald at the University of Limerick believes that "there is enormous potential to provide
increased productivity and competitiveness for European
industry by challenging the proprietary models that dominate software development, acquisition and use".
As part of the two-year CALIBRE project a European
industry open source software research policy forum has
been established. Known as CALIBRATION, it comprises
a number of influential organisations such as Philips Medical, Zope Europe, Connecta, Vodafone and others. The aim
of this forum is to facilitate the adoption of next generation
software engineering methods and tools by European industry, particularly in the ‘secondary’ software sector (e.g.
automotive, telecommunications, consumer electronics, etc.)
where Europe is acknowledged to have particular competitive strengths. The forum also plays a central role in the
European Union policy process.
CALIBRE is focused on three scientific research pillars:
open source software, agile methods and globally-distributed
software development. The CALIBRE consortium comprises
the leading researchers in each of these areas. The intention is
to closely link these researchers with the key industrial partners through the CALIBRATION Industry Policy Forum and
a series of dissemination events, enabling the industrial partners to refine and reshape the CALIBRE research agenda. This will allow for rapid dissemination and the proactive formulation of policy initiatives. Upcoming events organised or co-organised by CALIBRE include:
11th-15th July, OSS 2005, Genova, Italy.
9th September 2005, University of Limerick, Ireland,
title of conference: "The Next Generation of Software Engineering: Integrating Open Source, Agile Methods and Global Software Development".
CALIBRE Workshop on Quality and Security in OSS,
18 Oct 2005 at the 7th National Conference on Software
Engineering, Krakow, 18-21 Oct 2005.
For further information please visit <http://www.calibre.ie> or contact: Andrea Deverell, CALIBRE Events
and Industry Forum Co-ordinator, University of Limerick,
Phone: +353 61 202737. Email: <[email protected]>
1 CALIBRE has cooperated with UPGRADE and Novática for the production of this monograph.
Libre Software Movement: The Next Evolution of
The IT Production Organization?
Nicolas Jullien
© Verbatim copy of this article is permitted only in whole, without modifications and provided that authorship is recognized
Free (libre) software diffusion represents one of the main evolutions of the Information Technology (IT) industry in recent years, and it is not the least surprising one either. In this article we first place this diffusion in its historical context, showing that the IT industry today presents the same characteristics as those seen in former evolutions. We then present the arguments which explain why we think that libre may become a dominant organization for the computer industry.
Keywords: Evolution of The IT Industry, FLOSS, Free/Libre/Open Source Software, Industrial Economics.
1 Introduction
The diffusion of libre software products may eventually change the way programs are developed, distributed and sold, and thus cause profound changes to the IT (Information Technology) industrial organization. This would be far from an exceptional phenomenon, as, in the field of Information Technology, the industrial structure has undergone two major changes in the last fifty years1.
Considering these points, we may wonder whether we
are on the eve of a new industrial structure and whether it
will be based on a libre organization.
To answer this question, we show that the IT industry today presents the same characteristics as those seen in former evolutions (Section 2). In Section 3 we present the arguments which cause us
to believe that libre organization is becoming a dominant
organization for the computer industry.
2 Some Characteristics of The Computer Industry
2.1 Economic Specificities
First of all, a software program can be considered as a "public good", given that2:
- "it is non-rivalrous, meaning that it does not exhibit scarcity, and that once it has been produced, everyone can benefit from it.
- it is non-excludable, meaning that once it has been created, it is impossible to prevent people from gaining access to the good."
In addition, this good is not destroyed by use, so it can be bought once and for all.

1 To avoid the ambiguity of the nouns "Free" (as in freedom) or "Open Source" software, we prefer the French term 'Libre', which is increasingly used. In any case, we are here speaking of software for which the licensee can get the source code, and is allowed to modify this code and to redistribute the software and the modifications.
2 Our understanding of the history of information technology owes much to the work of Breton [5], Genthon [13] and Dréan [11]. The analysis of the organization of the IT industry is also based on the work of Gérard-Varet and Zimmermann [16], introduced in Delapierre et al. [10]. Lastly, our analysis of the economy and industry of software programs owes much to Mowery [25] and Horn [19], whose works, it seems to us, are a reference in this field. We encourage all those who are eager to know more about these subjects to read these studies.
The second characteristic of a computer product is that
it is not made of one piece but rather of a superposition of
several components: hardware (with one specific piece, the
microprocessor), the operating system and the programs.
This implies coordination between different producers, or
that a single producer produces all the components.
The third characteristic, which actually is a consequence
of the first two, is that computer products, and especially software, are subject to "increasing returns to adoption", to use the term from Arthur [1]. He defined five types of increasing returns to adoption, whose impact ranges from the single user to the whole market, and all five are present
in the computer software industry:
Learning effects: you learn to use a program, but also a programming language, which makes it harder to switch to another offering.
Network externalities: the choices of the people you exchange with have an impact on your evaluation of the quality of a good. For instance, even if a particular text
editor is not the one which is most appropriate to your document creation needs, you may choose it because everybody
you exchange with sends you text in that format, and so
you need this editor to read the texts.
Economy of scale: because the production of computer parts involves substantial fixed costs, the average cost per unit decreases when production increases. This is especially the case for software, where there are almost only fixed costs (this is a consequence of the fact that it presents the characteristics of a public good); a brief cost illustration is given below.
Increasing returns to information: one speaks more of Linux because it is widely distributed.
Technological interrelations: as already explained, a piece of software does not work alone, but with some hardware and other pieces of software. What makes the 'value' of an operating system is the number of programs available for this system. And the greater the number of people who choose an operating system, the wider the range of software programs for this very system, and vice versa.
Nicolas Jullien defended his PhD work on the economy of libre software in 2001. He is today in charge of coordinating a research group on the uses of IT applications in Brittany (France), called M@rsouin (Môle Armoricain de Recherche sur la SOciété de l'information et les Usages d'INternet, <http://www.marsouin.org>). He also manages the European CALIBRE (Coordination Action for LIBRE Software Engineering) project for GET (Groupe des Écoles des Télécommunications, <http://www.get-telecom.fr/fr_accueil.html>), one of the participants of the project. <[email protected]>
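As a minimal worked illustration of the economy-of-scale effect listed above (the formula and symbols are ours, added for clarity, and not the author's): let F be the fixed cost of developing a program, c the marginal cost of serving one additional user, and q the number of users. The average cost per user is then

\[ AC(q) = \frac{F + c\,q}{q} = \frac{F}{q} + c \]

Since for software c is close to zero, AC(q) is approximately F/q: doubling the user base roughly halves the average cost, which is why, as the article notes, extending the user population is almost costless compared to development.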
This means that this industry has four original characteristics, in terms of competition structure, according to
Richardson [31].
software being "public goods", the development and production costs do not depend on the size of user population,
and extending this population can be done at a cost, which,
if not null, is negligible compared to development costs.
The pace of innovation is huge because, since the product is not destroyed by use, only innovative, or at least different, products can be resold. This results in a shortening of the product's life span.
These two characteristics lead to fierce competition,
aggressive pricing, and firms trying to impose their solution as the standard in order to take advantage of monopoly
rent.
The other two characteristics are consequences of the
"network effect" and of "technological interrelations":
Firms owning a program have an incentive to develop
some pieces of software which complement the one they
already have. But they are unable to respond to the whole
spectrum of demand connected to their original device (especially when speaking of key programs such as operating
systems). So, at the same time, new firms appear to respond
to new needs.
The consequence is that standards play a very important
role because they make it possible for complementary goods
to work together. Here again, controlling a program, if it
means controlling a standard, is an asset. But to meet all the
demands, one has to make public the characteristics of the
standard (or at least a part of it).
Knowing these characteristics and their consequences
helps us to understand the evolutions of the industry since
its emergence in the middle of the last century.
2.2 Technological Progress, New Markets, Vertical
Disintegrations and New "Competition Regimes"
We will see that each period is characterized by a technology which has allowed firms to propose new products
to new consumers.
1. A dominant technological concept: in the first period (mid 1940s to mid 1960s), there was no real differentiation between hardware and software, and computers were
'unique' research products, built for a unique project. Thanks
to technological progress (miniaturization of transistors,
compilers and operating systems), in the second period
(early 1960s to early 1980s), the scope of use extended in two directions: the reduction in the size and price of computers, raising the number of organizations able to afford
them, and the increase in computing capacities, allowing
the same computer to serve different uses. But the main
evolution characterizing the period was that the same program could be implemented in different computers (from
the same family), allowing the program to evolve, to grow
in size, and to serve a growing number of users. The computer had become a 'classical' good, to be changed once no
longer efficient or too old, but without losing the investments made in software. With the arrival of the micro-processor, the third period began in the late 1970s. Once again
the scope of use extended in two directions (increase in
power and reduction in size and price of low-end computers), the dominant technological concept being that the same
program can be packaged and distributed to different persons or organizations, in the same way as for other tangible
goods.
2. … for a dominant use: in the first period, computers were computing tools, or research tools, for research
centers (often military ones). In the second period (early
1960s to the beginning of the 1980s), they had become tools
for centralized processing of information for organizations
(statistics, payment of salaries, etc.), the size of organizations having access to this tool decreasing during the period. The third period is that of personal, but professional,
information processing.
3. … and a dominant type of increasing return to
adoption. Being a tool for specialists, where each project
allowed producers and users to better understand the possibilities of such machines, the first period was dominated by
learning by using, thus with significant R&D (Research &
Development) costs. In the second period, this learning by
using effect did not disappear, as users were able to keep
their home-made programs while changing their computer.
This possibility also created the dominant increasing return
to adoption effect: technological interrelations. As, in practice, a program was developed for and worked with one
single operating system, it became difficult for a client to
break the commercial relation, once initiated, with a producer. In "exchange" this client no longer even needed to
understand the hardware part of the machine. As in the second period, this effect did not disappear in the third. But the
third period is dominated by the economy of scope thanks
to the distribution of computers, especially PC production
organization3 but principally because of the development
of standardized programs [25].
3 By 'opening' the hardware part of the PC, IBM allowed competitors to produce similar machines and component producers to distribute their products to different manufacturers. This increased competition, in terms of price but also in terms of component efficiency. In return, the distribution of the PC allowed producers to increase the volume of components sold, and thus to decrease their prices, as this production is mainly a fixed-cost production (the R&D and the construction of production capacity).
These technological characteristics provide elements to better understand the structure of the computer industry: the increasing returns to adoption provide those companies which control them with dominant positions.
In the first period the more you participated in projects,
the more able you were to propose innovations for the next
project, thanks to the knowledge accumulated. This explains
the quick emergence of seven dominant firms (in the USA).
The second period was initiated by IBM, with the release of the 360 Series, the first family of computers sharing the same operating system. At the end of the period,
IBM was the dominant firm (even sued for abusing a monopoly position), even if newcomers like HP and Digital had
gained significant positions with mini-computers. Once
these companies had installed a computer for a client, technological interrelations meant that this client would face
substantial costs if switching to another family run by another operating system. And the more clients they served,
the more they could invest in R&D to develop the efficiency
of their computer family, but also the more they could spend
on marketing to capture new clients. Once again this favored
the concentration in manufacturing business.
In the third period, once again, the winners were those
who controlled the key elements of the computer, central in
terms of technological interrelation: operating systems still,
but also micro-processors. They were the companies which
captured the greatest part of the economy of scale benefits,
as competition made prices fall in the other sectors, in particular for the machines which were a source of high profit
before, but also for other components.
While this standardization is one of the key elements which made the distribution of computers possible, it also generates some inefficiencies, because the control of such standards by a single company can lead to this company abusing its dominant/monopoly position. This suspicion arose at the end of the seventies concerning IBM, and today
Microsoft has been sued for abusing its dominant position.
It is not our aim to debate the reality of these practices. But
the existence of such processes proves that some actors do
not feel that the redistribution of increasing return to adoption benefits is efficient.
3 On The Eve of A New Step in The History of
The Information Technology Industry?
3.1 A Need for Normalized "Mass Custom-made" Products
3.1.1 New Technologies
During the 1990s, with the arrival of the Internet, the principal technical evolution in information technology was, of course, the generalization of computer networking, both inside and outside organizations. Miniaturization also allowed the appearance of a new range of 'nomad' products ('organizers' like Psion and Palm, music players, mobile phones).
This falls within the constant evolution of information technology products. One has gone from a single machine, dedicated to one task known in advance and reserved for the entire organization, to multiple, linked machines which are used to carry out different tasks, varying in time, and which are integrated within various organizations. Networking, exchanging between heterogeneous systems, and communication between these machines have all become crucial.
In parallel with this evolution, software program technologies have evolved too [17:126-128]: the arrival of object-oriented programming languages (C++, Java) allowed already developed software components to be re-used. This has led to the concept of "modular software programs": the idea is to develop an ensemble of small software programs (modules or software components), which would each have a specific function. They could be associated with and usable on any machine since their communication interfaces would be standard.
3.1.2 New Dominant Increasing Return to Adoption
Thus, the diffusion of the Internet and the growth of exchanges outside the organization have made network externalities the dominant increasing return to adoption.
3.1.3 New Dominant Uses
These programs and this hardware are often produced by different firms, for different users. It is necessary for these firms to guarantee their availability in the future, in spite of changing versions.
Indeed, within client firms, the demand has become more and more heterogeneous with the networking of various systems and the need for users working in the firm to share the same tools. Software programs (and more particularly, software packages) have to be adapted to the needs and knowledge of every individual without losing the economy of scale benefits, hence the standardization of the programs upon which the solution is based.
It then becomes logical that client firms should seek more open solutions which would guarantee them greater control. For example, what the Internet did was not to offer a "protocol" in order to allow the simple transmission of data, since this already existed, but to offer a sufficiently simple and flexible one that allowed it to impose itself as a standard for exchange.
This is so much the case that Horn [19] defends the idea that we may have entered a new phase in production: "mass custom-made production".
3.1.4 Towards A New Industrial Organization?
However, these service relationships have not proved to
be efficient enough. When one looks into the satisfaction
surveys that have been done with regard to information technology products4, one notes that people are satisfied with
respect to the computer itself but not with the after-sales
service, especially with software programs. The basic tendency shown by the 01 Informatique survey is that the client seeks better before- and after-sales support. He/she also wants to be helped to solve his/her difficulties and wants his/her needs to be satisfied.

4 This is not new; see for instance the satisfaction survey carried out over three years by the weekly magazine "01 Informatique" for large French organizations (issues no. 1521 in 1998, 1566 in 1999, and 1612 in 2000). Other inquiries exist which are sometimes even harsher, like those of De Bandt [9] or Dréan [11] (pp. 276 and following).
We have found all the elements present on the eve of a
new period of IT organization: some technical evolutions,
corresponding to some evolutions of demand, for which the
present industrial organization appears relatively inefficient.
If we admit that we are at the beginning of a new industrial organization, or "regime of competition", one can ask
what could be the characteristics of such a regime.
3.2 Can Libre Be The Next Industrial Organization?
We will defend the idea that the innovation of Libre concerns the software development process. It provides the industry with two linked 'tools': a system to produce what Romer [32] has called "public industrial goods", and a way to organize norm development and implementation5, both of which the software industry lacked.
This should make it possible to redefine service relations and, in that way, cause the industrial organization to evolve.
3.2.1 Libre Production: A Way to Organize A Public
Industrial Goods Production, Respecting Norms ...
More than mere public research products, libre programs
were, first and foremost, tools developed by user-experts,
to meet their own needs. The low quality of closed software
packages and, especially, the difficulty of making them
evolve was one of the fundamental reasons for Richard
Stallman’s initiative6. These user-experts are behind many
libre software development initiatives (among which are
Linux, Apache or Samba) and have improved them. One
must also note that, concerning these flagship software programs, this organization has obtained remarkable results in
terms of quality and quick improvements7.
This is undoubtedly due to the free availability of the
sources which allowed skilled users to test the software programs, to study their code and correct it if they found errors. The higher the number of contributors, the greater the
chance that one of these contributors will find an error, and
will know how to correct it. But libre programs are also
tools (languages) and programming rules that make this
reading possible. All this contributes to guarantee minimum
thresholds of robustness for the software. Other widely distributed libre programs are program development tools (compilers, such as the GCC C/C++ compiler, and development environments, such as Emacs or Eclipse). The reasons are twofold:
they are tools used by computer professionals, who are able and interested in developing or adapting their working tools;
they are the first tools you need to develop software
programs, and their efficiency is very important for program efficiency. That is why FSF’s first products were such
programs, and particularly the GCC compiler.
Co-operative work, the fact that the software programs
are often a collection of simultaneously evolving small-scale
projects, also requires that the communication interface
should be made public and 'normalized'8. Open codes do
facilitate the checking of this compatibility and, if need be,
the modification of the software programs. It is also remarkable to note that, in order to avoid the reproduction of diverging versions of Unix, computer firms have set up organizations which must guarantee the compatibility of the
various versions and distribution of Linux. They must also
publish technical recommendations on how to program the
applications so that they can work with this system in the
same spirit as the POSIX standard9.
The fact that firms use libre programs can be seen as the
creation of professional tools to collectively coordinate to
create components and software program bricks which are
both reliable and, especially, 'normalized'. Up to now, this
collective, normalized base has been lacking within the information technology industry [11].
This normalization of the components used to build
"mass custom-made products" helps to improve the quality
of this production because the services based on them may
be of better quality.
5 Still understood as economics theory defines it, meaning an open system allowing actors to negotiate the characteristics of a component/product/interface and guaranteeing that product design will respect these characteristics.
6 Stallman 'invented' the concept of the libre program, with the creation of the GNU/GPL license and of the Free Software Foundation, the organization which produces them; see <http://www.fsf.org/gnu/thegnuproject.html>. See also <http://www.gnu.org/prep/standards.html> for technical recommendations on how to program GNU software.
7 About the way libre development is structured, besides Raymond [28][29][30], one can also refer to Lakhani and von Hippel [24] and Jullien [21]. See Tzu-Ying and Jen-Fang [33] for a survey and an analysis of the efficiency of on-line user community involvement, and Bessen [4] and Baldwin and Clark [3] for a theoretical analysis of the impact of libre code architecture on the efficiency of libre development. The latter argue that libre may be seen as a new development 'institution' (p. 35 and later). As to performance tests, one can refer to <http://gnet.dhs.org/stories/bloor.php3> for operating systems. The results of numerous comparative evaluations are available on the following sites: <http://www.spec.org> and <http://www.kegel.com/nt-linux-benchmarks.html> (the latter mainly deals with NT/Linux).
3.2.2 ... Allowing The Development of A More Efficient
Service Industry10
To prove that a more efficient, perennial service industry can be built on libre products, we have to analyze two
points: from the firms’ perspective 1) that these offers are
more interesting than the existing ones and that there is some
business, 1.bis) that this business is financially sustainable;
and from a global perspective 2) that in the long run it provides actors with enough incentives to contribute to the development of such public goods to maintain the dynamism of innovation.

8 In the sense that they respect public formats whose evolution is decided collectively.
9 This is the Free Standards Group, <http://www.freestandards.org/>. Among others, members of this committee are: Red Hat, Mandriva, SuSE/Novell, VA Software, Turbo Linux, and also IBM, SUN or Dell, etc.
10 This theoretical analysis is based on a study of the commercial strategies of companies which say they sell libre-software-based services or products in France (see Jullien [22]).
The Business
There is a business based on libre software. As with classical 'private'11 programs, when using libre ones, it is necessary to define one’s needs, to find a/the software program
that answers them, to install it and, sometimes, to adapt it
by developing complementary modules. Once installed, it
is necessary to follow its evolution (security upgrade, new
functionalities...). It should be taken into account that users
(firms, administrations or even single users) are not always
competent enough to evaluate, install or follow the evolution of these software programs. They do not always know
how to adapt them to their own needs.
All this requires the presence of specialists of these software programs in the firm, which is not always easy. And
most of the business users do not need them on a full-time
basis. That is why, for a long time, some actors in the libre movement have argued that "companies should be created
and that this activity should be profitable" (Ousterhout [27]).
Of course, the absence of license fees definitely bestows
a competitive advantage on the libre solution. But this alone
does not justify its adoption: over the long term, this solution must prove to be less expensive and yet offer the same
quality standards. Proprietary solution manufacturers use
this indicator to defend their offers12.
Let’s consider now the specific advantages of libre
software.
We have already said that the most mature libre programs were of very high quality. This facilitates the relationships between the producers of a software-based solution and those who use this solution. Producers can more
easily guarantee, through a contract, the reliability of the
libre programs they use, because they are able to evaluate
their quality thanks to the norm they have set up during the
development phase. An assistance network is available to
them and they can also intervene by themselves in these
software programs. In addition, the fact that the software
program sources are accessible and that the evolution of
these programs is not controlled by a firm can reassure the
adopter: the solution respects and will continue to respect
the standards. It will thus remain inter-operable with the
other programs he/she uses.
The pooling of software bricks should also change the
competition among service firms towards long-term relationships and maintenance of software programs. It would become more difficult for them to pretend that the malfunctioning of a software program they have installed and parameterized is due to a program error. This can encourage firms to improve customer services, and allows us to say that, in this field, libre solutions can be competitive.

11 We prefer this term to "proprietary", as all programs have an owner. Here "private" means that the owner does not share the program with others, as in a classical software distribution.
12 This is called TCO, for "Total Cost of Ownership". Today, Microsoft defends the idea that, even if its software programs are more expensive than libre programs, they have a lower TCO, because it is easier to find firms that install them, given that their evolution is guaranteed, controlled by a firm, etc.
Does that 'theoretical' organization provide libre service
companies with profitable business models? This is undoubtedly the most delicate point to defend today. There are few
examples of profitable firms and many, still, have not reached a balance. However, we can point out the following:
with regard to production costs, thanks to construction from modules, the cost of developing software programs is more
broadly spread over time, thus resembling a service production structure whereby the missing functionality is developed only when necessary. The contribution of the service firms does not relate to the entire production of a software program but to the production of these components
for clients who prefer libre programs so as not to depend on
their supplier. Moreover, a component that has been developed for one client can be re-used to meet the needs of another client. A "security hole" that has been detected for
one client can be corrected for all the clients of the firm. As
a consequence, firms monopolize part of the economies of
scale generated by the collective use of a software program.
In exchange, they guarantee the distribution of their innovations and corrections, which is one of the traditional roles of software publishers. But traditionally, publishers finance this
activity by producing and selling new versions of the software program.
One may say that service firms which base their offers
on libre programs propose free 'codified' knowledge, that
is, software programs in order to sell the 'tacit' knowledge
they possess: the way software programs intimately function, the capabilities of their developers to produce contributions that work, to have those who control the evolution
of software programs accept these contributions, etc. These
firms are the most competent to take advantage of the benefits linked to the learning generated by the development and improvement of software programs.
because of these learning effects and because it is difficult to diffuse the tacit knowledge one needs to master in
order to follow and influence the evolution of a libre program, this role will inevitably be limited to a small number
of firms. They will bring together specialists in software
programs and will place them at the disposal of client-firms.
They will have created strong trademarks, recognized by
the users-developers of software programs and known by
other clients. This will make it possible to diminish the pressure of competition, thus ensuring their profit margins.
Even if it is hard to measure the incentives to innovate, such competition should also encourage these producers to
contribute to the development of the software programs they
use.
The Contribution to Software Development
First of all, it is a way to make themselves known and
demonstrate their competence as developers to their clients.
Because every client has different needs, it is important
for the firms to master a vast portfolio of software programs
as well as to contribute to the development of standard software programs which are used in most offers. They must be
able to present their clients with achievements that are linked to their problems. It is not so much a question of mastering technical products as of being able to follow, or even control, their evolution, in order to guarantee the client, in the long run, that it will meet his/her needs. And it is easier to follow the evolution of these software programs if one takes part in the innovation process, as it is easier to understand other people's innovations (Cohen and Levinthal [6]).
In a market based on the increase in value of technical
expertise, these contribution activities reinforce the image of
a firm with regard to its expertise and capacities to be reactive, two qualities which allow it to highlight a special offer
as well as to improve its reputation (via the trademark) and
increase margins. On the other hand, this once again will
reinforce the tendency to concentrate on specific activities
because it is necessary to lower research costs and, therefore, to increase the number of projects and clients.
A more important source of innovation should be that
coming from users. As it is important to have the modifications of the program included in the official version (not to
have to redevelop these modifications for each new version
of the program), most of the new functionalities developed
by or for a user should be redistributed to all. Incidentally,
this will also give incentives for the service companies to
participate in the development of the most rapidly evolving software. If they want to be able to propose add-ons for their clients, they have to already be known as 'authorized' contributors13.
4 Conclusion: Choosing The Right Economic Landscape
While the libre movement seems to be the next step in a historical trend, and its global economic model can be described, it is rather clear that the business models which will emerge and structure this new period are not yet well defined.
This stresses the need for more analysis of these models, an analysis initiated by Dahlander [8] and Jullien et al. [23]. But we have to focus on producer-community relationships and on the competitive advantage of managing a libre project. This also means better understanding how the libre organization(s) of production work(s), understanding the incentives for developers to participate in this production, and measuring the productivity of libre organizations.
This is the research agenda of the CALIBRE (Coordination Action for LIBRE Software Engineering) European
research project14.
13 Firms such as Easter Eggs in France are today paid by companies to have a modification of a libre program accepted and integrated into the official distribution.
14 <http://www.calibre.ie>.
Acknowledgements
This work has been funded by RNTL (Réseau National des Technologies Logicielles, French National Network for Software Technologies, <http://www.telecom.gouv.fr/rntl/>). The final report of this work is available at <http://www-eco.enst-bretagne.fr/Etudes_projets/RNTL/rapport_final/>.
References
[1] W. B. Arthur. "Self-reinforcing mechanisms in economics". In P. W. Anderson, K. J. Arrow, and D. Pines, editors, "The Economy as an Evolving Complex System". SFI Studies in the Sciences of Complexity, Addison-Wesley Publishing Company, Redwood City, CA, 1998.
[2] W. B. Arthur. "Competing technologies, increasing returns and lock-in by historical events: The dynamics of allocations under increasing returns to scale". Economic Journal, 99: 116-131, 1999. <http://www.santafe.edu/arthur/Papers/Pdf_files/EJ.pdf>.
[3] C. Y. Baldwin and K. B. Clark. "The architecture of cooperation: How code architecture mitigates free riding in the open source development model". Harvard Business School, 43 pages, 2003. <http://opensource.mit.edu/papers/baldwinclark.pdf>.
[4] J. Bessen. "Open source software: Free provision of complex public goods". Research on Innovation, 2002. <http://www.researchoninnovation.org/online.htm#oss>.
[5] P. Breton. "Une histoire de l’informatique". Point Sciences,
Le Seuil, Paris, 1990.
[6] W. M. Cohen and D. A. Levinthal. "Innovation and learning: The two faces of R&D". Economic Journal, 99: 569-596, 1989.
[7] M. Coris. "Free software service companies: the emergence
of an alternative production system within the software industry?" In [23, pp. 81-98], 2002.
[8] L. Dahlander. "Appropriating returns from open innovation
processes: A multiple case study of small firms in open source
software". School of Technology Management and Economics, Chalmers University of Technology, 24 pages, 2004.
<http://opensource.mit.edu/papers/dahlander.pdf.>
[9] J. De Bandt. "Services aux entreprises: informations, produits,
richesses". Economica, Paris, 1995.
[10] M. Delapierre, L.-A. Gerard-Varet, and J.-B. Zimmermann. "Choix publics et normalisation des réseaux informatiques". Technical report, Rapport BNI, December 1980.
[11] G. Dréan. "L’industrie informatique, structure, économie,
perspectives". Masson, Paris, 1996.
[12] J. Gadray. "La caractérisation des biens et des services, d'Adam Smith à Peter Hill: une approche alternative". Technical report, IFRESI, Lille. Document de travail, 1998.
[13] C. Genthon. "Croissance et crise de l'industrie informatique mondiale". Syros, Paris, 1995.
[14] C. Genthon. "Le cas Sun Microsystem". ENST Bretagne, 2000. <http://www-eco.enst-bretagne.fr/Enseignement/2A/1999-2000/EST201/sun/sun00.htm>. Course material.
[15] C. Genthon. "Le libre et l'industrie des services et logiciels informatique". RNTL, 2001. <http://www-eco.enst-bretagne.fr/Etudes_projets/RNTL/workshop1/genthon.pdf>, workshop.
[16] L.-A. Gérard-Varet and J.-B. Zimmermann. "Concept de produit informatique et comportement des agents de l'industrie". In panel "Structures économiques et économétrie", May 1985.
[17] F. Horn. "L'économie du logiciel". Tome 1: De l'économie de l'informatique à l'économie du logiciel. Tome 2: De l'économie du logiciel à la socio-économie des "mondes de production" des logiciels. PhD, Université de Lille I, mention: économie industrielle, 570 pages, 2000. <http://www-eco.enst-bretagne.fr/Etudes_projets/RNTL/documents_universitaires.html>.
[18] F. Horn. "Company strategies for the freeing of a software source code: opportunities and difficulties". In [23, pp. 99-122], 2002.
[19] F. Horn. "L’économie des logiciels". Repères, La Découverte,
2004.
[20] N. Jullien. "Linux: la convergence du monde Unix et du
monde PC". Terminal, 80/81: 43-70. Special issue, Le
logiciel libre, 1999
[21] N. Jullien. "Impact du logiciel libre sur l’industrie
informatique". PhD, Université de Bretagne Occidentale /
ENST Bretagne, mention: sciences économiques, 307 pages,
Novembre 2001. <http://www-eco.enst-bretagne.fr/
Etudes_projets/RNTL/documents_universitaires.html>.
[22] N. Jullien. "Le marché francophone du logiciel libre". Systèmes
d’Information et Management, 8 (1): 77-100, 2003.
[23] N. Jullien, M. Clément-Fontaine, and J.-M. Dalle. "New economic models, new software industry economy". Technical Report, RNTL (French National Network for Software Technologies) project, 202 pages, 2002. <http://www-eco.enst-bretagne.fr/Etudes_projets/RNTL/>.
[24] K. Lakhani and E. von Hippel. "How open source software works: Free user to user assistance". Research Policy, 32: 923-943, 2003. <http://opensource.mit.edu/papers/lakhanivonhippelusersupport.pdf>.
[25] D. C. Mowery, editor. "The International Computer Software
Industry, A comparative Study of Industry Evolution and Structure". Oxford University Press, 1996.
[26] L. Muselli. "Licenses: strategic tools for software publishers?", In [23, pp. 129-145], 2002.
[27] J. Ousterhout. "Free software needs profit". Communications
of the ACM, 42 (4): 44-45, April 1999.
[28] E. S. Raymond. "The Cathedral and the Bazaar", 1998. <http://www.tuxedo.org/~esr/writings/cathedral-bazaar/>.
[29] E. S. Raymond. "Homesteading the Noosphere", 1998. <http://www.tuxedo.org/~esr/writings/homesteading/>.
[30] E. S. Raymond. "The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary". O'Reilly, Sebastopol, California, 1999.
[31] G. B. Richardson. "Economic analysis, public policy and the software industry". In The Economics of Imperfect Knowledge - Collected Papers of G.B. Richardson, volume 97-4. Edward Elgar, DRUID Working Paper, April 1997.
[32] P. Romer. "The economics of new ideas and new goods". Annual Conference on Development Economics, 1992, World Bank, Washington D.C., 1993.
[33] C. Tzu-Ying y L. Jen-Fang. "A comparative study of online
user communities involvement in product innovation and
development". National Cheng Chi University of Technology and Innovation Management, Taiwan, 29 pages, 2004.
<http://opensource.mit.edu/papers/chanlee.pdf>.
[34] J.-B. Zimmermann. "Le concept de grappes technologiques.
Un cadre formel". Revue économique, 46 (5): 1263-1295,
Septembre 1995.
[35] J.-B. Zimmermann. "Un régime de droit d’auteur: la propriété
intellectuelle du logiciel". Réseaux, 88-89: 91-106, 1998.
[36] J.-B. Zimmermann. "Logiciel et propriété intellectuelle: du
copyright au copyleft". Terminal, 80/81: 95-116. Special
Issue, Le logiciel libre, 1999.
Measuring Libre Software Using Debian 3.1 (Sarge)
as A Case Study: Preliminary Results
Juan-José Amor-Iglesias, Jesús M. González-Barahona, Gregorio Robles-Martínez, and Israel Herráiz-Tabernero
This paper is copyrighted under the CreativeCommons Attribution-NonCommercial-NoDerivs 2.5 license available at <http://creativecommons.org/licenses/by-nc-nd/2.5/>
The Debian operating system is one of the most popular GNU/Linux distributions, not only among end users but also as
a basis for other systems. Besides being popular, it is also one of the largest software compilations and thus a good
starting point from which to analyse the current state of libre (free, open source) software. This work is a preliminary study
of the new Debian GNU/Linux release (3.1, codenamed Sarge) which was officially announced recently. In it we show the
size of Debian in terms of lines of code (close to 230 million source lines of code), the use of the various programming
languages in which the software has been written, and the size of the packages included within the distribution. We also
apply a ‘classical’ and well-known cost estimation method which gives an idea of how much it would cost to create
something on the scale of Debian from scratch (over 8 billion USD).
Keywords: COCOMO, Debian, Libre Software, Libre Software Engineering, Lines of Code, Linux.
1 Introduction
On June 6, 2005, the Debian Project announced the official release of the Debian GNU/Linux version 3.1,
codenamed "Sarge", after almost three years of development [6]. The Debian distribution is produced by the Debian
project, a group of nearly 1,400 volunteers (a.k.a.
maintainers) whose main task is to adapt and package all
the software included in the distribution [11]. Debian
maintainers package software which they obtain from the
original (upstream) authors, ensuring that it works smoothly
with the rest of the programs in the Debian system. To ensure this, there is a set of rules that a package should comply with, known as the Debian Policy Manual [5].
Debian 3.1 includes all the major libre software packages available at the time of its release. In its main distribution alone, composed entirely of libre software (according
to Debian Free Software Guidelines), there are more than
8,600 source packages. The whole release comprises almost
15,300 binary packages, which users can install easily from
various media or via the Internet.
In this paper we analyse the system, showing its size
and comparing it to other contemporary GNU/Linux systems1. We decided to write this paper as an update of Counting Potatoes (see [8]), and Measuring Woody (see [1]) which
were prompted by previous Debian releases. The paper is
structured as follows. The first section briefly presents the
methods we used for collecting the data used in this paper.
Later, we present the results of our Debian 3.1 count (including total counts, counts by language, counts for the largest packages, etc.). The following section provides some
comments on these figures and how they should be interpreted and some comparisons with Red Hat Linux distributions and other free and proprietary operating systems. We
close with some conclusions and references.
2 Collecting The Data
In this work we have considered only the main distribution, which is the most important and by far the largest part of any Debian release. It is composed exclusively of free software (according to the Debian Free Software Guidelines, DFSG [7]). Other sections, such as non-free or contrib, are not covered here.
1 GNU/Linux systems are also known as 'distributions'.
Juan-José Amor-Iglesias has an MSc in Computer Science from the Universidad Politécnica de Madrid, Spain, and he is currently pursuing a PhD at the Universidad Rey Juan Carlos in Madrid, Spain. Since 1995 he has collaborated in several free-software-related organizations: he is a co-founder of LuCAS, the best known free documentation portal in Spanish, and of Hispalinux, and he collaborates with Barrapunto.com. <[email protected]>
Jesús M. González-Barahona teaches and researches at the
Universidad Rey Juan Carlos, Madrid, Spain. He started working
in the promotion of libre software in the early 1990s. Since then
he has been involved in several activities in this area, such as
the organization of seminars and courses, and the participation
in working groups on libre software. He currently collaborates
in several libre software projects (including Debian), and
participates in or collaborates with associations related to libre
software. He writes in several media about topics related to libre software, and consults for companies on matters related to
their strategy regarding these issues. His research interests
include libre software engineering and, in particular, quantitative
measures of libre software development and distributed tools
for collaboration in libre software projects. He has been editor of the Free Software section of Novática since 1997 and has been guest
editor of several monographs of Novática and UPGRADE on
the subject. <[email protected]>
Gregorio Robles-Martínez is a PhD candidate at the Universidad Rey Juan Carlos in Madrid, Spain. His main research interest
lies in libre software engineering, focusing on acquiring
knowledge of libre software and its development through the
study of quantitative data. He was formerly involved in the
FLOSS project and now participates in the CALIBRE
coordinated action and the FLOSSWorld project, all European
Commission IST-program sponsored projects. <[email protected]>
Israel Herráiz-Tabernero has an MSc in Chemical and
Mechanical Engineering, a BSc in Chemical Engineering and
he is currently pursuing his PhD in Computer Science at the
Universidad Rey Juan Carlos in Madrid, Spain. He ‘discovered’
free software in 2000, and has since developed several free tools
for chemical engineering. <[email protected]>
The approach used for collecting the data is as follows: first, the sources for the distribution are retrieved from the public archives on the Internet, through archive.debian.org <ftp://archive.debian.org> and its mirrors, on a per-package basis. Debian provides source code packages and binary packages. We have used the former in this study, although the latter are what tend to be downloaded by users as they are pre-compiled. For each source code package there may be one or many binary packages.
Our second step was to analyse the packages and extract the information we were looking for using SLOCCount2 [12]. The lines-of-code count is only an estimate, due to some peculiarities of the tool (which relies on heuristics to identify source code and programming languages) and to the criteria chosen for the selection of packages [8].
The third step was to identify and remove packages that appear several times in different versions (for instance, this happens with the GCC compiler) so as not to count the same code more than once. This may lead to an underestimation, as in some cases the source code bases may not be that similar (in the case of PHP, we have kept the PHP4 version but removed PHP3). We have, however, kept some cases where we know that significant amounts of common code are present (for instance xemacs and emacs, or gcc and gnat).
The final step was to draw up a set of reports and statistical analyses using the data gathered in the previous steps, considering it from various points of view. These results are presented in the following section.
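As a rough illustration of how such a count can be assembled, the sketch below runs SLOCCount over a directory of unpacked source packages and aggregates per-language totals. It is only a sketch: the paths, the duplicate list and the parsing of SLOCCount's per-language summary are our assumptions, not the actual scripts used for this study.

```python
import re
import subprocess
from collections import Counter
from pathlib import Path

def count_package(pkg_dir: Path) -> Counter:
    """Run SLOCCount on one unpacked source package and collect per-language totals."""
    out = subprocess.run(["sloccount", str(pkg_dir)],
                         capture_output=True, text=True, check=True).stdout
    counts = Counter()
    # Assumes SLOCCount's language summary prints lines such as "ansic:  123456 (52.3%)".
    for match in re.finditer(r"^(\w+):\s+(\d+)\s+\(", out, re.MULTILINE):
        counts[match.group(1)] += int(match.group(2))
    return counts

def count_distribution(root: Path, duplicates: set[str]) -> Counter:
    """Aggregate counts over all packages, skipping known duplicate versions."""
    total = Counter()
    for pkg_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        if pkg_dir.name in duplicates:   # e.g. an older PHP3 package when PHP4 is kept
            continue
        total += count_package(pkg_dir)
    return total

# Illustrative usage (paths and duplicate list are made up):
# totals = count_distribution(Path("/data/sarge-sources"), {"php3"})
# print(sum(totals.values()), "SLOC in total")
```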
3 Results of Debian 3.1 Count
After applying the methodology described we calculated
that the total source lines of code count for Debian 3.1 is
229,496,000 SLOC (Source Lines Of Code). Results by category are presented in the following subsections (all numbers are approximate, see [4] for details).
Language     Source Lines of Code (SLOC)      %
C                        130,847,000       57
C++                       38,602,000       16.8
Shell                     20,763,000        9
LISP                       6,919,000        3
Perl                       6,415,000        2.8
Python                     4,129,000        1.8
Java                       3,679,000        1.6
FORTRAN                    2,724,000        1.2
PHP                        2,144,000        0.93
Pascal                     1,423,000        0.62
Ada                        1,401,000        0.61
TOTALS                   229,496,000      100

Table 1: Count of Source Lines of Code by Programming Language in Debian 3.1.
3.1 Programming Languages
The number of physical SLOC and percentages, broken
down by programming language, are shown in Table 1.
Below 0.5% there are some other languages such as
Objective C (0.37%), ML (0.31%), Yacc (0.29%), Ruby
(0.26%), C# (0.23%) or Lex (0.10%). A number of other
languages score less than 0.1%.
The pie chart in Figure 1 shows the relative importance
of the main languages in the distribution. Most Debian packages are written in C, but C++ is also to be found in many
packages, being the main language in some of the most important ones (such as OpenOffice.org or Mozilla). Next up
comes Shell, which is mainly used by scripts supporting
configuration and other auxiliary tasks in most packages.
Surprisingly LISP is one of the top languages, although this
can be explained by the fact that it is the main language in
several packages (such as emacs) and is used in many others. While this is not reflected in our results, there is a historical trend towards a relative decline of the C programming language combined with a growing importance of more
modern languages such as Java, PHP, and Python.
3.2 Largest Packages
The following list shows the largest packages in Debian 3.1, ordered by size. For each
package we give the package name, version, total number
of SLOC, composition of programming languages, and a
description of the purpose of the software.
OpenOffice.org (1.1.3): 5,181,000 SLOC. C++ accounts
for 3,547,000 SLOC. C accounts for 1,040,000 SLOC.
There is also code written in 15 more languages, either
scripting languages (such as shell, tcl, python or awk)
or non-scripting languages (pascal, java, objective-C,
lisp, etc).
Linux kernel (2.6.8): 4,043,000 SLOC. C accounts for 3,794,000 SLOC; Makefiles, assembler and scripts in several languages account for the rest. This is the latest kernel included in the Debian 3.1 release.
NVU (N-View) (0.80): 2,480,000 SLOC. Most of the
code is C++, with more than 1,606,000 SLOC, plus a
large percentage of C (798,000 SLOC). Other languages,
mainly scripting languages, are also used. It is a complete web authoring system capable of rivalling well
known proprietary solutions such as Microsoft
FrontPage.
Mozilla (1.7.7): 2,437,000 SLOC. Most of its code is
C++, with more than 1,567,000 SLOC plus a large percentage of C (789,000 SLOC). Mozilla is a well known
open source Internet suite (WWW browser, mail client,
etc).
GCC-3.4 (3.4.3): 2,422,000 SLOC. C accounts for
1,031,000 SLOC, Ada for 485,000 SLOC and C++ for
244,000 SLOC. Other languages are used minimally.
GCC is the popular GNU Compiler Collection.
XFS-XTT (1.4.1): 2,347,000 SLOC. Mainly 2,193,000
SLOC of C. Provides an X-TrueType font server.
XFree86 (4.3.0): 2,316,000 SLOC. Mainly 2,177,000 SLOC of C. An X Window implementation, including a graphics server and basic programs.

2 We use SLOCCount revision 2.26. It currently recognizes 27 programming languages.

Figure 1: Breakdown of Source Lines of Code for The Predominant Languages in Debian 3.1.
VNC4 (4.0): 2,055,000 SLOC. VNC4 is a remote console
access system, mainly programmed in C with 1,920,000
SLOC.
Insight (6.1): 1,690,000 SLOC, mainly programmed in C
(1,445,000 SLOC). Insight is a graphical debugger based on
GDB.
kfreeBSD5-source (5.3): 1,630,000 SLOC. This is the source code of the FreeBSD 5.3 kernel, a base for a future GNU distribution built on the FreeBSD kernel.
It should be noted that this list would have varied if
Debian maintainers had packaged things following different criteria. For instance, if all emacs extensions had been
included in the emacs package it would have been much
further up the table (probably in the "top ten" list). However, a Debian source package tends to be very much in line
with what upstream authors consider to be a package, which
is usually based on software modularization principles.
Figure 2 provides a breakdown of the sizes of all Debian
3.1 packages. Throughout our study of Debian distributions
over time, from version 2.0 (released in 1998) to version
3.0 (released in 2002), we have observed that the mean size
of packages is around 23,000 lines [10]. For Debian 3.1 the
mean size of packages has increased to 26,600 lines. The
reason behind this is not yet clear, and further studies need
to be conducted, but it may be because the number of packages is growing faster than the number of maintainers, so
that the previous equilibrium no longer exists.
3.3 Effort and Cost Estimations
The COCOMO model (COnstructive COst MOdel) [2] provides a rough estimation of the human and monetary effort needed to generate software of a given size. It takes as an input metric the number of source lines of code. Since this estimation technique is designed for 'classical' software generation processes and for large projects, the results it gives when applied to Debian packages should be viewed with caution. In any case, we will use a basic COCOMO model to give us an effort estimation based on its size. Using the SLOC count for the Debian source packages, the data provided by the basic COCOMO model are as follows:
Total physical SLOC count: 229,495,824
Estimated effort: 714,440.52 person-months (59,536.71 person-years). Formula: 2.4 * (KSLOC^1.05)
Estimated schedule: 105.84 months (8.82 years). Formula: 2.5 * (Effort^0.38)
Estimated cost to develop: 8,043,000,000 USD
To reach these figures, each project was estimated as
though it had been developed independently, which is true
for nearly all cases. For calculating the cost estimation, we
have used the mean salary for a full-time systems programmer in 2000 according to Computer World [3] - 56,286 USD
per year - and an overhead factor of 2.4 (for an explanation
of how this factor is arrived at and other details of the estimation model see [13]).
4 Comparison with Other Systems
To put the figures shown above into context, here are some
software sizes for operating systems. The figures that appear in
Table 2 have been obtained from several different sources (listed
in [10]) and refer to approximate lines of code.
Most of these numbers (in fact, all of them except for Red Hat Linux, Fedora Core and Debian) are estimates, and it is even difficult to know what they consider to be a line of code (i.e. whether or not they take comments and blank lines into account). However, for the purposes of this paper they provide enough insight and hence we consider them suitable for comparison.
It should also be noted that, while Red Hat and Debian
include a great many applications and, in many cases, even
several applications within the same category, Microsoft and
Sun operating systems include only a limited number of them
(which also tend to be small in size). If the most common
applications used in those environments were to be included,
they would be far larger. However, it is also true that all
those applications are neither developed nor put together by
the same team of maintainers, as is the case of Linux-based
distributions.
Figure 2: Package Sizes for Debian 3.1. Counts in SLOCs Are Represented on A Logarithmic Scale.

From these numbers, it can be seen that Linux-based distributions in general, and Debian 3.1 in particular, are
some of the largest pieces of software ever put together by a
group of maintainers.
5 Conclusions and Related Work
Debian is one of the largest software systems in the world,
probably the largest. Its size has grown with every release,
3.1 being twice the size of 3.0. For the last few releases, the
main languages used to develop packages included in Debian
are C and C++. In fact C, C++ and Shell represent more
than 75% of all source code in Debian. The number of
packages continues to grow steadily, doubling almost every
two years.
The Debian GNU/Linux distribution, put together by a
group of volunteers dispersed all over the world, would, at first
sight, appear to show a healthy and fast-growing trend. Despite
its enormous size it continues to deliver stable releases. However, there are some aspects that cast doubt on the future sustainability of this progress. For instance, mean package size
is showing an unstable behaviour, probably due to the number
of packages growing faster than the number of maintainers.
Nor can we forget that we have had to wait almost three years
for a new stable release and that the release date has been seriously delayed on several occasions.
Regarding other software systems, there are few detailed
studies of the size of modern, complete operating systems.
The work by David A. Wheeler, counting the size of Red
Hat 6.2 and Red Hat 7.1 is perhaps the most comparable.
Some other references provide total counts of some Sun and
Microsoft operating systems, but while they do provide
estimates for the system as a whole, they are not detailed
enough. Debian is by far the largest of them, although this
comparison has to be taken with a degree of caution.
To conclude, it is important to stress that this paper aims
to provide estimations based only on a preliminary study
(since the release is not yet officially published). However,
we believe they are accurate enough to allow us to draw
some conclusions and compare them with other systems.
Operating System                              Source Lines of Code (SLOC)
Microsoft Windows 3.1 (April 1992)                      3,000,000
Sun Solaris (October 1998)                              7,500,000
Microsoft Windows 95 (August 1995)                     15,000,000
Red Hat Linux 6.2 (March 2000)                         17,000,000
Microsoft Windows 2000 (February 2000)                 29,000,000
Red Hat Linux 7.1 (April 2001)                         30,000,000
Microsoft Windows XP (2002)                            40,000,000
Red Hat Linux 8.0 (September 2002)                     50,000,000
Fedora Core 4 (previous version; May 2005)             76,000,000
Debian 3.0 (July 2002)                                105,000,000
Debian 3.1 (June 2005)                                229,500,000

Table 2: Size Comparison of Several Operating Systems.
Acknowledgements
This work has been funded in part by the European Commission,
under the CALIBRE CA, IST program, contract number 004337, in part
by the Universidad Rey Juan Carlos under project PPR-2004-42, and in
part by the Spanish CICyT under project TIN2004-07296.
References
[1] Juan José Amor, Gregorio Robles, and Jesús M. González-Barahona. Measuring Woody: The size of Debian 3.0 (pending publication). Will be available at <http://people.debian.org/~jgb/debian-counting/>.
[2] Barry W. Boehm. Software Engineering Economics, Prentice Hall,
1981.
[3] Computer World, Salary Survey 2000. <http://www.computerworld.com/cwi/careers/surveysandreports>.
[4] Jesús M. González Barahona, Gregorio Robles, and Juan José Amor, Debian Counting. <http://libresoft.urjc.es/debian-counting/>.
[5] Debian Project, Debian Policy Manual. <http://www.debian.org/doc/debian-policy/>.
[6] Debian Project, Debian GNU/Linux 3.1 released (June 6th 2005).
<http://www.debian.org/News/2005/20050606>.
[7] Debian Project, Debian Free Software Guidelines (part of the Debian Social Contract). <http://www.debian.org/social_contract>.
[8] Jesús M. González Barahona, Miguel A. Ortuño Pérez, Pedro de las Heras Quirós, José Centeno González, and Vicente Matellán Olivera. Counting potatoes: The size of Debian 2.2. UPGRADE, vol. 2, issue 6, December 2001, <http://upgrade-cepis.org/issues/2001/6/up2-6Gonzalez.pdf>; Novática, nº 151 (nov.-dic. 2001), <http://www.ati.es/novatica/2001/154/154-30.pdf> (in Spanish).
[9] Jesús M. González-Barahona, Gregorio Robles, Miguel Ortuño-Pérez, Luis Rodero-Merino, José Centeno-González, Vicente Matellán-Olivera, Eva Castro-Barbero, and Pedro de-las-Heras-Quirós. Anatomy of two GNU/Linux distributions. Chapter in the book "Free/Open Source Software Development" edited by Stefan Koch and published by Idea Group, Inc., 2004.
[10] Gregorio Robles, Jesús M. González-Barahona, Luis López, and
Juan José Amor, Toy Story: an analysis of the evolution of Debian
GNU/Linux, November 2004 (pending publication). Draft
available at <http://libresoft.urjc.es/debian-counting/>.
[11] Gregorio Robles, Jesús M. González-Barahona, and Martin Michlmayr. Evolution of Volunteer Participation in Libre Software Projects: Evidence from Debian, July 2005, Proceedings of the First International Conference on Open Source Systems, Genova, Italy, pp. 100-107. <http://gsyc.escet.urjc.es/~grex/volunteers-robles-jgb-martin.pdf>.
[12] David Wheeler. SLOCCount. <http://www.dwheeler.com/
sloccount/>.
[13] David A. Wheeler. More Than a Gigabuck: Estimating GNU/
Linux’s Size. <http://www.dwheeler.com/sloc>.
An Institutional Analysis Approach to
Studying Libre Software ‘Commons’
Charles M. Schweik
This paper is copyrighted under the CreativeCommons Attribution-NonCommercial-NoDerivs 2.5 license available at <http://creativecommons.org/licenses/by-nc-nd/2.5/>
Anyone interested in Libre software will be interested in the question of what leads to success and failure of Libre
projects. This paper marks the beginning of a five-year research program, funded by the U.S. National Science
Foundation, to identify design principles that lead to successful Libre software development efforts. Recently,
scholars have noted that Libre software projects can be considered a form of ‘commons’, producing software
public goods. This connection is important, for a large body of theoretical and empirical findings exists related
to long-enduring environmental commons which could also apply to and inform Libre software projects. Institutions – defined here as rules-in-use – are a central set of variables known to influence the ultimate outcome of
commons settings (e.g., long-enduring commons or ones that succumb to what G. Hardin has called the “Tragedy
of the Commons”). To date, we know relatively little about the institutional designs of Libre projects and how they
evolve. This paper presents an oft-used framework for analyzing the institutional designs of environmental
commons settings that will guide upcoming empirical research on Libre software projects. It presents a trajectory
of these projects and discusses ways to measure their success and failure. The paper closes by presenting example
hypotheses to be tested related to institutional attributes of these projects.
Keywords: Common Property, Commons, Institutions,
Libre Software.
1 Introduction
Several articles in UPGRADE and Novática's 2001 issue on Open Source/Free Software [1] (referred to from now on as simply 'Libre' software) noted that the composition of development teams was changing, from all-volunteer teams to teams with paid participants from
industry, government or not-for-profit organizations [2].
While the Libre collaborative approach is not a panacea,
there are enough success stories to conclude that this
development paradigm is viable and important. At the
same time, a much higher number of Libre projects have
been abandoned before reaching the goals they set out to
achieve at their outset [28]. Therefore, an important question, recognized by a number of researchers [3-8] is: what
factors lead to success or failure of Libre projects?
Recently Libre software development projects have
been recognized as a form of ‘commons’, where sets of
volunteer and paid professional team members from all
over the globe collaborate to produce software that is a
public good [9-13][53]. This recognition provides the
opportunity to connect separate streams of research on
the management of natural resource commons ([14][15]
provide summaries) with the more traditional information system research related to the production of software, and Libre software in particular.
Viewing Libre projects as a commons focuses attention on attributes and issues related to collective action,
governance, and the often complex and evolving system of rules that help to achieve long-enduring commons [16]. Hardin's famous phrase, "Tragedy of the
Commons," [17] describes settings where users who
share a commons (e.g., a pasture) over-consume the
resource, leading to its destruction. For each herdsman,
the addition of one more animal to the herd adds
positive utility because it is one more animal to sell.
The negative is that it is one more animal grazing on
the commons. The rational choice of each herder is to
add more animals, leading eventually to overgrazing of
the commons.
But because Libre software are digital, over-consumption of the commons is not the concern. Sustaining (and perhaps growing) a team of developers is. In
these settings the tragedy to be avoided is the decision
to abandon the project prematurely, not because of an
external factor (such as a better technology has come
along that is a better solution than what the project will
produce), but because of some kind of problem internal
to the project (such as a conflict over project direction,
loss of financial support, etc.). (See Endnote 1.)
Charles M. Schweik is an Assistant Professor in the Dept.
of Natural Resources Conservation and the Center for
Public Policy and Administration at the University of
Massachusetts, Amherst, USA. He has a PhD in Public
Policy from Indiana University, a Masters in Public
Administration from Syracuse University, and has an
undergraduate degree in Computer Science. A primary
research interest is in the use and management of public
information technology. For more than six years, between
his undergraduate degree and his MPA, he was a programmer
with IBM. <[email protected]>
Since this is an important point, let me try and
analyze this particular tragedy following Hardin’s logic.
In Libre software development settings, developers (and
possibly users, testers, documenters) replace the herdsmen as decision-makers. The motivation for these people to participate is in part the anticipation that the
software being produced will fill a personal or organizational need. However, research on Libre developer
motivations [58] has shown that participants receive
other positive benefits from participating.
From the developer's perspective, it is worth spending one unit of time contributing to this project because he is: (1) getting his name known and thus increasing the possibility of future job or consulting opportunities, (2) learning new skills through reading source code and through the peer review of his code submissions, and/or (3) getting paid by his employer to participate.
Alternatively, the developer might decide to stop contributing time because he does not like the direction in which the project is going, or because his contributions are not being accepted and he is not receiving adequate feedback on why. In these situations, the accumulation of developer dissatisfaction may lead to premature project abandonment because of factors internal to the project.
The tragedy of the commons in this context is about the premature loss of a production team, not over-appropriation as in Hardin's famous pasture example. Consequently, a key concern faced by Information Technology (IT) organizations that are considering Libre software as a policy or strategy is how a vibrant production and maintenance effort can be sustained over the longer term and how the premature abandonment of the project can be avoided.
In "Governing the Commons" [18], Elinor Ostrom
emphasized that in some environmental commons settings Hardin’s tragedy is avoided – the commons becomes "long-enduring" – because of the institutional
designs created by self-governing communities. Institutions, in this context, can be defined as sets of rules –
either formal or informal – consisting of monitoring,
sanctioning and conflict resolution mechanisms that help
to create appropriate incentive structures to manage the
commons setting. In Libre software commons settings,
the evolution of project institutions may help to explain
why some Libre software projects move more smoothly
from alpha to beta to stable and regular release cycles
and grow and maintain larger development teams and
user communities, while other projects become abandoned before they reach maturity. While research shows
that a vast majority of Libre software projects have
either one developer or small teams [48-52], I think the
influence of institutional design will become increasingly critical as projects grow (in terms of interested
parties) and mature.
Moreover, the increasing involvement of firms and
government agencies in Libre software development will
undoubtedly lead to more complex institutional environments. For this reason, I think attention to the institutional designs of Libre projects is critically important
as Libre software collaborations (and other "open content collaborations", see [13]) become more commonplace.
This paper describes some components of a five-year research program just underway which will study
the institutional designs of Libre software development
projects from a commons perspective. A primary goal
of the research program is to identify design principles
that contribute to the ultimate success or failure of
these projects. The paper is organized in the following
manner.
First, I explain why Libre software development
projects are a type of commons, or more specifically, a
"common property regime". Next, I define what we
mean by institutions and describe a theoretical framework utilized by many social scientists to study institutional designs of environmental commons settings. I
then describe the general trajectory of Libre software
development projects, and discuss ways to measure
success and failure at these stages of this trajectory. I
provide some examples of hypotheses related to institutional designs that, when empirically tested, could help
to identify design principles for Libre software projects.
I close with a discussion of why this should matter to
IT professionals.
Figure 1: A General Classification of Goods. (Adapted from [21:7].)
Figure 2: A Framework for Institutional Analysis of Commons Settings. (Adapted from [21:73].)
2 Libre Software Are Public Goods Developed
by Common Property Regimes
It is possible to view Libre software from two perspectives: use and development. I’ll consider the use
side first. Social scientists recognize four categories of goods (private, public, club, and common-pool resources), distinguished by two properties (Figure 1) [22][21]: first, how easy or difficult is it to exclude others from using or accessing the good? Second, does the good exhibit rivalrous properties? That is, if I have one unit of the good, does that prevent others from using it as well?
Traditional proprietary software can be classified as
a club good in Figure 1. The digital nature of software (downloadable over the Internet or copied off of CD-ROM) makes it non-rivalrous. The pricing for access
(and the do-not-copy restrictions of the "shrinkwrap
license") make exclusion of non-purchasers possible
[53].
But this exclusion is not always successful. It is widely understood that illegal copying of proprietary software occurs, creating a different form of club, with entrance to the club based on the willingness to risk being caught rather than on a price for access. But regardless of whether the company can successfully crack down on illegal bootlegging of its software, because of its digital nature proprietary software falls in the club-good category.
Libre software differs from proprietary software in
that Libre licenses (such as the GNU General Public
License) permit users to copy and distribute the software wherever they please as long as they comply with
the specifications of the license [54]. These licenses
provide a mechanism for acting upon a violation of the
specified rules, so exclusion is theoretically possible
through litigation under contract or copyright law [61-62], but in most cases this is unlikely [61].
Since Libre software is also non-rivalrous – it is
freely copied digitally over the Internet or on CD-ROM
(such as in the case of a Linux distribution, for example) – technically, it should be classified as a club good
– a club with no "fee" other than license compliance to
join. But given that Libre software distribution is global
in reach with no monetary cost associated with it,
many classify it as a public good [23-25][55].
Now let me turn to a discussion about the production of Libre software. I noted earlier that some consider
Libre software projects a form of "commons" [25][54].
McGowan [53] refers to these commons as "social
spaces" for the production of freely available and modifiable software.
While these projects involve collaboration, and contrary to what some might believe, there are property
rights (copyright) and ownership issues in these commons [53].
Raymond (cited by McGowan) defines owners of a
Libre software project as ones who have "exclusive
right, recognized by the community at large, to redistribute modified versions" [53: 24]. According to
Raymond, one becomes owner of a project by either:
(1) being the person or group who started the project
from scratch; (2) being someone who has received
authority from the original owner to take the lead in
future maintenance and development efforts; (3) being
a person who takes over a project that is widely
recognized as abandoned and makes a legitimate effort
to locate the original author(s) and gets permission to
take over ownership. McGowan adds a fourth option –
the "hostile takeover" – where the project can be
hijacked or "forked" because of the "new derivative
works" permissions provided by the license. Forking
often occurs when a project is deemed by some on the
team to be headed technically or functionally in the
wrong direction. A kind of mutiny can occur and a
new project is created using the existing source from
the old project. The result is two competing versions
[53].
Some readers might find the definition of Libre
project owners by Raymond somewhat troublesome.
This definition encapsulates Raymond’s libertarian view
of Libre projects, where the community as a whole
somehow together recognizes ownership rights and collectively acts as one to support them. To some, this
collective recognition and action may appear rather
hard to believe.
An alternative way of identifying or defining an
owner in Libre software settings is through a person or
team’s ability to initiate or maintain a coherent collective development process.
From this perspective, ownership is more a result of
the barriers against expropriation and does not require
some mystical collective endorsement. The reader should
note too that this alternative definition of Libre ownership is consistent with Raymond and McGowan’s four
ways to become a recognized Libre software owner
listed above.
Given the ownership aspects above, here is the key
point: Libre software projects are a form of self-governing "common property regime," with developers working collaboratively to produce a public good
[9][11][12][27] [13][25][53][54]. While ‘commons’ is the
term most often used, "common property regime" more
accurately describes Libre software projects.
The recognition of Libre projects as a form of
common property regime provides an opportunity to
connect knowledge amassed over the years on the
governance and management of natural resource commons under common property settings (e.g., [14][15]).
Weber recently noted the importance of governance
and management in Libre software projects when he
stated: "The open source process is an ongoing experiment. It is testing an imperfect mix of leadership,
informal coordination mechanisms, implicit and explicit
norms, along with some formal governance structures
that are evolving and doing so at a rate that has been
sufficient to hold surprisingly complex systems together"
[12:189].
3 A Framework for Studying The Institutional
Designs of Libre Software Projects
Weber’s recognition of social norms, informal coordination processes and formal governance structures
coincides with what political scientists and economists
refer to as "institutions" [18][21][31]. For more than 40
years, researchers, including this author [32-34], have
utilized the "Framework for Institutional Analysis" (Figure 2) to organize thinking about environmental commons cases [31:8]. This framework has not yet been
applied to the study of Libre software commons, but
the analytic lens it provides complements other Libre
software research underway by researchers in more
traditional information systems fields (e.g., [35-38]).
Consider the situation where an analyst is trying to
understand why a particular Libre software project is
lively or why it is losing momentum. Figure 2 depicts
Libre projects as a dynamic system with feedback. The
analyst might begin studying the project by first looking at the elements on the left-hand side: the physical,
community and rule attributes.
Physical attributes refers to a variety of variables
related to the software itself or to some of the infrastructure to coordinate the team. These include the type
of programming language(s) used, the degree to which
the software is modular in structure, and the type of
communication and content management infrastructure
used.
Community attributes refers to a set of variables relating to the people engaged in the Libre software project,
such as whether they are volunteer or paid to participate,
whether they all speak the same language or not, and
aspects that are more difficult to measure related to social
capital, such as how well team members get along, how
well they trust each other [63], etc. This component also
includes other non-physical attributes of the project, such
as its financial situation and the sources that provide this
funding (e.g., a foundation).
Rules-in-use refers to the types of rules in place
that are intended to guide the behavior of participants
as they engage in their day-to-day activities related to
development, maintenance or use of the Libre software.
The specific Libre license used is one important component of the rules-in-use category. But I expect that most
Libre projects – especially more mature ones with larger
numbers of participants – will have other sets of formal
or informal rules or social norms in place that help to
coordinate and manage the project.
The middle section of Figure 2, Actors and the
Action Arena, indicates a moment or a range of time
where the left side attributes remain relatively constant
and actors (e.g., software developers, testers, users)
involved in the Libre software project make decisions
and take actions (e.g., programming, reviewing code,
deciding to reduce or stop their participation, etc.). The
aggregation of these actors making decisions and taking
actions is depicted as Patterns of Interactions in Figure 2. The accumulation of actions results in some Outcome (right side, bottom of Figure 2). An outcome could be a change in the physical attributes of the Libre software commons (e.g., a new release), a change in the community attributes of the project (e.g., new people joining in or people leaving), a change to the existing rules-in-use (e.g., a new system for resolving conflicts) or any combination thereof. In Figure 2, these kinds of changes are depicted through the feedback system from outcome to the three sets of commons attributes on the left side of the same figure, and a new time period begins.

Figure 3: Stages of Libre Software Projects and Outcome (Success) Measures. (Adapted from Schweik and Semenov, 2003 [46].)
Much of what institutional analysis is about is the
study of rules, from formal laws to informal norms of
behavior, standard operating procedures, and the like.
Embedded in the rules-in-use category of Figure 2 are
three nested levels that, together, influence actions taken
and resultant outcomes at these different levels of
analysis [39]. Operational level rules affect the day-to-day activities of participants in the Libre software
commons. These can be formally written rules, or,
perhaps more often in Libre settings, could be community norms of behavior.
An example of operational rules might be the procedures for adding new functionality to the next release
of the software. Another example might be rules on
promoting a particular developer to a position where he
or she has more decision-making responsibility in the
project. The Collective-Choice level represents the discussion space where team members with authority define group goals, and create or revise operational rules
to move the team toward these goals. In addition, at
this level there is a system of collective-choice rules
that define who is eligible to change operational rules
and specify the process for changing these rules [21].
Collective-choice rules might, for example, determine
how the team can change the process of code review
before new code can be checked-in or "committed"
[40]. Constitutional-Choice level rules specify who is
eligible to change collective-choice rules and also define the procedures for making such changes. For example, if the formally designated leader of a Libre
project has decided to move on to a new opportunity,
constitutional-choice rules would outline how a replacement is chosen.
At each level, there can be systems for the monitoring of rule conformance and systems for sanctioning
when rules are broken.
In short, at any point in time in the lifecycle of a
Libre software project, programmers, users, and testers
will make their decisions on how they participate based
on the existing physical, community and institutional
attributes of the project, as well as their anticipation of
where the project is headed and their own personal
circumstances. Participants make decisions and take
actions at three levels: operational, collective-choice
and, less frequently, constitutional-choice.
One hypothesis to be tested in this research is that
the systems of rules-in-use at any one of these levels
will become more complex as a Libre software project
matures and gains participants. I also expect the institutional design to become more complex in situations
where one or more organizations (e.g., firms, nonprofits or government agencies) provide resources to
support the project. This is consistent with McGowan
[53:5] when he states: "The social structures necessary
to support production of large, complex projects are
different from the structures – if any – necessary to
support small projects…".
4 The Trajectory of Libre Software Projects
I now turn to the question of how to evaluate the
"Outcomes" component in this framework. While Figure 2 reveals a feedback loop to draw attention to the
dynamic and evolutionary nature of these projects
[48][45], it doesn't depict these longitudinal properties
very well. For this reason I include Figure 3.
In earlier work [46] I argued that Libre software
projects follow a three-stage trajectory (Figure 3): (1)
initiation; (2) a decision to "go open" and license it as
Libre software; and (3) a period of growth, stability or
abandonment. Most Libre software research focuses
on projects at Stage 3. But some of the decisions made
at the earlier stages may be critical factors leading to
the outcome of growth or abandonment at Stage 3.
Consider Stage 1 in Figure 3. In many cases, Libre
software projects start with a private consultation of
one or a few programmers working together to develop
a ‘core’ piece of software.
At this juncture, the software may not yet be placed
under a Libre software license or made available on the
Internet, and in some circumstances the team may not
even be planning at the moment to license it as Libre
software. But critical design decisions may be made at
this stage, such as the modularity of the code, which
might greatly influence how well the software can be
developed in a Libre common property setting in Stage
3.
While the "small and private group starting from
scratch" scenario is probably what most might think of
in the initiation phase of a Libre software project, there
is at least one other alternative: "software dumping"
[60]. In these situations, the software first is developed
under a more traditional, proprietary, closed-source,
software-development model within a firm. At some
point – perhaps after years of work – decision-makers
may make a strategic decision not to support the software any more, and as a consequence, make the code
available and license it as Libre software. This scenario
may become more prominent in future years if software
firms continue to consider Libre software as part of
their business strategy.
The "going open" stage (Figure 3, Stage 2) is
probably brief but perhaps not as simple as it might at
first appear. In this stage, team members decide on an
appropriate Libre software license, and, perhaps more
importantly, create a public workspace and a collaborative infrastructure (e.g., versioning system, methods for
peer review, bug tracking mechanism, etc.) to support
the project.
Platforms like Sourceforge.net and Freshmeat.net have
made this step fairly easy, but there are some projects
that utilize web-based platforms that they have implemented themselves.
I should note at this juncture that in some Libre projects, Stages 1 and 2 can be conflated. It may be relatively common for a founding member to have an idea and immediately broadcast an appeal for other partners to help in the development of the project.
This appeal may immediately involve the creation
of a project page on the web or on a hosting site such
as Sourceforge.net. But regardless of how the project
gets through Stage 2, the next step is Stage 3 of Figure
3. This stage describes the period in the project’s life
cycle where the software is actively being developed
and used under Libre software licensing and is publicly
available on the Internet. Many of the early studies of
Libre software projects focused on cases that fall under
the "high growth" (in terms of market share or number
of developers) success stories such as Linux or Apache
Web Server. Achieving this stature is often the default
or assumed measure of success of these projects in
Libre software literature.
However, empirical studies of Libre software have
shown that the majority of projects never reach this
stage and many, perhaps most, involve only a small
number of individuals [48-52]. Some of these studies
may be focusing on projects in the early days of their
life cycle, where people are working to achieve high
growth. But in other instances, members of a particular
project may be quite satisfied to remain "stable" and
active with a small participant base (Figure 3, Stage 3:
Small Group). Some Libre software projects in
bioinformatics might provide examples of these kinds
of circumstances [47].
The main point regarding Figure 3 is that there are
important stages in the trajectory of Libre software
projects and that the measures for success and failure
will likely change during these stages. Moreover, physical, community and institutional attributes of projects
will evolve as well.
5 Measuring Success or Failure of Libre Software Projects along This Trajectory
I noted earlier that a goal of this research project is
to define "design principles" that lead to successful
Libre software collaborations. In the empirical work I
am just initiating, success or failure of Libre projects is
the concept I seek to explain. What follows is a
description of one method for measuring success and
failure. Others have undertaken research trying to quantify this as well, and I build upon their important work
[3][4][8][41].
For my purposes, an initial measure of success or
failure in Libre project collaboration requires asking
two questions in sequence. First, does the project exhibit some degree of development activity or, from a development perspective, does the project look abandoned? Second, for projects that appear to be abandoned, were they abandoned for reasons that were
outside of the team’s control? Let me elaborate on each
question.
5.1 Does The Project Exhibit Developer Activity or Does It Look Abandoned?
Several studies have measured whether a Libre software project is 'alive' or 'dead' [48], by monitoring changes in the following physical attribute variables of the software (Figure 2) over some period of time:
• Release trajectory (e.g., movement from alpha to beta to stable release) [3].
• Version number [3][48].
• Lines of code [48][43].
• Number of 'commits' or check-ins to a central storage repository [45].
Similarly, the analyst could monitor changes in community attribute variables (Figure 2) such as:
• The activity or vitality scores as measured on collaborative platforms such as Sourceforge.net or Freshmeat.net [3][8][48].
Obviously, if any of these metrics show some change
over a period of time, the project demonstrates some
level of activity or life. A key issue here will be deciding what time range is long enough to declare a project dead or abandoned. I expect that
some more mature software projects may show periods
of dormancy in terms of development activity until
some interesting new idea is suggested by a developer
or user. Consequently, the range of time with no signs
of activity should be relatively long in order to determine project death, or, better yet, the analyst should
find some acknowledgment in project documentation
(e.g., website) that the project has been closed down or
abandoned.
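To make this activity test concrete, the following is a minimal sketch in Python (a language of my choosing; the project names, commit dates and the 18-month dormancy threshold are illustrative assumptions, not values proposed here) of how an analyst might flag apparently abandoned projects from commit timestamps harvested from a hosting site:

```python
from datetime import date, timedelta

# Illustrative dormancy threshold: treat a project with no commits for
# roughly 18 months as a candidate for "abandoned" status. The paper argues
# the window should be relatively long and, ideally, confirmed by project
# documentation (e.g., a website notice that the project has been closed).
DORMANCY_WINDOW = timedelta(days=18 * 30)

def looks_abandoned(commit_dates, as_of):
    """Return True if the project shows no development activity
    within the dormancy window ending at `as_of`."""
    if not commit_dates:
        return True  # no recorded activity at all
    last_commit = max(commit_dates)
    return (as_of - last_commit) > DORMANCY_WINDOW

# Hypothetical example data: commit dates per project.
projects = {
    "project_a": [date(2004, 11, 2), date(2005, 1, 15)],
    "project_b": [date(2002, 6, 30)],
}

for name, commits in projects.items():
    status = "looks abandoned" if looks_abandoned(commits, date(2005, 6, 1)) else "active"
    print(f"{name}: {status}")
```

The same pattern could be applied to any of the physical or community attribute variables listed above (version numbers, lines of code, vitality scores), with the dormancy window chosen to suit the maturity of the projects under study.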
5.2 If The Project Looks Abandoned, Did The
Abandonment Occur Because of Factors Outside
of The Team’s Control?
A classification of a project as dead does not by
itself necessarily mean it was a failed project [63].
Some projects may exhibit no activity because they
have reached full development maturity: the software
produced does the job and requires no more improvements. In these instances, the project would be classified as a success story, not a failure.
In other instances a project may be classified as
dead or abandoned but have become so for reasons
outside the project team’s control, such as in the case
where another (rival) technology has come along that is
technologically superior or becomes more accepted (recall the Gopher versus WWW example in Endnote 1).
In these instances, the project should probably not be
considered a failure from a collaborative point of view
(although in some cases it probably could be). There were simply rival technologies that were better at doing the job.
But there will be other cases where the project
shows no signs of development life, the software has not matured to its full potential, and there were no apparent external factors that led to developers abandoning the project. I would classify these as premature abandonment cases, for some internal factor in the project led to people abandoning the effort before it reached maturity.
Consequently, for this research program, I intend to
use the questions in Section 5.1 and 5.2 to classify
projects into success or failed categories. Successful
projects will either show some level of life or will exhibit no further signs of development because they have reached development maturity.2 Projects that were abandoned because of external influences will be dropped from the population of cases, for they cannot be
classified as either a success story or a failed case.
Cases that appear to be abandoned prematurely because
of some kind of internal issue or issues will be the
ones classified as failures. These metrics will be fairly
easy to collect for projects that are in Stage 3 (growth)
in Figure 3. It will be more difficult to identify projects
to study that fall in the earlier stages (Stage 1 or 2),
but when I do, these same concepts should apply.
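The two-question screen in Sections 5.1 and 5.2 can be summarized as a simple decision procedure. The sketch below (Python, with hypothetical field names; the category labels merely restate the classification scheme described above) maps each project to a success, a failure, or a case to drop from the study population:

```python
def classify_project(shows_activity: bool,
                     reached_maturity: bool,
                     abandoned_for_external_reasons: bool) -> str:
    """Apply the two-question screen described in Sections 5.1 and 5.2."""
    # Question 1: any sign of development life?
    if shows_activity:
        return "success (active)"
    # Question 2: why does the project look abandoned?
    if reached_maturity:
        return "success (mature, no further development needed)"
    if abandoned_for_external_reasons:
        return "excluded (e.g., superseded by a rival technology)"
    return "failure (premature abandonment due to internal factors)"

# Hypothetical examples
print(classify_project(True, False, False))   # active project
print(classify_project(False, False, True))   # Gopher-style case, dropped from the population
print(classify_project(False, False, False))  # premature abandonment
```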
6 Toward The Identification of "Design Principles" for Libre Software Commons
Until now, I have tried to emphasize four points.
First, that Libre software projects are a form of common property regime with physical, community
and institutional attributes that influence their performance. Second, that there are many ways to measure the
success and failure of these projects but an important
one will be a measure of the collaboration being abandoned prematurely (failure) or remaining until the software reaches a level of maturity (success). Third, that
the institutional designs – the rules-in-use – are an area
that has to date been largely neglected in Libre software
studies. Fourth, that the identification of "design principles" which lead to success of these projects at the
different stages in Figure 3 is desirable as more organizations turn to Libre software as an IT strategy.
The identification of design principles will require a
systematic study of Libre software projects at different
stages in Figure 3 with attention given to appropriate
measures of success and failure at each stage. Hypotheses will need to be made related to all three sets of
independent variables – physical, community and institutional attributes in Figure 1 – based upon work in
traditional software development, more recent studies focusing explicitly on Libre software projects, and relevant work
related to natural resource commons. But because of
space limitations, I will conclude this paper by providing some hypotheses related to the institutional designs
of Libre software projects (rules-in-use in Figure 1) and
noting their relationship to studies of natural resource
commons.
Libre software projects will be more successful (not
be abandoned prematurely) if they include some degree
of voice by lower level participants in the crafting of
operational-level rules.
It has been shown that in natural resource commons
settings the resources are better sustained when users
have some rights to make and enforce their own operational-level rules [18][14]. Applying this to Libre software projects: if, for example, operational-level rules
are imposed by some overarching authority without the
consultation of others working "in the trenches," these
workers may become disenchanted and abandon the
project. Alternatively, if developers and users associated
with the Libre software project have some say in the
crafting or revising of operational-level rules as the
project progresses, commons theory suggests they will
be more willing to participate over the long run.
Libre software projects will be more successful (not
be abandoned prematurely) if they have established
collective-choice arrangements for changing operational
rules when necessary.
It has also been shown that long-enduring natural
resource commons tend to have institutional designs
that allow for rule adaptation when needed. Systems
with fixed rules will more likely fail because the
understanding at the time they were crafted may be, to
some degree, flawed, or the situation they were designed to work in will eventually change [15].
Libre software projects will be more successful (not
be abandoned prematurely) if they have systems established to handle disputes between group members.
Studies such as Divitini et al. [56] and Shaikh and
Cornford [57] provide discussions on conflict in Libre
software settings. The most extreme type of conflict is
"forking," described earlier. Commons settings with conflict management in place often result in early resolution coupled with new learning, and understanding within
the group [15:1909]. Projects not capable of handling
conflict can lead to dysfunctional situations where cooperation is no longer possible.
Libre software projects will be more successful (not
be abandoned prematurely) if
… they have systems in place that provide for the
monitoring of operational rules.
… they have some level of graduated sanctions for
people who break established rules.
… they have rule enforcers whose judgments are
deemed effective and legitimate.
Operational rules work only if they are enforced.
Research in natural resource commons settings has shown
that often systems of low-cost monitoring can be established by the users themselves, and are most effective
when there are (at first) modest sanctions given to offenders [15]. Sharma, Sugumaran and Rajgopalan [59] note
that monitoring and sanctioning systems do exist in some
form in Libre software projects. However, in current Libre
software literature little is mentioned on this topic. Commons literature suggests that the chance for success will
be higher if there is formal or informal monitoring of
operational rule conformance as well as a set of tiered
social sanctions in place to rein in rule-breakers. A remonstrance by direct one-to-one email that, if unsuccessful, progresses to "flaming" in view of the entire team is one example of a graduated sanction procedure in Libre software settings. In addition, commons
studies have also shown that rule enforcement is more apt
to work when the people imposing sanctions are deemed
effective and legitimate [15]. Translated to Libre cases,
effective sanctioning of rule breakers requires someone
who possesses formal designation to do this or who is
recognized as a legitimate group authority.
7 Conclusions
The hypotheses provided in the previous section are
intended to be illustrative of what needs to be done to
move toward the identification of design principles in
Libre software commons settings. I have provided examples highlighting institutional (rules-in-use) issues because I think this is an area that has, until now, been
neglected in the Libre software research. However, testable hypotheses certainly can be generated related to
other categories of attributes on the left side of Figure
1. For example, an obvious but important one
is: Libre software projects will be more successful if
they have a regular and committed stream of funding
coming in to support their endeavor.
It may be that, for many Libre projects, attention to
institutional design is simply not important, because the
development team comprises only one or a small
number of individuals. More important variables at that
stage may be physical or community attributes. However,
I suspect that in the larger (in terms of lines of code)
projects, or in Libre projects where more than one firm or
organization is contributing resources to support the project,
the institutional design will become a much more important set of variables. Over the next few years, funded by
the U.S. National Science Foundation, I will be undertaking a systematic study of these projects, looking specifically at the design and evolution of their institutional structures and at these issues.
Why should UPGRADE and Novática readers care?
Here I return to where I started. Several papers in their
2001 issue emphasized the changing nature of participation in Libre software projects: that increasingly actors
are not volunteers but people paid by their organizations to contribute to the development of the software.
It is not difficult to imagine a future where government
agencies and/or firms devote resources to work on a
Libre software project together. (Firms are already doing this right now.) A main lesson from natural resource commons research is that institutions matter. I expect that
as Libre software and Libre software commons mature,
institutional attributes will become increasingly important and apparent as factors that lead to the success or
failure of these projects.
Endnotes
1. I am indebted to an anonymous reviewer for
making the point that some abandoned projects are not
tragedies. This reviewer provided the example of the
Gopher technology being superseded by the World Wide
Web technology. This is a case of an external factor
leading to the early abandonment of the software project
but would not be considered a tragedy. I should also
note that the idea of project cancellation has been used
in more traditional software development in the past
[42], but the phrase "premature abandonment" rather
than "premature cancellation" better fits Libre settings
since in many cases there is no formal organization
making the decision to end the project prematurely.
2. An additional analytic part of this project will be
to analyze the ‘vibrancy’ of successful projects – capturing the degree of life (in terms of developer or user
activity) a project exhibits. In other words, I ultimately
want to develop a measure of success that moves
beyond the ‘live’ versus ‘dead’ metric. Several studies
(e.g., [3][4][8]) have looked at vibrancy metrics, focusing on variables such as the number of people in
the formal development team or the extended development team (e.g., bug reporters), number of commits,
number of downloads, etc. Other possible vibrancy
metrics might include an examination of the direction
of change in numbers of formal or extended developer
teams. However, a more thorough examination of these
metrics is needed – beyond what can be done in this
paper. For it is likely that any vibrancy metric will be
closely tied to the stage of development of the project.
For example, Dalle and colleagues [44] note that more
active, younger projects on Sourceforge.net are likely
to attract developers at a higher rate than older, more
mature projects with larger code bases. From this perspective, vibrancy metrics might look very similar between a project that is being abandoned prematurely
and a project that is reaching maturity. For this reason,
I only want to point out here that I intend to investigate further how to conceptualize and operationalize vibrancy metrics; doing so is beyond the scope of this paper.
Acknowledgments
Support for this study was provided by a grant from the U.S.
National Science Foundation (NSF IIS-0447623). However, the findings, recommendations, and opinions expressed are those of the
authors and do not necessarily reflect the views of the funding
agency.
References
[1] UPGRADE, Vol. II, No. 6, December 2001, <http://
www.upgrade-cepis.org/issues/2001/6/upgrade-vII-6.html>;
Novática, n. 154 (nov.-dic. 2001), <http://www.ati.es/
novatica/2001/154/nv154sum.html> (in Spanish).
[2] R.W. Hahn. "Government Policy Toward Open Source Software: An Overview." In R. W. Hahn (ed.) Government Policy
toward Open Source Software. Washington, D.C.: AEI-Brookings Joint Center for Regulatory Studies, 2002.
[3] K. Crowston, H. Annabi, and J. Howison. "Defining Open
Source Project Success." In Proceedings of the 24th International Conference on Information Systems (ICIS 2003).
Seattle, WA, 2003.
[4] Crowston, Annabi, Howison, and Masango. "Towards a Portfolio of FLOSS Project Success Measures." In Feller,
Fitzgerald, Hissam, and Lakhani (eds.) Collaboration, Conflict and Control: The Proceedings of the 4th Workshop on
Open Source Software Engineering. Edinburgh, Scotland,
2004.
[5] K. Crowston and B. Scozzi. "Open Source Software Projects
as Virtual Organizations: Competency Rallying for Software Development," IEE Proceedings Software (149:1), pp.
3-17, 2002.
[6] W. Scacchi. "Understanding the Requirements for Developing
Open Source Software Systems." IEE Proceedings Software.
149, 1: 24-39, 2002.
[7] W. Scacchi. "Free and Open Source Development Practices in
the Game Community." IEEE Software. January/February,
2004.
[8] K.J. Stewart and T. Ammeter. "An Exploratory Study of Factors Influencing the Level of Vitality and Popularity of Open
Source Projects." In L. Applegate, R. Galliers, and J.I.
DeGross (eds.) Proceedings of the Twenty-Third International
Conference on Information Systems, Barcelona. Pp. 853-57,
2002.
[9] Y. Benkler. "Coase’s Penguin, or Linux and the Nature of the
Firm." Yale Law Journal. 112 (3), 2002.
[10] A. Nuvolari. "Open Source Software Development: Some
Historical Perspectives." Eindoven Center for Innovation
Studies, Working paper 03.01. <http://opensource.mit.edu/
nuvolari.pdf>, 2003.
[11] J. Boyle. The second enclosure movement and the construction of the public domain. Law and Contemporary Problems. 66(1-2): 33-75, 2003.
[12] S. Weber. The Success of Open Source. Cambridge, MA:
Harvard University Press, 2004.
[13] C.M. Schweik, T. Evans, and J.M. Grove. "Open Source
and Open Content: A Framework for the Development of
Social-Ecological Research." Ecology and Society (pending
publication).
[14] E. Ostrom, J. Burger, C.B. Field, R.B. Norgaard, and D.
Policansky. "Revisiting the Commons: Local Lessons, Global Challenges." Science 284. pp. 278-282, 1999.
[15] T. Dietz, E. Ostrom, and P. Stern. "The Struggle to Govern
the Commons." Science 302(5652). pp. 1907-1912, 2003.
[16] C. Hess and E. Ostrom. "Ideas, Artifacts and Facilities: Information as a Common-Pool Resource. Law and Contemporary Problems. 66(1&2), 2003.
[17] G. Hardin. "The Tragedy of the Commons." Science.
162:1243-48, 1968.
[18] E. Ostrom. Governing the Commons: The Evolution of
Institutions for Collective Action. New York: Cambridge
University Press, 1990.
[19] R. Netting. Balancing on an Alp: Ecological Change and
Continuity in a Swiss Mountain Community. Cambridge:
Cambridge University Press, 1981.
[20] J.M. Baland and J.P. Platteau. Halting Degradation of
Natural Resources. Is there a Role for Rural Communities?
Oxford University Press, 1996.
[21] E. Ostrom, R. Gardner, and J.K. Walker. Rules, Games, and
Common-Pool Resources, Ann Arbor: University of Michigan Press, 1994.
[22] V. Ostrom and E. Ostrom. "Public Goods and Public
Choices." In Alternatives for Delivering Public Services:
Toward Improved Performance. E.S. Savas (editor). Boulder, Colo: Westview Press. pp. 7-49, 1977.
[23] J. Bessen. "Open Source Software: Free Provision of Complex Public Goods." <http://www.researchoninnovation.org/
opensrc.pdf>, 2001 .
[24] Peter Kollock. "The Economies of Online Cooperation: Gifts
and Public Goods in Computer Communities." In Communities in Cyberspace, edited by Marc Smith and Peter
Kollock. London: Routledge, 1999.
[25] R. van Wendel de Joode, J.A. de Bruijin, and M.J.G. van
Eeten. Protecting the Virtual Commons: Self Organizing
Open Source and Free Software Communities and Innovative Intellectual Property Regimes. The Hague: T.M.C. Asser
Press, 2003.
[26] S.V. Ciriacy-Wantrup and R.C. Bishop. "’Common Property’ as a Concept in Natural Resource Policy." Natural Resources Journal. 15:713-27, 1975.
[27] A. Nuvolari. "Open Source Software Development: Some
Historical Perspectives." Eindoven Center for Innovation
Studies, Working paper 03.01. <http://opensource.mit.edu/
papers/nuvolari.pdf>, 2003.
[28] K. Healy and A. Schussman. "The Ecology of Open-Source
Software Development." Available at <http://
opensource.mit.edu/papers/ healyschussman.pdf>, 2003.
[29] B.J. McCay and J.M. Acheson. The Question of the Commons: The Culture and Ecology of Communal Resources.
Tucson: University of Arizona Press, 1987.
[30] D. W. Bromley et al. Making the Commons Work: Theory,
Practice, and Policy. San Francisco: ICS Press, 1992.
[31] C. Hess and E. Ostrom. "A Framework for Analyzing Scholarly Communication as a Commons." Presented at the Workshop on Scholarly Communication as a Commons, Workshop in Political Theory and Policy Analysis, Indiana University, Bloomington, IN, March 31-April 2, 2004. <http:/
/dlc.dlib.indiana.edu/archive/00001244/>.
[32] C.M. Schweik. The Spatial and Temporal Analysis of Forest
Resources and Institutions. Doctoral dissertation. Center for the
Study of Institutions, Population and Environmental Change,
Indiana University. Bloomington, IN, 1998.
[33] C.M. Schweik. Optimal Foraging, Institutions and Forest
Change: A Case from Nepal. Environmental Monitoring and
Assessment. 62: 231-260, 1999.
[34] C.M. Schweik, K. Adhikari, and K.N. Pandit. "Land-Cover Change and Forest Institutions: A Comparison of Two
Sub-Basins in the Southern Siwalik Hills of Nepal." Mountain Research and Development. 17(2): 99-116, 1997.
[35] Institute for Software Research. <http://www.isr.uci.edu/research-open-source.html>, 2005.
[36] Libre Software Engineering. <http://libresoft.urjc.es/>, 2005.
[37] FLOSS, Free/Libre Open Source Software Research. <http:/
/floss.syr.edu/>, 2005.
[38] K. Stewart. Open Source Software Development Research
Project. <http://www.rhsmith.umd.edu/faculty/kstewart/
ResearchInfo/KJSResearch.htm>, 2005.
[39] L.L. Kiser and E. Ostrom. "The Three Worlds of Action: A
Meta-theoretical Synthesis of Institutional Approaches." In
E. Ostrom (ed.) Strategies of Political Inquiry. Beverly Hills,
CA: Sage. Pp. 179-222, 1982.
[40] K. Fogel and M. Bar. Open Source Development with CVS.
Scottsdale, AZ: Coriolis, 2001.
[41] K. Stewart. "OSS Project Success: From Internal Dynamics
to External Impact." In Proceedings of the 4th Annual Workshop on Open Source Software Engineering. Edinburgh,
Scotland. May 25th, 2004.
[42] Standish Group International, Inc. The CHAOS Report.
<http://www.standishgroup.com/sample_research/
chaos_1994_1.php>, 1994.
[43] S. Hissam, C.B. Weinstock, D. Plakosh, and J. Asundi. Perspectives on Open Source Software. Technical report CMU/
SEI-2001-TR-019, Carnegie Mellon University. <http://
www.sei.cmu.edu/publications/documents/01.reports/
01tr019.html>, 2001.
[44] J-M. Dalle, P.A. David, R.A. Ghosh, and F.A. Wolak. "Free
& Open Source Software Developers and ‘the Economy of
Regard’: Participation and Code-Signing in the Modules of
the Linux Kernel." <http://siepr.stanford.edu/programs/
OpenSoftware_David/Economy-ofRegard_8+_OWLS.pdf>, 2004.
[45] G. Robles-Martinez, J.M. Gonzalez-Barahona, J. Centeno-Gonzalez, V. Matellan-Olivera, and L. Rodero-Merino. "Studying the Evolution of Libre Software Projects Using Publicly Available Data." In J. Feller, B. Fitzgerald, S.
Hissam, and K. Lakhani (eds.) Taking Stock of the Bazaar:
Proceedings of the 3rd Workshop on Open Source Software
Engineering. <http://opensource.ucc.ie/icse2003>, 2003.
[46] C.M. Schweik and A. Semenov. The Institutional Design of
"Open Source" Programming: Implications for Addressing
Complex Public Policy and Management Problems. First
Monday 8(1). <http://www.firstmonday.org/issues/issue8_1/
schweik/>, 2003.
[47] <http://bioinformatics.org/>.
[48] A. Capiluppi, P. Lago, and M. Morisio. "Evidences in the
Evolution of OS projects through Changelog Analyses," In
J. Feller, B. Fitzgerald, S. Hissam, and K. Lakhani (eds.)
Taking Stock of the Bazaar: Proceedings of the 3rd Workshop on Open Source Software Engineering. <http://
opensource.ucc.ie/icse2003>, 2003.
[49] R.A. Ghosh and V.V. Prakash. "The Orbiten Free Software
Survey." First Monday (5) 7. <http://firstmonday.org/issues/
issue5_7/ghosh/>, 2000.
[50] R.A. Ghosh, G. Robles, and R. Glott. Free/Libre and Open
Source Software: Survey and Study. Technical report. International Institute of Infonomics. University of Maastricht,
The Netherlands. June. <http://www.infonomics.nl/FLOSS/
report/index.htm>, 2002.
[51] K. Healy and A. Schussman. "The Ecology of Open-Source
Software Development." Disponible en: <http://
opensource.mit.edu/papers/healyschussman.pdf>, 2003.
[52] S. Krishnamurthy. "Cave or Community? An Empirical Examination of 100 Mature Open Source Projects." First Monday 7(6), 2002.
[53] D. McGowan. Legal Implications of Open Source Software.
University of Illinois Law Review, 241 (1): 241-304, 2001.
[54] E. Moglen. "Anarchism Triumphant: Free Software and the
Death of Copyright." First Monday, 4. August, 1999.
[55] P. Kollock. The Economies of Online Cooperation: Gifts and Goods in Cyberspace. In M. Smith and P. Kollock (eds.) Communities in Cyberspace. London: Routledge. Pp. 220-239, 1999.
[56] M. Divitini, L. Jaccheri, E. Monteiro, and H. Traetteberg.
"Open Source Processes: No Place for Politics?". In J. Feller, B. Fitzgerald, S. Hissam, and K. Lakhani (eds.) Taking
Stock of the Bazaar: Proceedings of the 3rd Workshop on Open
Source Software Engineering. <http://opensource.ucc.ie/
icse2003>, 2003.
[57] M. Shaikh and T. Cornford. "Version Management Tools:
CVS to BK in the Linux Kernel." In J. Feller, B. Fitzgerald,
S. Hissam, and K. Lakhani (eds.) Taking Stock of the Bazaar: Proceedings of the 3rd Workshop on Open Source Software Engineering. <http://opensource.ucc.ie/icse2003>,
2003.
[58] J. Feller and B. Fitzgerald. Understanding Open Source Software Development. London: Addison Wesley, 2002.
[59] Sharma, Sugumaran, and Rajgopalan. "A Framework for Creating Hybrid-Open Source Software Communities." Information Systems Journal. 12:7-25, 2002.
[60] A. Wasserman, Center for Open Source Innovation,
Carnegie Mellon West Coast Campus. Personal conversation.
[61] L. Rosen. Open Source Licensing. Upper Saddle River, NJ:
Prentice Hall, 2005.
[62] A.M. St. Laurent. Understanding Open Source and Free Software Licensing. Sebastopol, CA: O’Reilly, 2004.
[63] J.M. Garcia. "Quantitative Analysis of the Structure and Dynamics of the Sourceforge Project and Developer Populations:
Prospective Research Themes and Methodologies." <http://
siepr.stanford.edu/programs/OpenSoftware_David/JuanMG_FOSS-PopDyn_Report2+.pdf>, 2004.
About Closed-door Free/Libre/Open Source (FLOSS) Projects:
Lessons from the Mozilla Firefox Developer Recruitment Approach
Sandeep Krishnamurthy
This paper is copyrighted under the CreativeCommons Attribution-NonCommercial-NoDerivs 2.5 license available at <http://
creativecommons.org/licenses/by-nc-nd/2.5/>
In this paper, the notion of a "closed-door open source project" is introduced. In such projects, the most important development tasks (e.g. code check-in) are controlled by a tight group. I present five new arguments for why groups may wish to
organize this way. The first argument is that developers simply do not have the disposable time to evaluate potential
members. The next two arguments are based on self-selection: by setting tough entry requirements the project can ensure that it gets high quality and highly persistent programmers. The fourth argument is that expanding a group destroys the fun. The fifth argument is that projects requiring diverse inputs require a closed-door approach.
Keywords: Cave, Developer Recruitment, Firefox, FLOSS,
Group Size, Open Source Software.
I sent the club a wire stating - "PLEASE ACCEPT MY RESIGNATION. I DON’T WANT TO BELONG TO ANY CLUB THAT WILL
ACCEPT ME AS A MEMBER."
Groucho Marx,
US comedian with Marx Brothers (1890-1977)
1 Introduction
The vast majority of open source projects are ‘small’,
i.e., have fewer than five members
[3][5][4][11]. Many open source projects are ‘caves’ or
have just one member [7].
At the same time, some open source scholars have argued that the number of developers in a project is a proxy
for the level of that project’s success [9]. In this line of
reasoning, since developers have multiple options and many
demands on their time, merely attracting a large number of
developers to a project is an indication of success. In [12] it
has been argued that projects that do not grow beyond a
"strong core of developers ... will fail because of a lack of
resources devoted to finding and repairing defects" (pp. 329,
341). However, these "bigger is better" arguments ignore
some of the negatives in attracting a large number of
developers to a project - e.g. coordination costs, conflict,
loss in quality.
What we are now learning is that the edict of
Raymond in "The Cathedral and the Bazaar" [14] - "treating your users as co-developers is your least-hassle route
to rapid code improvement and effective debugging" - does
not always apply. In some groups, what we observe is not
the promiscuous Raymondian model of "user as co-developer" which allows free access to code and broad check-in
privileges. Rather, what we see is that individuals are asked
not to apply and a tight group of individuals controls the most
pivotal tasks (e.g. code check-in).
Sandeep Krishnamurthy is Associate Professor of E-Commerce and Marketing at the University of Washington,
Bothell, USA. Today, he is interested in studying the impact of
the Internet on businesses, communities and individuals. He is
the author of a successful MBA E-Commerce textbook, "E-Commerce Management: Text and Cases" and has recently
edited two books, "Contemporary Research in E-Marketing:
Volumes I, II". His academic research has been published in
journals such as Organizational Behavior and Human Decision
Processes (OBHDP), Marketing Letters, Journal of Consumer
Affairs, Journal of Computer-Mediated Communication,
Quarterly Journal of E-Commerce, Marketing Management,
Information Research, Knowledge, Technology & Policy and
Business Horizons. He is the Associate Book Review Editor of
the Journal of Marketing Research and a co-editor for a Special
Issue of the International Marketing Review on E-Marketing.
His writings in the business press have appeared on Clickz.com,
Digitrends.net and Marketingprofs.com. Sandeep was recently
featured on several major media outlets (TV- MSNBC, CNN,
KING5 News; Radio- KOMO 1000, Associated Press Radio
Network; Print- Seattle Post Intelligencer, The Chronicle of
Higher Education, UW’s The Daily; Web- MSNBC.com,
Slashdot.org) for pointing out the flaws in Microsoft
Word’s Grammar Check. His comments have been featured in
press articles in outlets such as Marketing Computers, Direct
Magazine, Wired.com, Medialifemagazine.com, Oracle’s Profit
Magazine and The Washington Post. Sandeep also works in
the areas of generic advertising and non-profit marketing. You
can access his web site at <http://faculty.washington.edu/
sandeep> and his blog at <http://sandeepworld.blogspot.com>. <[email protected]>
In this article, using the example of the Mozilla Firefox
browser, I argue that some very successful FLOSS (Free/
Libre/Open Source) projects are designed to be small. Far
from seeking a large number of developers, these groups
actively discourage applicants and do not even let interested individuals submit patches. Rather than opening the doors to all interested individuals, these projects provide the code for their programs to the world, but do not allow just anyone to
participate in the development of the product. Based on
public online conversations, I provide five theoretical explanations to describe why some open source groups take a
"closed-door" approach.
2 Firefox Development Team
The Mozilla Foundation’s Firefox browser, <http://
www.mozilla.org/products/firefox/>, has done very well. It
has been downloaded more than 60 million times in a very
short period. Even though the Firefox browser benefits from
the vast Netscape code, an extremely small team of committed programmers developed the entire Firefox browser.
At this point, six individuals, Blake Ross, David Hyatt, Ben
Goodger, Brian Ryner, Vladimir Vukicevic and Mike Connor,
form the core group. At all points, a small group has had the
privilege of checking in code.
The Firefox project (originally named Phoenix and then
renamed Firebird) was initiated by Blake who had long fixed
bugs on the Mozilla browser and was disenchanted with the
direction of the project. David Hyatt was brought in since
he was an ex-Netscape employee and had an intimate knowledge of the code. A significant subset of this core group
was paid a salary1 by the Mozilla Foundation to work on
this project.
The documents used by members of this team provide
us with a rare glimpse of what motivates some groups to
keep out others. Consider these excerpts from the Frequently
Asked Questions (FAQ) in the team’s original manifesto
(source: <http://www.blakeross.com/firefox/README1.25.html>):
"- Q2. Why only a small team?
- The size of the team working on the trunk is one of the
many reasons that development on the trunk is so slow. We
feel that fewer dependencies (no marketing constraints),
faster innovation (no UI committees), and more freedom to
experiment (no backwards compatibility requirements) will
lead to a better end product.
- Q3. Where do I file bugs on this?
- We’re still chopping with strong bursts and broad
strokes. There’s plenty that’s obviously broken and we don’t
need bugs on that. If you find a bug (a feature request is not
a bug) and you’re sure that it’s specific to Firefox (not
present in Mozilla) and you’ve read all of the existing Firefox
bugs well enough to know that it’s not already reported then
feel free report it on the Phoenix product in Bugzilla.
1. Working off public documents, it appears that Blake, David and Ben were paid while Peter was not.
...
- Q5: How do I get involved?
- By invitation. This is a meritocracy — those who gain
the respect of those in the group will be invited to join the
group."
The FAQ is very clear. Those who wish to participate in
the process are discouraged from doing so. Potential participants are told that membership is by invitation only.
Clearly, the group has done everything in its power to keep
out people rather than encouraging them to participate. If
attracting developers is the path to success for an open source
project [9], this would not be the case.
What this teaches us is that some open source projects
are not "open door" projects. Many can best be described
as "closed-door" projects. Formally, closed-door projects
are defined as those that provide access to the program and
source code to any interested person, but do not provide
access to core functions of software development (esp. setting up roadmaps, checking in code and submitting patches).
Such projects want their target audience to download
and use their product and tinker with their code. However,
they intentionally keep out qualified potential participants.
Even developers who work on bugs and submit patches are
not admitted to the team. Firefox has consistently been developed as a closed-door project.
While users can provide feedback through open forums,
the actual development is done by a core group.
This approach is controversial and it upsets many members of the open source community. If open source projects
are about building a community of hackers, the closed-door
approach seemingly offers a manifestation of this philosophy that is, paradoxically, built on standard principles of control.
Here is one public reaction to the Firefox developer recruitment policy:
"They say loudly that they are only willing to accept
developers to the project that they have vetted themselves,
no one need apply. And with this attitude in front of them,
they drive away people who want to help but are unsure of
their abilities.
Then they say that they want people to submit patches
and pitch in to help develop the product. But how is anyone
supposed to do that without being a member? Well, obviously you don’t have to be on the team to work for the team.
But who wants to work for someone that isn’t going to treat
them as part of the same team?
…
However, the spirit of OSS (at least on the BSD side of
the world) is one of openness and acceptance. Turning people away or accepting a new member only through invitation smacks of elitism. Unfortunately when you deal with
human beings, you will inevitably end up dealing with some
who think themselves elite and worthy of looking down upon
others from the heights of their snoots." (Source: <http://
developers.slashdot.org/comments.pl?sid=137815&
cid=11526872>).
Moreover, relying on a small group could potentially
jeopardize the future of a project as members get other opportunities. Mike Connor, one of Firefox's main developers,
vented his frustration in this way:
"This is bugging me, and its been bugging me for a while.
In nearly three years, we haven’t built up a community of
hackers (emphasis added) around Firefox, for a myriad of
reasons, and now I think we’re in trouble. Of the six people
who can actually review in Firefox, four are AWOL, and
one doesn’t do a lot of reviews. And I’m on the verge of just
walking away indefinitely, since it feels like I’m the only
person who cares enough to make it an issue." (Source:
<http://steelgryphon.com/blog/index.php?p=37>)
3 Onion Theory
At this point, the onion theory of open source software
development has gained currency (e.g. [13][15][3][10]). In
this theory, a small group of powerful individuals controls
check-in privileges while members of outer layers of the
onion are assigned routine tasks such as fixing bugs. The
reasons for and implications of this organizational structure are
still not fully explicated in the literature.
The most common explanation for restricting group size
is that increasing the number of developers leads to coordination problems [2]. Some scholars have pointed out that access to the core group in most open source projects is controlled by a "joining script".
Those who know how to approach the developers in the
project in a manner that is culturally compatible get in while
others are denied access [15]. Joining scripts certainly played
a role in the choice of Firefox core group developers - Dave
Hyatt was employed at Netscape and knew the culture, Ben
Goodger got in as a result of a thorough critique of the
Mozilla browser on his website.
In this article, drawing from public conversations on
Firefox, I discuss five other explanations for executing a
closed-door approach. It is, of course, not clear if all explanations provided here apply to all closed-door projects.
Future research should evaluate the relative importance of
each explanation. These arguments are somewhat new in
the way they are presented and should move the conversation on the recruitment of open source developers forward.
4 Five Explanations
Explanation 1 - Low Disposable Time
Evaluating new members takes time that developers have
very little of. As a result, frequently, leaders who have limited time at their disposal choose to do the work themselves
rather than spending the time trying to identify a potential
candidate.
Blake Ross has said: "I’m only just now finding time to
get back on Firefox, and even then I often have 1-2 hours
tops (a day). Ben obviously has his hands full leading and
trying to get all his ducks in a row." (Source: <http://
blakeross.com/index.php?p=19>)
Finding new team members is an onerous and risky
task. The task involves the cost of advertising and screening applicants. If the applicant is not known to a member of
the core group, it may be hard to judge the competence and
capabilities of potential new members.
Moreover, there is the usual agency problem where applicants may hide their abilities or game the process to ensure membership. Therefore, when members are pressed for
time, the incremental benefit from including a new member
is outweighed by the incremental cost of finding that person.
Explanation 2 - Meritocracy (Only The Most Skilled
Will Get in)
Open source communities compete with corporations
for developers. Attracting the best developers enhances the
community’s chances of competing in a tough marketplace.
Setting the highest standards allows them to recruit highly skilled developers, enhancing their chances of success in this competition. Closing the door leads to self-selection, with the
most competent developers applying. It is possible to assess the quality of work of an open source developer since
the record of accomplishment is open and hence easily
available.
This theory is supported by an observation made by
Blake Ross, a key Firefox developer: "We basically wanted
to use open source as the world’s best job interview. Rather
than get people in front of a whiteboard for two hours and
ask them to move Mount Fuji (Author clarifies- this is a
reference to a book on Microsoft interview processes), we
wanted people to submit patches that would demonstrate
exactly what they would bring to the table if they joined the
team." (Source: <http://blakeross.com/index.php?p=19>)
A participant on a Slashdot thread also articulated a similar argument: "Firefox actually want the ‘smartest coders’
that work with their codebase. While it is certainly elitist, it
makes sure that only the elite (dedication plus skill) get to
work on their branch of the browser. If that ends up making
it work faster, more robustly and more efficiently, then all to
the better. A small team of highly skilled individuals can
often achieve more than a large pool of medium skilled people, and usually far more than a huge team of mediocrely
skilled people. Everyone they compete with (corporate entities, such as MS and Opera) is pretty much guaranteed to
be elitist (they’ll hire the best coders and designers they
can at interview), so why shouldn’t the firefox team? Of
course, as has been noted, if you think you can do better
with your choice of team recruitment, then fork the project,
and see which one survives." [Source: <http://developers.
slashdot.org/comments.pl?sid=137815&cid=11527401>]
Explanation 3 - Persistence (Only The Most Persistent Will Get in)
Closed-door projects deter people from applying. References are frequently made to the amount of work that is
involved. See, for instance, this post by Ben Goodger of
Mozilla Firefox: "Help Wanted We always need Heavy
Lifters in code. If you’re excited about web browser technology, why not get involved in the premier Open Source
browser project? We’re especially looking for people with
skills in Mac OS X programming and Windows developers.
Get started today by finding and fixing something. Instructions are not provided here since figuring out how to do all
of this can be considered part of the "entry requirements".
;-)" [Source: <http://www.mozilla.org/projects/firefox/>]
This has got to be the world’s most intimidating "Help
Wanted" advertisement. Ben literally tells people that they
will have to work hard for nothing and if they want to impress, they should work on low-end work, i.e., fixing bugs.
Blake Ross justifies this approach in this way: "Ben concedes that even figuring out how to get noticed is part of the
recruitment process, and rightfully so. After all, most of the
current Mozilla super reviewers and the people running the
project began as "entry-level" contributors and floated to
the top of the meritocracy. If you aren’t willing to do a little
research, observe how the project functions, and figure out
how to make your mark on it, do you really belong on the
team?" (Source: <http://blakeross.com/index.php?p=19>)
This is likely to lead people with high levels of persistence to self-select into the project, with positive results.
While self-selection based on developer quality is understandable, self-selection based on persistence may lead to
lower quality programmers (e.g. those who have high disposable time) being admitted.
While a long-term commitment to the project (e.g. long
posts on forums) is a sign of persistence, it may also be an
indicator of ideological fervor.
Therefore, enabling self-selection through persistence
may lead to peculiarities in terms of group composition.
Other groups have dealt with persistent participants who
do not add much by creating niche developer mailing lists
[12].
Explanation 4 - Opening the Door Will Kill The Fun
Scholars who study the motivation of open source developers tell us that many individuals are motivated by the
fun of building something [8][6]. In [1] Bitzer, Schrettl and
Schroeder even classify open source developers as homo
ludens and model the intrinsic motivation of developers.
Eric Raymond, an early observer of FLOSS and the author
of the popular "The Cathedral and the Bazaar", has said
that: "It may well turn out that one of the most important
effects of open source’s success will be to teach us that play
is the most economically efficient mode of creative work." 2
Closed-door projects are frequently started by a small
group whose members are intimately familiar with one another. Admitting
outsiders takes away this cozy feeling and reduces the intrinsic motivation of current developers.
Blake Ross has noted that Firefox has always been an
informal project: "People sometimes ask why we work on
Firefox for free. It gets hard to keep a straight face at "work."
Give me another project that touches the lives of millions of
people worldwide and still has public codenames like "The
Ocho" which get published in the media. ("The Ocho" is
2. We are grateful to Bitzer, Schrettl and Schroeder [1], page 9, for alerting us to this quote.
the name of the fictitious ESPN 8 station in Dodgeball; kudos to Ben for the flash of v1.5 naming brilliance). The best
part of Firefox is that even as it’s skyrocketed to the top, it’s
never really grown out of its humble roots as a skunkworks
project that was by and large coordinated on caffeine highs
at Denny’s. It has, in short, never quite grown up." (Source:
<http://blakeross.com/index.php?p=24>)
As one observer on Slashdot put it: "Maybe it is about
having fun ...
If you limit the developers to people who actually like
working together, and have similar ideas of how to behave
and talk to other people, more can often be done than if you
also invite all the socially dysfunct coders, who cannot take
a rejection of patch as anything but a personal insult (or, for
the true nutcase, some political game).
There are more than a couple of great coders out there
with zero people skill. They can damage a project because,
even though their own contributions are great, they lower
the fun level and therefore productivity of everybody else.
Some of them make great solo projects ..." (Source:
<http://developers.slashdot.org/comments.pl?sid=137815&
cid=11527896>)
Similar observations have been made in the entrepreneurship arena. Frequently, we find that a company is started
by a group of good friends. Over time, as formalization increases, the company puts in more processes, making it more structured and bureaucratic and reducing its ad-hoc and fun nature.
It is not clear whether this is only a partial explanation for the existence of closed-door projects.
Explanation 5 - Products Requiring Diverse Capabilities Require Closed-Door Approach
Firefox is one of the few open source products that targets a general audience, i.e., a consumer market. Unlike the
vast majority of open source products, which target a technical audience, Firefox needs to succeed with lay consumers. This
implies that for the project to succeed what is needed is an
intuitive user interface (UI) along with a sound product.
Therefore, the project team needs to involve people with
diverse capabilities: some who are more adept at UI design and others who have strong programming capabilities.
The original manifesto states that: "We feel that fewer
dependencies (no marketing constraints), faster innovation
(no UI committees), and more freedom to experiment (no
backwards compatibility requirements) will lead to a better
end product."
In Blake Ross’ words: "Since this audience was primarly
non-technical in character, we felt it necessary to judge
patches not just on technical merit but also on how closely
they adhered to this new vision. Code+UI review, however,
took more time than we were willing to spend in our eagerness to develop Phoenix quickly. So we sought to find the
people who understood our vision so well that they didn’t
need this additional layer of review, and then bring them
onto the team." (Source: <http://blakeross.com/index.
php?p=19>)
This helps explain the closed-door approach. The group did not want a UI committee and wanted to handle patches differently (i.e.,
Code+UI review). Increasing the group size and allowing
outsiders to enter the group would dilute this process.
5 Conclusion
In this paper, I have proposed five new arguments for
organizing an open source project in a closed-door manner.
The first argument is that developers simply do not have the
disposable time to evaluate potential members and are likely
to use their time to do the work rather than invest it in evaluating new members. The next two arguments are based on
self-selection - by setting tough entry requirements the
project can ensure that it gets high quality and highly persistent programmers. The fourth argument applies the homo
ludens view of fun-driven intrinsic motivation, implying that extending a group beyond a small coterie will ruin
the fun. The fifth argument is that complicated projects (e.g.
those requiring input in technical and user interface areas)
require a closed-door approach.
Future research must investigate the relative importance
of these arguments. Moreover, we do not know if project
outcomes are improved or hurt by organizing the project in
a closed-door way. An empirical comparison of closed-door
and open-door approaches is needed.
References
[1] J. Bitzer, W. Schrettl, and P.J.H. Schroeder. "Intrinsic
Motivation in Open Source Software Development", 2004.
Available at <http://econwpa.wustl.edu/eps/dev/papers/0505/
0505007.pdf>.
[2] Frederick Brooks. The Mythical Man-Month: Essays on
Software Engineering, 20th Anniversary Edition, Addison-Wesley Professional, 1995.
[3] K. Crowston and J. Howison. "The Social Structure of Open Source Software Development Teams". Working Paper, 2003.
Available at <http://floss.syr.edu/tiki-index.php> [accessed on
August 22, 2004].
[4] K. Healy and A. Schussman. "The Ecology of Open Source
Software Development", 2004. Working paper. Available at
<http://www.kieranhealy.org/files/drafts/ossactivity.pdf>
[accessed on August 22, 2004].
[5] F. Hunt and P. Johnson. "On the Pareto distribution of
Sourceforge projects", in Proceedings of the F/OSS Software
Development Workshop. 122-129, Newcastle, UK, 2002.
[6] S. Krishnamurthy. "On the Intrinsic and Extrinsic Motivation
of Open Source Developers", 2005. Forthcoming in
Knowledge, Technology & Policy.
[7] S. Krishnamurthy. "Cave or Community?: An Empirical
Examination of 100 Mature Open Source Projects", in First
Monday, 7(6), 2002. Available at
<http://firstmonday.org/issues/issue7_6/krishnamurthy/
index.html>.
[8] K.R. Lakhani and R. Wolf. "Why Hackers Do What They Do:
Understanding Motivation and Effort in Free/Open Source
Software Projects", in Perspectives on Free and Open Source
Software, edited by J. Feller, B. Fitzgerald, S. Hissam, and K.
R. Lakhani. Cambridge, MA: MIT Press, 2005.
[9] J. Lerner and J. Tirole. "Some Simple Economics of Open
Source", in Journal of Industrial Economics, 52, pp. 197-234,
2002.
[10] Luis Lopez-Fernandez, Gregorio Robles, and Jesus M.
González-Barahona. "Applying Social Network Analysis to
Information in CVS Repositories", 2005. Available at <http:/
/opensource.mit.edu/papers/llopez-sna-short.pdf>.
[11] G. Madey, V. Freeh, and R. Tynan. "Modeling the F/OSS
Community: A Quantitative Investigation", in Free/Open
Source Software Development, edited by Stephan Koch,
Hershey, PA: Idea Group Publishing, 2004.
[12] A. Mockus, R. T. Fielding, and J. D. Herbsleb. "Two Cases
of Open Source Software Development: Apache and Mozilla",
in ACM Transactions on Software Engineering and
Methodology, 11(3), pp. 309-346, 2002.
[13] K. Nakakoji, Y. Yamamoto, Y. Nishinaka, K. Kishida, and Y.
Ye. "Evolution Patterns of Open-Source Software Systems and
Communities", in Proceedings of International Workshop on
Principles of Software Evolution (IWPSE 2002), pp. 76-85,
2002.
[14] E. Raymond. "The Cathedral and the Bazaar", in First
Monday, Volume 3, Issue 3, 1998. Available at <http://
www.firstmonday.org/issues/issue3_3/raymond/>.
[15] G. Von Krogh, S. Haefliger, and S. Spaeth. "Collective Action and Communal Resources in Open Source Software Development: The Case of Freenet", in Research Policy, 32(7), pp. 1217-1241, 2003.
pp. 1217-1241, 2003.
Agility and Libre Software Development
Alberto Sillitti and Giancarlo Succi
This paper is copyrighted under the CreativeCommons Attribution-NonCommercial-NoDerivs 2.5 license available at <http://
creativecommons.org/licenses/by-nc-nd/2.5/>
Agile Methods and Libre Software Development are both popular approaches to software production. Even if they are
very different, they present many commonalities such as basic principles and values. In particular, there are many analogies between Libre Software Development and Extreme Programming (focus on the code and embrace of changes to name
a few). This paper presents such principles and basic values and identifies the commonalities.
Keywords: Agile Methods, Extreme Programming, Libre
Software.
1 Introduction
Agile Methods (AMs) have grown very popular in the
last few years [3] and so has Libre Software [1][8]. Even if
these approaches to software development seem very different, they present many commonalities, as evidenced by
Koch [11].
Both AMs and Libre Software push for a less formal
and hierarchical organization of software development and
a more human-centric development, with a major emphasis:
• on focusing on the ultimate goal of development – producing the running system with the correct amount of functionality. This means that the final system has to include only the minimum number of features able to satisfy the actual customer completely;
• on eliminating activities related to some ‘formal’ specification documents that have no clear tie with the final outcome of the product.
This approach is clearly linked with Lean Management [16]. AMs explicitly acknowledge their ties with Lean
Management [13], while Libre Software keeps them implicit.
Moreover, AMs and Libre Software development look
similar under several points of view, including:
1. Their roots are both fairly old, but both have recently been revived with new interest, as is explicitly acknowledged by Beck [4] for AMs (eXtreme Programming, XP, in particular) and evidenced by Fuggetta for Libre Software [10].
2. They are both disruptive [6], in the sense that they alter
established values in software production.
3. Both have succeeded in contexts where more traditional approaches have failed (the C3 project for AMs [4] and the Mozilla/Firefox browser for Libre Software [7][12]).
4. Proposers of AMs also participate in Libre Software development (e.g. Beck with JUnit).
This paper aims at providing an overview of the
commonalities between Agile Methods (XP in particular)
and Libre Software from the point of view of the basic principles and values these two communities share.
The paper is organized as follows: Section 2 identifies
the general Agile Principles in Libre Software; Section 3
focuses on specific XP values and principles and identifies
them in Libre Software; finally, Section 4 draws the conclusions and proposes further investigation.
2 Agile Principles in Libre Software
The basic principles shared by all AMs are listed in the
so-called Agile Manifesto [2]. Table 1 identifies the principles of the AMs in Libre Software.
Altogether, it is evident that Libre Software adopts most
of the values fostered by supporters of AMs.
Such evidence calls for subsequent analysis to determine the extent and the depth of such adoption. Moreover,
AMs and Libre Software are classes of software development methods, each of which includes a large number of specific methods.
Therefore, it is important to consider specific instances of them to determine how the interaction between AMs and Libre Software really occurs in practice, beyond general considerations that, taken alone, end up being quite useless.
3 XP Values and Principles in Libre Software
Besides the commonalities between Libre Software and
the AMs in general, it is interesting to analyze this relationship between Libre Software and one of the most popular
AM: Extreme Programming.
Alberto Sillitti, PhD, PEng, is Assistant Professor at the Free
University of Bozen, Italy. He is involved in several European
Union funded projects in the software engineering area related
to agile methods and open source software. His research areas
include software engineering, component-based software
engineering, integration and measures of web services, agile
methods, and open source. <[email protected]>
Giancarlo Succi, PhD, PEng, is Professor of Software
Engineering and Director of the Center for Applied Software
Engineering at the Free University of Bozen, Italy. His research
areas include agile methods, open source development, empirical
software engineering, software product lines, software reuse,
and software engineering over the Internet. He is author of more
than 100 papers published in international conferences and
journals, and of one book. <[email protected]>
XP is centered on four major values (a comprehensive
discussion is in the two editions of Beck’s book [4][5]):
1. Communication: developers need to exchange information and ideas on the project with each other, with the managers, and with the customer in an honest, trusted and easy way. Information must flow seamlessly and fast.
2. Simplicity: simple solutions have to be chosen wherever
possible. This does not mean being wrong or taking simplistic approaches. Beck often uses the aphorism "simple but not too simple".
3. Feedback: at all levels people should get very fast feedback on what they do. Customers, managers, and developers have to achieve a common understanding of the goal of the project, of its current status, of what customers really need first and what their priorities are, and of what developers can do and in what time. This is clearly strongly connected with communication. There should also be immediate feedback from the work people are doing, that is, from the code being produced; this entails frequent tests, integrations, versions, and releases.
4. Courage: every stakeholder involved in the project
should have the courage (and the right) to present her/
his position on the project. Everyone should have the
courage to be open and to let everyone inspect and also
modify his/her work. Changes should not be viewed with
terror and developers should have the courage to find
better solutions and to modify the code whenever needed
and feasible.
These values are present in various ways in Raymond’s description of Open Source [14] and are summarized in Table 2.
Moreover, as noted in [9], hidden inside the first version of Beck’s book [4] there are 15 principles, divided into
5 fundamental principles and 10 other principles.
The fundamental principles are:
1. Rapid Feedback: going back to the value of feedback, such feedback should occur as early as possible, to have the highest impact on the project and to limit possible disruptions as much as possible.
2. Assume Simplicity: As mentioned, simplicity is a major value. Therefore, simplicity should be assumed everywhere in development.
3. Incremental Change: change (mostly resulting from feedback) should not be made all at once. Rather, it should be a permanent and incremental process, aimed at creating an evolving system.
4. Embracing Change: change should be handled with
courage and not avoided. The system as a whole, and
the code, should be organized to facilitate change to the
largest possible extent.
5. Quality Work: quality should be the paramount concern. Lack of quality generates rework and waste that should be avoided to the largest possible degree.
Other principles of XP are:
1. Teach Learning: requirement elicitation is an overall learning process. Therefore, learning is of paramount importance in the system.
2. Small Initial Investment: the upfront work should be kept to a minimum, as subsequent changes may destroy it.
3. Play to Win: all the development should be guided by the clear awareness that what we do is effectively doable.
4. Concrete Experiments: ideas should be validated not through lengthy theoretical discussions but via concrete experiments on the code base.
5. Open, Honest Communication: communication should be kept simple and easy. The customer should not hide his/her priorities, nor should developers and managers hide the current status of the work.
6. Work with people’s instincts - not against them: the role of the managers is to get the best out of developers, so their natural inclinations should be exploited and a strong team spirit fostered. Moreover, in the interactions between managers, developers, and customers, fears, anxieties, and discomforts should not be ignored but properly handled.
7. Accepted Responsibility: all the people in the project (customers, managers, and developers) should voluntarily take on their own responsibilities. Such responsibilities should then be assigned with complete trust.
8. Local Adaptation: the methodology should be wisely
adapted to the needs of each development context.
9. Travel Light: in XP projects it is important to keep the amount of documentation as low as possible, clearly without compromising the integrity of the project.
10. Honest Measurement: the project should be tracked with objective and understandable measures. The measures should be collected in a lean way so as not to alter the nature of XP.
In this section we review the application in Open Source
of the fundamental principles: rapid feedback, assume simplicity, incremental change, embracing change, quality work.
We have already discussed the issue of feedback and simplicity from Beck’s point of view. Fowler [9] shares most of Beck’s point of view and stresses the continuous improvement of the source code, making it as simple as possible.
Regarding incremental change, Raymond [14] acknowledges it upfront as one of his guiding principles since his early Unix experience: "I had been preaching the Unix gospel of small tools, rapid prototyping and evolutionary programming for years".
As for embracing changes proposed by others, we have already mentioned Raymond’s opinion [14] on listening to customers even if they do not "pay you in money". He goes further, and in rule number 12 he states the pivotal role of embracing change: "Often, the most striking and innovative solutions come from realizing that your concept of the problem was wrong".
Raymond [14] goes further than Beck [4] on this subject. Both agree that prototypes (‘spikes’ in Beck’s jargon) can be instrumental in achieving a better understanding of a complex application domain.
• Individuals and interactions over processes and tools: The development process in Open Source communities definitely puts more emphasis on individuals and interaction rather than on processes and tools. The interactions in Open Source communities, though, tend to be mainly based on emails; the pride and the individuality of the developer thus become predominant, while in Agile Methods there is a strong push toward establishing team spirit among developers.

• Working software over comprehensive documentation: Both Agile Methods and Open Source development view the working code as the major supplier of documentation. In Open Source communities the most common forms of user documentation are screenshots and user forums [15], which both come from the direct use of the systems, and the most common sources of technical documentation are class hierarchies directly extracted from the code, bug-tracking databases, and outputs from differences between versions.

• Customer collaboration over contract negotiation: In Libre Software customers and developers often coincide. This was especially true in the early era of Libre Software, when it was clearly said, for instance, that Unix (and later Linux and the GNU tools) was a system developed by developers and for developers. In such cases, the systems are clearly customer driven. There are now situations where the customers are clearly separated from the developers. New systems such as Subversion, ArgoUML, etc., have a clear customer base, separated from the developers. Still, looking at how the releases occur, the functionalities are added, and the problems are solved, it appears that the system is developed with a clear focus on customer collaboration. Moreover, in Europe it is becoming more common to request that systems developed with public funds be released under Libre Software licenses of various kinds.

• Responding to change over following a plan: Following the discussion above on "Customer collaboration over contract negotiation", the evolution of an Open Source project typically is customer driven. It appears that Libre Software systems do not have a "Big Design Upfront"; they are pulled rather than pushed and their evolution depends on real needs of the customers. However, most of such analysis is based on situations where customers and developers coincide. It would be interesting to see how this situation would evolve in the newer scenarios where there are customers separated from developers.

Table 1: Principles of the AMs in Libre Software.
Raymond [14] also claims that the system being developed can help identify new ideas for new developments – rule 14: "Any tool should be useful in the expected way, but a truly great tool lends itself to uses you never expected". Needless to say, when drafting rule 14 Raymond is not concerned with assuring the customer that he will not waste the customer’s resources.
Regarding quality work, in Raymond [14] there is no explicit reference to the paramount role of quality as there is in Beck [4]. However, throughout the essay there is constant evidence of the pride that Open Source developers put in their code, a pride that comes only from producing quality work.
Now we turn our attention to the other principles: teach
learning; small initial investment; play to win; concrete experiments; open, honest communication; work with people’s instincts - not against them; accepted responsibility;
local adaptation; travel light; honest measurement.
Raymond emphasizes the role of listening and learning from others’ comments. However, there is no explicit mention of teaching learning.
There is also little concern about small initial investment and travelling light. The reason is that Open Source projects are led mostly by developers, who are less likely to spend ages in "analysis paralysis" or in producing useless documentation, and more concerned with delivering useful code.
Rather, the attention of Raymond [14] is on showing that a little bit of upfront work is required: "When you start building a community, what you need to be able to present is a plausible promise. Your program doesn’t have to work particularly well. It can be crude, buggy, incomplete, and poorly documented. What it must not fail to do is (a) run, and (b) convince potential co-developers that it can be evolved into something really neat in the foreseeable future".
Playing to win and concrete experiments are an integral part of any self-motivated effort, so they do not require any further explanation.
• Communication: The very concept of Open Source is about sharing ideas via the source code, which becomes a propeller for communication. So, without doubt, communication is a major value in the work of Raymond [14]. The role of communication is reinforced by Raymond throughout his essay [14]. He clearly states the paramount importance of listening to customers: “But if you are writing for the world, you need to listen to your customers – this does not change just because they’re not paying you in money.” He then points out that to lead an Open Source project good communication and people skills are very important: he cites as examples Linus Torvalds and himself, allegedly two people capable of motivating and communicating.

• Simplicity: Simplicity in the system is highly regarded in the Open Source community. In general, Raymond [14] mentions “constructive laziness”, which helps in finding existing solutions that can be adapted to new situations. Beck's concept of simplicity [4] is clearly reflected in rule number 13 of Raymond [14]; it is an excerpt from Antoine de Saint-Exupéry: “Perfection (in design) is achieved not when there is nothing more to add, but rather when there is nothing more to take away”.

• Feedback: Working in a distributed community, Raymond [14] acknowledges the value of fast feedback at all levels: between distributed developers, potentially working on the same fix, and between developers and customers – rule number 11 is a clear example: “The next best thing to having good ideas is to recognize good ideas from your users. Sometimes the latter is better”. Feedback is achieved especially by running and testing the code, which is why early and frequent releases are instrumental – rule 7 says “Release early, release often. And listen to your customers”. Needless to say, most of the comments made about feedback could apply as well to communication. This is not awkward: Beck [4] acknowledges explicitly that the two concepts overlap.

• Courage: The value of courage is less present in the presentation of Raymond [14]. He hints at courage when he presents the initial difficulty of getting the work exposed to “thousands of eager co-developers pounding at every single new release”.

Table 2: XP values in Libre Software.
Discussing values, we have already highlighted the role given by Raymond [14] to open, honest communication.
Being developer-centric, Open Source also advocates
working with people’s instincts - not against them and
relies on accepted responsibility. The very first two rules
of Raymond are "Every good work of software starts by
scratching a developer’s personal itch", and "Good programmers know what to write. Great ones know what to
rewrite (and reuse)". Also rule 4 appears quite applicable:
"If you have the right attitude, interesting problems will find
you".
While there is no formal measurement in place in Raymond’s essay [14], there is an emphasis on releasing often, thus making clear the status of the project and the bugs still present. This resembles honest measurement.
4 Conclusions
Altogether, we note that there is a pretty high level of overlap between the values adopted by AMs (XP in particular) and those of Open Source development according to
Raymond. Communication, feedback and simplicity are
fully endorsed. Courage is also implicitly assumed in carrying out an Open Source project.
Turning to the principles, there is still a good level of agreement on the fundamental principles, apart from quality, which in Raymond’s work is assumed rather than advocated.
For the "other principles" of XP, the only differences come from the different points of view: Raymond deals mostly with volunteers, while Beck deals mostly with employees. Concepts such as travelling light, limited upfront design, etc., do not particularly concern Raymond, who, on the other hand, is more interested in ensuring that open source developers do at least a little bit of upfront design.
As to the practices, clearly the situation is quite different. Practices related to process, shared understanding and
programmer welfare are somewhat similar in the two cases.
Practices related to fine-scale feedback are not so widely
present in the description of Raymond.
As a final note, we would like to point out that both Beck’s and Raymond’s experience comes from the early use of very easy to employ, expressive, and powerful programming languages: Smalltalk and Lisp respectively. An analysis of the role of programming languages in AMs and in Libre Software development could be an interesting subject for a further study.
References
[1] P. Abrahamsson, O. Salo, and J. Ronkainen. Agile software development methods, VTT Publications, 2002. <http://www.inf.vtt.fi/pdf/publications/2002/P478.pdf> [accessed on June 15 2005].
[2] Agile Alliance, Agile Manifesto, 2001. <http://www.agilemanifesto.org/> [accessed on June 15 2005].
[3] L. Barnett. "Teams Begin Adopting Agile Processes". Forrester
Research, November 2004.
[4] K. Beck. Extreme Programming Explained: Embracing Change,
Addison Wesley, 1999.
[5] K. Beck. Extreme Programming Explained: Embracing Change,
Second Edition, Addison Wesley, 2004.
[6] C.M. Christensen. The Innovator’s Dilemma, Harper Business,
2003.
[7] M.A. Cusumano, D.B. Yoffie. Competing on Internet Time:
Lessons From Netscape & Its Battle with Microsoft, Free Press,
1998.
[8] J. Feller, B. Fitzgerald. Understanding Open Source Software
Development, Addison-Wesley, 2002.
[9] M. Fowler. Principles of XP, 2003. <http://www.martinfowler.com/bliki/PrinciplesOfXP.html> [accessed on June 15 2005].
[10] A. Fuggetta. "Open Source Software – an Evaluation", Journal
of Systems and Software, 66(1), 2003.
[11] S. Koch. "Agile Principles and Open Source Software Development: A Theoretical and Empirical Discussion", 5th International
Conference on eXtreme Programming and Agile Processes in
Software Engineering (XP2004), Garmisch-Partenkirchen, Germany, 6 - 10 June, 2004.
[12] S. Krishnamurthy. "The Launching of Mozilla Firefox - A Case
Study in Community-Led Marketing", 2005. <http://
opensource.mit.edu/papers/sandeep2.pdf> [accessed on June 15
2005].
[13] M. Poppendieck, T. Poppendieck. Lean Software Development:
An Agile Toolkit for Software Development Managers, Addison
Wesley, 2003.
[14] E.S. Raymond. The Cathedral and the Bazaar, Version 3.0, 2002. <http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/> [accessed on June 15 2005]. Also published by O’Reilly in 2001.
[15] M.B. Twidale, D.M. Nichols. "Exploring Usability Discussions
in Open Source Development", 38th Hawaii International Conference on System Sciences, 2005. <http://csdl.computer.org/
comp/proceedings/hicss/2005/2268/07/22680198c.pdf>.
[16] J.P. Womack, D.T. Jones. Lean Thinking: Banish Waste and
Create Wealth in Your Corporation, Revised and Updated, Free
Press, 2003.
Libre Software as A Field of Study
The Challenges of Using Open Source Software as A Reuse Strategy
Christian Neumann and Christoph Breidert
This paper is copyrighted under the CreativeCommons Attribution-NonCommercial-NoDerivs 2.5 license available at <http://
creativecommons.org/licenses/by-nc-nd/2.5/>
This paper compares the benefits of adapting open source software to internal and commercial reuse strategies. We propose a course of action that can be used for technical and economic evaluation. The advantages, disadvantages, and risks of these basic strategies are investigated and compared.
Keywords: Commercial-Off-The-Shelf, COTS, FOSS, Free
and Open Source Software, Software Engineering, Software Reuse.
This paper focuses on two different business models:
system vendors that develop specific software to customer
order, and software companies which develop standard software.
1 Introduction
Reusing existing software is an important part of modern software engineering, promising cost reduction, faster
time to market, and improved quality. One study shows that
productivity can be increased by up to 113%, the average
fault rate can be lowered by 33% and time to market can be
shortened by 25% [11]. Another study reports a 51% defect
reduction, a 57% productivity increase, and a 57% faster
time to market [16].
The decision whether or not to introduce reuse into the
engineering process depends on two important conditions:
components must meet certain technical requirements and
their usage must be economically viable. This paper describes a technical course of action for choosing the best alternative out of three rudimentary reuse strategies: the development and usage of in-house components, the adoption of external commercial components (the so-called Commercial-Off-The-Shelf, COTS, components), and the integration of Free and Open Source Software (FOSS).
Recent studies have discussed the use of FOSS and its
effect on total cost of ownership and quality. But these studies mainly focus on the use of operating systems such as
Linux, or application software like the Apache web server.
For example, one study estimates that using FOSS can lead to savings of about EUR 12 million (one fifth of the cost of using commercial applications) over a five-year period [7].
Another study points to savings of about 33% compared to
commercial software over a two-year period [19].
However, with the exception of these studies, there is
hardly any knowledge about the integration of FOSS in the
software engineering process. The free concept of FOSS
opens the door to new strategies and business models that
can be executed at almost every phase of the development
process, such as using FOSS as a basis for future product
line development or contributing software components to
the FOSS community. The last point is especially interesting for companies because it enables the maintenance process of software artefacts to be outsourced.
2 Adapting Reuse Strategies
Figure 1 illustrates the course of action to be taken before a decision is made. The decision making process is
divided into three phases: sighting, adaptation, and comparison. In the first two phases, which focus on technical
aspects, the decision is taken as to whether a component
provides the desired functionality or, if not, whether it can
be extended. In phase three the economic aspects of the
remaining candidates are investigated thoroughly and the
implications for different business models are discussed.
2.1 Technical Analysis
First of all the technical requirements of the project need
to be specified. This specification is largely deduced from
the desired functionality and contains the architecture, I/O
functionality, and business logic. Based on this specification, the software developer identifies possible places where components can be hooked into the architecture of the application. A good place to start searching is in I/O procedures because almost every program uses file operations, database access, or a Graphical User Interface (GUI).
Christian Neumann is a PhD student at the Dept. of Information
Business at the Vienna University of Economics and Business
Administration, Austria. He received his master's degree in
Engineering and Management from the University of Karlsruhe,
Germany. His research interests include quality of open source
projects, usability of frameworks, cost estimation, and software
investment analysis. He worked for several years as a software
engineer for a major German IT company.
<[email protected]>.
Christoph Breidert holds a PhD from the Dept. of Information
Business at the Vienna University of Economics and Business
Administration, Austria. He received his master's degree in Engineering and Management from the University of Karlsruhe, Germany. He has several years' experience in the development of large-scale J2EE software projects. <[email protected]>.
Figure 1: Model for Evaluating Reusable Components.
Phase one of the analysis consists of a sighting in which
possible reusable components are identified. In the case of
internal components, sources of information for this task
are reuse repositories or implicit knowledge of former
projects. The search for external components can be performed by using the Internet, asking newsgroups, visiting
exhibitions, or reading magazines. A good place to start the search for FOSS is one of the large online repositories such as SourceForge <http://www.sourceforge.net> or Apache <http://www.apache.org>. At these repositories FOSS can be searched for by theme, programming language, or keywords. The Apache Foundation in particular hosts a wide range of ready-to-use components, some of which have evolved into de facto standards (log4j, xerces, xalan, etc.).
The second phase takes a deeper look into the functionality provided by the components that were identified in phase one. Possible sources of information are documentation, specifications, and examples. If the provided functionality does not comply with the technical requirements we need to check whether the components can be extended. Naturally this is only applicable to FOSS and internal components, because COTS do not provide the source code required for
modification. Using the components unchanged is called
black-box reuse; modifying the components before use is
called white-box reuse [22].
In the case of FOSS it is very important to address the following issues [28]: future functional upgradability, open-standard compatibility, adaptability and extensibility, and reliability. To address these issues it is necessary to take a deeper look into the design and architecture, which we would hope to find described in the user manual. Unfortunately, many FOSS projects suffer from bad documentation, which makes it very difficult to learn, integrate, extend, or adapt their components. Therefore the existence of good documentation, examples, articles or newsgroups is indispensable for a fast and cost-effective integration. There are several systematic ways of searching for FOSS components (e.g. <http://www.amosproject.org>) and evaluating FOSS [9].
To ensure the quality of reusable software, especially in long-term projects such as frameworks in product lines, the source code must be inspected. This can be done by using quality indicators such as object-oriented metrics or code comments [4][5][10][18].
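As an illustration of this kind of inspection, the following sketch computes one very rough quality indicator, the ratio of comment lines to non-blank lines in a Java code base. It is only a minimal example under simple assumptions (the directory path and the 10% threshold are hypothetical, and string literals containing comment markers are not handled); it is not one of the metrics proposed in the works cited above.

    import os

    def comment_ratio(root_dir):
        """Rough quality indicator: comment lines / non-blank lines in Java sources."""
        comment_lines = 0
        total_lines = 0
        for dirpath, _, filenames in os.walk(root_dir):
            for name in filenames:
                if not name.endswith(".java"):
                    continue
                in_block = False
                with open(os.path.join(dirpath, name), encoding="utf-8", errors="ignore") as f:
                    for line in f:
                        stripped = line.strip()
                        if not stripped:
                            continue
                        total_lines += 1
                        if in_block:
                            comment_lines += 1
                            if "*/" in stripped:
                                in_block = False
                        elif stripped.startswith("//"):
                            comment_lines += 1
                        elif stripped.startswith("/*"):
                            comment_lines += 1
                            in_block = "*/" not in stripped
                        # otherwise: a plain code line
        return comment_lines / total_lines if total_lines else 0.0

    # Hypothetical usage: flag a candidate component whose sources are sparsely commented.
    if comment_ratio("./candidate-component/src") < 0.10:
        print("Low comment density: expect higher learning and adaptation costs.")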
All possible candidates that have passed the first two
stages are compared in phase three. These two phases may
require a great deal of effort, but the information gathered
provides the basis for our final economic and managerial
analysis.
2.2 Specific Software
Imagine a system vendor that implements a highly customised solution for a client; for example, a monitoring system for a diversified IT (Information Technology) landscape.
The requirements specification forms the basis for the offer, and every additional functionality requested by the customer leads to a change request that has to be renegotiated every time.
The reduction of development effort and the resulting
decrease of costs is the most important managerial aspect
to be considered. Time to market may be another constraint
that needs to be kept in mind. The use of existing components, both internal and external, is therefore extremely advisable because it is cheaper to reuse them than to implement the same functionality over and over again, and the
development schedule can also be shortened. Developing
internal reusable components is not cost-efficient because
their intended functionality is tailored to just one specific
case and it is highly unlikely that they will be reusable in
future projects.
The integration of COTS or FOSS also enables systems
with highly sophisticated technologies to be produced even
when no in-house knowledge is available [2]. Security
mechanisms in particular have proven so difficult to implement correctly that developers should be spared the risk of
producing major bugs by using suitable COTS [17]. However, the integration of COTS invariably involves license
costs.
FOSS provides the same advantages as COTS but there is no license fee to pay. The impact of a FOSS component
license is not significant in the case of specific software.
Even if the application is an extended version of a FOSS
component covered by a copyleft license, it is of no real
significance. A copyleft license does not prevent the vendor from selling a derived product and the purchaser has
the right to redistribute the program and obtain its source
code. In the case of developing specific applications it is
common practice to give the source code to the customer
anyway. And the customer has nothing to gain from giving
its customised solution away to anyone else. But we will
later be pointing out that while FOSS licenses are not significant in the case of specific software, they do have implications for the development of proprietary software.
To sum up, in the case of specific software the overall
costs for the different types of components must be estimated and the cheapest alternative should be chosen.
2.3 Proprietary Software
Now imagine a software company that has built up extensive knowledge in a specific field – e.g. document management – and wants to provide a desktop application that
will help to organise everyday paperwork. The company’s
marketing division has spotted additional demands – e.g.
other office work tasks – which should be integrated into
the application. The company wants to extend its market
share and sell licenses that are valid for one workplace only.
This situation is completely different from developing specific software because the time horizon is much longer and
the business model is based on selling licenses.
The tasks for the software engineer are the same as before: to identify the required functionalities, spot possible components, and investigate the candidates. But in addition it is essential to identify and specify future requirements very thoroughly, especially if the components are an important part of the design, such as a framework architecture for a product line development. To be open to upcoming features the components must be highly customisable, modular and trustworthy.
Developing internal components for reuse is the most flexible reuse alternative with regard to reusability and functionality. As components can be designed from scratch, their functionality is tailored to suit the company’s needs. The greatest
disadvantage of establishing an internal reuse strategy is the
very high initial investment for developing reusable components. Due to the great need for modularity, software quality,
and documentation, the cost of developing reusable software
can be significantly higher than if the same functionality is
developed for a single application only. One study indicates
that the break-even point can be reached after software components have been reused just three times [1]. For a successful reuse
strategy a repository for the reusable components is mandatory and this must be integrated into the development process.
This can lead to higher organisational costs.
Choosing COTS does not require any initial investment
apart from purchasing the license. But there are several disadvantages that outweigh this advantage.
Flexibility is limited as components cannot be adapted to meet special needs. Additional functionality must be realized in wrappers. Development of this glue code can take
up to three times as long as the in-house development of
the software [24][27].
The fact that COTS are black-boxes gives rise to two
important risks: the risk of undetermined interior behaviour and the risk of interaction with the environment. The
first may lead to invalid/wrong results or even to unreliability
of the whole application. To prevent this from happening, a
great many tests need to be conducted, leading to an increase in integration costs. The latter risk gives rise to a
number of security risks such as accessing unauthorised
resources, accessing resources in an unauthorised manner,
or abusing authorised privileges [30][17].
Another major problem is maintenance since COTS consumers depend on the vendors’ efforts in this regard [2].
The frequency and number of updates can be uncertain,
making it very difficult for the consumer to maintain the
application. Therefore it is very important to have an overview of the development and maintenance cycles of COTS
so that they can be synchronised with the actual development process. For these reasons the additional costs involved
in maintaining COTS-based systems can equal or even exceed those of custom software development [24].
Adapting existing FOSS components can be the basis
for a company’s reuse strategy because FOSS combines the
advantages of an internal reuse and COTS strategy: No initial investment is needed, existing functionalities can be
used, and the source code is available.
The free availability of source code is what makes this
strategy so powerful. On the one hand it provides the necessary flexibility to extend or adapt FOSS components and
on the other hand it enables inspection and debugging which
reduces the risk of unexpected behaviour. Despite the cost
of learning and adaptation there is no initial investment needed
for integrating the components into the engineering process.
Furthermore, the company can cut its maintenance costs because maintenance is carried out by the community. But these
steps need to be given careful consideration because any know-how contained in the software may become generally available to the community and other users (see below).
The main drawback of COTS, the unspecified behaviour, also applies to FOSS. But as the source code is available, bugs and flaws can be fixed immediately.
And even if the community no longer supports the product, users can extend and maintain it on their own.
Other advantages stem from the FOSS development process, which is characterised by the peer-review principle [23]. In a peer-review approach the program is reviewed by at least two experts (in a vivid FOSS community probably more). This critical judgement helps to improve the design, functionality and quality of the program. Furthermore, openness of the source code is necessary for security applications, because the user can verify the functionality and reliability of the program, for example for encrypted data transmission.
But availability of source code does not necessarily mean
good programs. A program is only as good as its programmers. It is therefore necessary to take a close look at the
community’s activities in terms of the number of contributors involved, the project’s maturity (alpha, beta, stable),
and the frequency of releases [14][15].
Only projects with several contributors, a high degree of
programming activity, and well-defined road maps are worth using in a company's product lines [21]. Projects that only have
a handful of contributors or where the contributed work is
highly concentrated may die or remain in an unstable state.
In this case any benefit of using FOSS is lost as there is
no community to maintain and evolve the product. The integration of early releases (snapshot release, beta, etc.)
should be avoided as these components may be buggy, or
future changes in design or functionality may require an
additional integration effort.
Other important aspects to be considered are legal issues. Usage in commercial applications gives rise to a number of issues that need to be addressed. Richard Stallman was the first to coin the term "free software" [26]. In his view, FOSS, and especially the programs that adapt it, should be distributed freely. This idea led to the well-known GNU General Public License (GPL), which means that any program using GPL software is covered by the same license. As we mentioned above, selling a GPL-covered application is not prohibited, but the company does not hold a copyright preventing the user from redistributing the software.
Furthermore, the source code of the derived work must be available to anyone using the software. This can be a no-go criterion for the usage of GPL products in a proprietary application. The GNU Library/Lesser General Public License (LGPL) is less restrictive and allows closed software using LGPL products to be distributed, but any changes made to the FOSS must be contributed to the community. The least restrictive one is the Berkeley license, BSD (and its derivatives), which allows the FOSS to be used and modified without having to publish the source code of the modifications [6][9][8].
In the case of the document management application we
mentioned above, using a copyleft license could lead to a
decrease in market share, because the software can be distributed freely and, even worse, the knowledge contained
in the documents managed must be made available to everyone using the program. The matter of whether a program
that uses GPL components is a derived work or not is a
thorny one and currently the subject of much debate
[29][12][25]. The use of GPL should therefore be avoided.
The LGPL is less restrictive, but the constraint of publishing changes to the FOSS community must be carefully considered. Even small changes may reveal insight into the company’s core business. So the adaptation of BSD-style FOSS
should be preferred as this is the most flexible license.
3 Conclusion and Further Work
We have looked into and compared three different reuse
strategies and identified the overall costs for adapting reusable components as a primary issue, especially for the development of highly customised specific software. In the
case of proprietary software, additional aspects need to be
considered. The integration into a product line requires very
flexible, modular, and reliable components. Therefore COTS
is not suitable due to the drawbacks of black-box reuse.
Using FOSS as white-box reuse is a good alternative to an
internal reuse strategy because it promises savings both for
maintenance and for the implementation of upcoming features. But the impact of the various FOSS licenses must be
given careful consideration.
Unfortunately we are unaware of any empirical studies
that have compared the overall costs of the three reuse strategies. The reason for this lack of knowledge is obvious.
The functionality of the applications investigated ideally
needs to be identical and the development effort needs to be
measured. It is very difficult to find enough comparable
projects and companies that are willing to publish internal
metrics that can be used to determine their productivity or
development methods.
Another way of gathering information is by means of an
experiment. An experiment can be used to compare at least
two different settings supervised by an independent scientist. The nature of software reuse involves long-term activity which makes it very difficult to conduct such a test in an
academic setting (three groups of students developing the
same application using different reuse strategies). Furthermore, any such experiment would produce an indicator rather
than a substantiated result, since the population studied would be too small.
We suggest comparing the different reuse strategies with
existing models for effort estimation, e.g. the COCOMO II
(COnstructive COst MOdel) model [3]. This model is based
on the evaluation of over one hundred software projects and
contains a module for software reuse. The impact of several
quality indicators – e.g. documentation and understandability
– can be integrated. There is some research currently in
progress into the economic evaluation of FOSS using estimation models [13][20].
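As a sketch of how such an estimation-based comparison might look, the fragment below uses a simplified form of the COCOMO II reuse model (an adaptation adjustment factor built from the percentages of design modified, code modified and integration effort, ignoring the software-understanding and unfamiliarity increments) together with the nominal effort equation of the COCOMO II.2000 calibration. The percentages and sizes in the usage example are invented placeholders, not data from any of the studies cited here.

    def equivalent_ksloc(adapted_ksloc, dm, cm, im, aa=0.0):
        """Simplified COCOMO II reuse model: convert adapted code into 'equivalent new' KSLOC.
        dm, cm, im: percentages of design modified, code modified and integration effort;
        aa: assessment-and-assimilation increment (also a percentage)."""
        aaf = 0.4 * dm + 0.3 * cm + 0.2 * im   # adaptation adjustment factor
        return adapted_ksloc * (aa + aaf) / 100.0

    def nominal_effort(ksloc, scale_factor_sum=18.97):
        """Nominal effort in person-months, COCOMO II.2000 constants (A = 2.94, B = 0.91)."""
        exponent = 0.91 + 0.01 * scale_factor_sum
        return 2.94 * ksloc ** exponent

    # Hypothetical comparison: adapt a 40-KSLOC FOSS component vs. build 12 KSLOC in-house.
    reuse = nominal_effort(equivalent_ksloc(40, dm=10, cm=20, im=30, aa=4))
    build = nominal_effort(12)
    print(f"adapting FOSS: {reuse:.1f} PM, building in-house: {build:.1f} PM")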
References
[1] T. J. Biggerstaff. "Is technology a second order term in reuse’s
success equation?" In Proceedings of Third International Conference on Software Reuse, 1994.
[2] Barry Boehm and Chris Abts. "COTS integration: Plug and
pray". IEEE Computer, 32(1):135–138, 1999.
[3] Barry W. Boehm, Chris Abts, A. Windsor Brown, Sunita
Chulani, Bradford K. Clark, Ellis Horowitz, Ray Madachy,
Donald Reifer, and Bert Steece. "Software Cost Estimation
with CoCoMo II". Prentice Hall PTR, Upper Saddle River, 1
edition, 2000.
[4] Samuel Daniel Conte, H.E. Dunsmore, and V.Y. Shen. "Software Engineering Metrics and Models". The Benjamin/
Cummings Publishing Company, Menlo Park, CA, 1986.
[5] Norman E. Fenton. "Software Metrics - A Rigorous Approach".
Chapman & Hall, London, 1991.
[6] Martin Fink. "The Business and Economics of Linux and Open
Source". Prentice Hall, Upper Saddle River, 2002.
[7] Brian Fitzgerald and Tony Kenny. "Developing an information systems infrastructure with open source". IEEE Software,
21(1):50–55, 2004.
[8] Christina Gacek and Budi Arief. "The many meanings of open source". IEEE Software, 21(1):34–40, 2004.
[9] Bernard Golden. "Succeeding with Open Source". Addison-Wesley, Boston, 2005.
[10] B. Henderson-Seller. "Object-Oriented Metrics: Measures of
Complexity". Prentice Hall, Upper Saddle River, NJ, 1996.
[11] Emmanuel Henry and Benoit Faller. "Large-scale industrial
reuse to reduce cost and cycle time". IEEE Software,
12(5):47–53, 1995.
[12] Till Jaeger and Carsten Schulz. "Gutachten zu ausgewählten
rechtlichen aspekten der open source software". Technical
report, JBB, 2005. <http://opensource.c-lab.de/files/
portaldownload/Rechtsgutachten-NOW.pdf>.
[13] S. Koch and C. Neumann. "Evaluierung und aufwandsschätzung
bei der integration von open source software-komponenten". In
Informatik 2005 -Beiträge der 35. Jahrestagung der Gesellschaft
für Informatik e.V. (GI), Lecture Notes in Informatics (LNI) ,
Gesellschaft für Informatik (GI), 2005. (To appear.)
[14] Stefan Koch. "Profiling an open source project ecology and
its programmers". Electronic Markets, 14(2):77–88, 2004.
[15] Stefan Koch and Georg Schneider. "Effort, cooperation and
coordination in an open source software project: Gnome".
Information Systems Journal, 12(1):27–42, 2002.
[16] Wayne C. Lim. "Effects of reuse on quality, productivity,
and economics". IEEE Software, 11(5):23–30, September
1994.
[17] Ulf Lindvist and Erland Jonsson. "A map of security risks
associated with using COTS". IEEE Computer, 31(6):60–66,
June 1998.
[18] M. Lorenz and J. Kidd. "Object Oriented Metrics". Prentice
Hall, Upper Saddle River, N.J., 1995.
[19] T.R. Madanmohan and Rahul De. "Open source reuse in commercial firms". IEEE Software, 21(1):62–69, 2004.
[20] C. Neumann. "Bewertung von open source frameworks als
ansatz zur wiederverwendung". In Informatik 2005 - Beiträge
der 35. Jahrestagung der Gesellschaft für Informatik e.V. (GI),
Lecture Notes in Informatics (LNI) , Gesellschaft für
Informatik (GI), 2005. (To appear.)
[21] Jeffrey S. Norris. "Mission-critical development with open
source software: Lessons learned". IEEE Software, 21(1):42–
49, 2004.
[22] Ruben Prieto-Diaz. "Status report: software reusability". IEEE
Software, 10(3):61–66, May 1993.
[23] Eric S. Raymond. "The Cathedral and the Bazaar: Musings
on Linux and Open Source by an Accidental Revolutionary".
O’Reilly and Associates, Sebastopol, California, 1999.
[24] Donald J. Reifer, Victor R. Basili, Barry W. Boehm, and Betsy
Clark. "Eight lessons learned during COTS-based systems
maintenance". IEEE Software, 20(5):94–96, 2003.
[25] Gerald Spindler and Christian Arlt. "Rechtsfragen bei Open
Source". Schmidt, Köln, 2004.
[26] Richard Stallman. "Free Software, Free Society: selected essays of Richard M. Stallman". GNU Press, Boston, 2002.
[27] Jeffrey M. Voas. "The challenge of using cots software in
component based development". IEEE Computer, 31(6):44–
45, June 1998.
[28] Huaiqing Wang and Chen Wang. "Open source software adoption: A status report". IEEE Software, 18(2):90–95, March/April 2001.
[29] Ulrich Wuermerling and Thies Deike. "Open source software: Eine juristische risikoanalyse". Computer und Recht,
(2):87–92, 2003.
[30] Qun Zhong and Nigel Edwards. "Security control for COTS
components". IEEE Computer, 31(6):67–73, June 1998.
Mosaic
This section includes articles about various ICT (Information and Communication Technologies) matters, as well as news regarding CEPIS
and its undertakings, and announcements of relevant events. The articles, which are subject to a peer review procedure, complement our
monographs. For further information see "Structure of Contents and Editorial Policy" at <http://www.upgrade-cepis.org/pages/editinfo.html>.
Computational Linguistics
Multilingual Approaches to Text Categorisation
Juan-José García-Adeva, Rafael A. Calvo, and Diego López de Ipiña
In this article we examine three different approaches to categorising documents from multilingual corpora using machine
learning algorithms. These approaches satisfy two main conditions: there may be an unlimited number of different languages in the corpus and it is unnecessary to previously identify each document’s language. The approaches differ in two
main aspects: how documents are pre-processed (using either language-neutral or language-specific techniques) and
how many classifiers are employed (either one global or one for each existing language). These approaches were tested
on a bilingual corpus provided by a Spanish newspaper that contains articles written in Spanish and Basque. The empirical findings were studied from the point of view of classification accuracy and system performance including execution
time and memory usage.
Keywords: Document Classification,
Machine Learning, Multilingual Corpus, Neutral Stemming, Text Categorisation.
Juan-José García-Adeva received his
BEng degree in Computer Engineering from the Mondragón Engineering School of the University of the Basque Country, Spain, and an MSc by research
in Computer Science from the University
of Essex, UK. He worked for several
years on different topics of Artificial
Intelligence in research centres of Spain,
UK, and USA. He is currently working
towards his PhD in the School of
Electrical and Information Engineering
of the University of Sydney, Australia.
<[email protected]>
Rafael A. Calvo is Senior Lecturer, Director of the Web Engineering Group and Associate Dean of ICT at the University of Sydney - School of Electrical and Information Engineering. He has a PhD in Artificial Intelligence applied to automatic document classification. He has taught at several universities, high schools and professional training institutions. He worked at Carnegie Mellon University, USA, and Universidad Nacional de Rosario, Argentina, and as an Internet consultant for projects in Australia, Brazil, USA and Argentina. He is author of a book and over 50 other publications in the field, and is also on the board of the Elearning Network of Australasia and the .LRN Consortium. <[email protected]>

Diego López de Ipiña has a BSc in Computer Science from the University of Deusto, Spain, an MSc in Distributed Systems from the University of Essex, UK, and a PhD in Engineering from the University of Cambridge, UK. Currently, he works as a lecturer in the Faculty of Engineering of the University of Deusto. His main research interests are middleware for mobile systems and ubiquitous computing. <[email protected]>
1 Introduction
Text management techniques are an
important topic in the field of Information Systems. They have been gaining popularity over the last decade with
the increased amount of digital documents available and, thus, the necessity of accessing their content in flexible ways [15]. From these techniques,
one of the most prominent is Text Categorisation using Machine Learning,
currently relying on a very active and
large research community. However,
the vast majority of this research is
done using English corpora, with much
less attention paid to other languages
or multilingual environments. Some recent projects have applied cross-lingual approaches to environments with very few or no training documents in the language for which documents need to be classified [2][8]. We believe that approaches like the ones presented here are efficient in multilingual contexts where an adequate number of training instances exists for each supported language.
In this work we are interested in the tools for writing, distributing and selling news stories to consumers. This industry is one of those most affected by the Internet revolution and, therefore, in great need of the ability to process digital content effectively. Because
the target audiences are culturally diverse, there is often a need to express
the same content in different languages
even within the same context (e.g.
country, region, community, etc.). For
example, in certain multilingual countries, a single newspaper carries news
in several (usually two or three) languages in order to cover the largest
number of readers. This situation is
particularly common in Europe and
presents an interesting application
problem to text categorisation, where
the documents to be classified are provided in more than one language under the same set of categories.
There are several sensible approaches to solving this problem. They
include the possible use of language
identification, language-dependent or
neutral preprocessing of documents,
single or multiple classifiers involving
one or more learning algorithms, etc.
The configuration strongly depends on
the characteristics of the multilingual
corpus. For example, if the corpus
documents contain no information on
the language they are written in, a language identification step might be necessary. It is desirable to explore the
diverse possible configurations in order to learn which one best fits a given
document corpus to obtain the best results. The system-related performance
of these configurations, including
memory usage and execution time,
may also be considered important in
production environments.
The structure of this paper is as follows. Section 2 includes some background information on the main algorithms and methods used in this work.
Section 3 describes the corpus of bilingual (Spanish and Basque) newspaper articles used for experimentation.
Section 4 briefly describes the software
framework employed for performing
the experiments. Section 5 details the
three different approaches we propose.
Section 6 contains the configuration
used in the experiments, while Section 7 discusses the corresponding results and some derived ideas on future
work.
2 Background
This section contains the description of the learning algorithms used by
the classifiers as well as the automatic
language identification functionality. It
also includes an explanation of the
classification accuracy measures studied in this work.
2.1 Base Learners

2.1.1 Naïve Bayes
Naïve Bayes [10] is a probabilistic classification algorithm based on the assumption that any two terms from T = {t1, ..., t|T|} representing a document d classified under category c are statistically independent of each other. This can be expressed by

P(d | ci) = ∏k=1..|T| P(tk | ci)    (1)

The category predicted for d is based on the highest probability given by

c(d) = argmax ci [ P(ci) ∏k P(tk | ci) ]    (2)

Two commonly used probabilistic models for text classification under the Naïve Bayes framework are the multi-variate Bernoulli and the multinomial models. These two approaches were compared in [12] and the multinomial model proved to perform significantly better than the multi-variate Bernoulli, which motivated us to choose it for this work.

2.1.2 k-Nearest Neighbours
k-Nearest Neighbours (kNN) is an example-based classification algorithm [17] where an unseen document is classified with the category of the majority of the k most similar training documents. The similarity between two documents can be measured by the Euclidean distance of the n corresponding feature vectors representing the documents

δ(di, dj) = √( Σk=1..n (wk,i − wk,j)² )    (3)

All neighbours can be treated equally or a weight can be assigned to them according to their distance to the document to categorise. We selected two weighting methods: inverse to the distance (1 / s) and opposite to the distance (1 − s). When several of these k nearest neighbours have the same category, their weights are added together, and the final weighted sum is used as the probability score for that category. Once they have been sorted, a list of category ranks for the document to categorise is produced.

Building a kNN categoriser also includes experimentally determining a threshold k. The best effectiveness is obtained with 30 ≤ k ≤ 45 [16]. It is also interesting to note that increasing the value of k does not degrade the performance significantly.
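A minimal sketch of the weighted voting just described, assuming that documents are already represented as plain feature vectors of equal length and, for the "1 − s" variant, that distances have been normalised to [0, 1]:

    import math
    from collections import defaultdict

    def euclidean(u, v):
        """Euclidean distance between two feature vectors (Equation 3)."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    def knn_rank(doc, training, k=30, weighting="inverse"):
        """Rank categories for 'doc' by weighted kNN voting.
        'training' is a list of (feature_vector, category) pairs."""
        neighbours = sorted(((euclidean(doc, vec), cat) for vec, cat in training))[:k]
        scores = defaultdict(float)
        for dist, cat in neighbours:
            if weighting == "inverse":      # weight inverse to the distance (1 / s)
                scores[cat] += 1.0 / (dist + 1e-9)
            else:                           # weight opposite to the distance (1 - s)
                scores[cat] += 1.0 - dist
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)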
2.1.3 Rocchio
Rocchio is a profile-based classification algorithm [13] adapted from the classical Vector Space model with TF/IDF weighting and relevance feedback to the classification situation. This kind of classifier makes use of a similarity measure between a representation (also called profile) pi of each existing category ci and the unseen document dj to classify. This similarity is usually estimated as the cosine of the angle between the vector that represents ci and the feature vector obtained from dj. A document to classify is considered to belong to a particular category when its related similarity estimation is greater than a certain threshold.

First, a feature frequency function is defined

(4)

where F is the set of all existing features with f ∈ F, ni expresses in how many documents fi appears, and r is the function of relative relevance of multiple occurrences, which can be defined by r(f, d) = max{0, log(0, nf)}.
The profile pi of a category ci is a vector of weights where one instance is calculated by

pi,f = β (1/|Dc|) Σd∈Dc wf,d − γ (1/|D̄c|) Σd∈D̄c wf,d    (5)

where Dc is the set of documents belonging to c and D̄c the set of documents not belonging to c. The parameters β and γ control the relative impact of these positive and negative instances on the vector of weights, with standard values being β = 16 and γ = 4 [13].
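A sketch of the profile computation and the cosine similarity described above, assuming that the TF/IDF weight of every feature in every training document has already been computed and stored in a nested dictionary:

    from collections import defaultdict

    def rocchio_profile(weights, positive_docs, negative_docs, beta=16.0, gamma=4.0):
        """Build the profile of one category from positive and negative training documents.
        weights[d][f] is the TF/IDF weight of feature f in document d."""
        profile = defaultdict(float)
        for d in positive_docs:
            for f, w in weights[d].items():
                profile[f] += beta * w / len(positive_docs)
        for d in negative_docs:
            for f, w in weights[d].items():
                profile[f] -= gamma * w / len(negative_docs)
        return profile

    def cosine(profile, doc_vector):
        """Similarity between a category profile and a document's feature vector."""
        dot = sum(w * doc_vector.get(f, 0.0) for f, w in profile.items())
        norm_p = sum(w * w for w in profile.values()) ** 0.5
        norm_d = sum(w * w for w in doc_vector.values()) ** 0.5
        return dot / (norm_p * norm_d) if norm_p and norm_d else 0.0

    # A document is assigned to every category whose cosine similarity exceeds a chosen threshold.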
2.2 Language Identification
The method we used in this work
is based on computing and comparing
language profiles using n-gram frequencies. An n-gram is a chunk of contiguous characters from a word. For example, the word hello contains the 3-grams: _he, hel, ell, llo, lo_, and o__, with _ representing a blank. In general, the number of n-grams that can
be obtained from a word of length w is
w+1.
On the one hand, Zipf’s law [18]
establishes that in human language
texts the word ranked in n-th position
according to how common it is, also
occurs with a frequency inversely proportional to n. On the other hand, experiments performed in [3] determined
that around the 400 most frequent n-grams in a language are almost always
highly correlated to that language. It
is now reasonable to generate a language profile using an arbitrary collection of documents all in the same language.
For each document, the punctuation and digit characters are removed, while letters and apostrophes are kept. The remaining text is tokenised, and each token properly padded with previous and posterior blanks. Values of n from 1 to 5 are habitually enough. These n-grams are used to create a dictionary with the 400 most frequent n-grams ranked by frequency. Therefore, for each supported language, a profile pi ∈ P = {p1, ..., p|P|} is generated.
Comparing two language profiles consists of calculating

δ(pi, pj) = Σk=1..|Gi| | K(gk, pi) − K(gk, pj) |    (6)

where Gi is the set of n-grams in pi and gk the n-gram of pi ranked in k-th position. The function K(g, pi) returns the rank of g in pi. If g ∉ pi then a very high value is used so that K(g, pi) >> |Gi|.

When the language of a new document d is to be identified, its profile p is calculated and compared to each existing language profile in P. The chosen language will be that of the profile whose index is obtained by

argmin { δ(p, p1), δ(p, p2), …, δ(p, p|P|) }    (7)
2.3 Measures of Classification Accuracy
Precision (π) and recall (ρ) are two common measures for assessing how successful a text categoriser is. Precision indicates the probability that a document assigned to a certain category by the classifier actually belongs to that category. Conversely, recall estimates the probability that a document that actually belongs to a certain category will be correctly assigned to that category during the categorisation process. These two measures are defined by

πi = TPi / (TPi + FPi),    ρi = TPi / (TPi + FNi)    (8)

where TPi indicates the number of true positives, or how many documents were correctly classified under category ci. Similarly, FPi indicates the number of false positives and FNi corresponds to false negatives. Table 1 provides an overview of these measures.

                              Expert decision for category ci
                              Yes       No
    Classifier decision  Yes  TPi       FPi
                         No   FNi       TNi

Table 1: Category-specific Contingency Table.

However, precision and recall are generally combined into a single measure called Fβ, where 0 ≤ β ≤ ∞. The parameter β serves to balance the importance of π and ρ, and can be expressed by

Fβ = (β² + 1) π ρ / (β² π + ρ)    (9)

Values of β near 0 give more importance to π, while those closer to ∞ give more relevance to ρ. The most commonly applied value is 1, which gives the same importance to both π and ρ. Therefore, Equation 9 is transformed into

F1 = 2 π ρ / (π + ρ)    (10)

Instead of using category-specific values of F1, an averaged measure is usually preferred, namely the macro- and micro-average, identified by F1M and F1µ respectively. Micro-averaging gives more emphasis to the performance of the categories that are more frequent (i.e. there are more training documents for these categories) and is defined by

F1µ = 2 πµ ρµ / (πµ + ρµ),  where  πµ = Σi=1..|C| TPi / Σi=1..|C| (TPi + FPi)  and  ρµ = Σi=1..|C| TPi / Σi=1..|C| (TPi + FNi)    (11)

where |C| indicates the number of existing categories. In contrast, macro-averaging focuses on uncommon categories. Micro-averaged measures will almost always have higher scores than the macro-averaged ones. It can be expressed by

F1M = (1/|C|) Σi=1..|C| F1,i    (12)
Finally, the whole error measure is calculated by (13).
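The sketch below computes these measures from per-category contingency counts; the counts in the usage line are invented for illustration and do not come from the experiments reported later:

    def precision(tp, fp):
        return tp / (tp + fp) if tp + fp else 0.0

    def recall(tp, fn):
        return tp / (tp + fn) if tp + fn else 0.0

    def f1(p, r):
        return 2 * p * r / (p + r) if p + r else 0.0

    def averaged_f1(counts):
        """counts: {category: (TP, FP, FN)}. Returns (micro-averaged F1, macro-averaged F1)."""
        tp = sum(c[0] for c in counts.values())
        fp = sum(c[1] for c in counts.values())
        fn = sum(c[2] for c in counts.values())
        micro = f1(precision(tp, fp), recall(tp, fn))
        macro = sum(f1(precision(t, p), recall(t, n)) for t, p, n in counts.values()) / len(counts)
        return micro, macro

    # Invented counts for three categories:
    print(averaged_f1({"Deportes": (90, 10, 5), "Tolosa": (40, 20, 15), "Gipuzkoa": (8, 2, 12)}))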
3 The Corpus of Newspaper
Articles
In this work we contribute a
new corpus of newspaper articles
called Diario Vasco, which contains
about 75,000 articles written in either
Spanish or Basque that were published
during the year 2004 in the newspaper
Diario Vasco, Gipuzkoa, Spain. The
corpus is divided into monthly lots,
with around 6,500 news articles per
month, November being the month that
had the most news, with 7,121,
whereas December had the fewest,
with 5,501. Each item of news is embodied in an XML (eXtensible Markup Language) file that contains some additional information other than the article contents and its category. These categories correspond to the newspaper section where the news was published.
The 20 different categories of this corpus are quite skewed. On average, the category Deportes is the most popular, having some 1,000 articles in any given month. Other categories with a large number of documents include Tolosa and CostaUrola, each with around 500 instances per month. The least populated categories are Gipuzkoa and Contraportada, with little more than 100 articles in most months.
As a general case, every category
contains articles in both languages, although it is possible to find exceptional
days where a particular category contains articles in only one language. For
the current corpus of 2004, we estimated, by means of the automatic language identification technique described in Section 4, that 85% of the articles were written in Spanish and the remaining 15% in Basque.
There are two important remarks to make about this corpus. The first is that each article has only one category. The second is the lack of information about which language each article was written in.
4 Software Framework
All the experiments covered in this
paper were performed on Awacate [1],
an object-oriented software framework
for Text Categorisation, which is written in Java and available as open
source.
It was designed with the aim of being used mainly in the context of web applications. It offers numerous features to suit both a production environment, where performance is crucial, and a research context, where highly configurable experiments must be executed.
Awacate includes several learning
algorithms such as Naïve Bayes,
Rocchio, SVM (Support Vector Machines), and kNN. They can also be
used as the base binary learners for
ensembles using the decomposition
methods One-to-All, Pairwise Coupling, and ECOC (Error Correcting
Output Codes) [5]. The documents to
categorise can be provided in English,
German, French, Spanish, and Basque
and they may belong to one or several
categories.
Awacate offers a complete evaluation of results, including the category-specific TPi, FPi, FNi, πi, ρi, and F1, and the averaged πµ, ρµ, Fµ1, πM, ρM, and FM1, as well as partitioning of the testing space using n-fold Cross Validation. Awacate can be used in production systems due to its high scalability, based on a cache system that allows for precise control of the amount of memory allocated, and its performance efficiency, thanks to a carefully tuned-up code base.
In this project we have added two new functionalities to Awacate: automatic language identification and language-neutral stemming.
5 Approaches
In this section we propose three different multilingual approaches to text categorisation. They differ in two main aspects: whether a single or a one-per-language classifier is used, and how
the documents are processed according to their language. By document
processing, we refer to the pre-processing stage (tokenisation, stop-word removal, stemming, etc.) used to select features, as well as the later creation of the feature vectors used by the learning algorithms.
5.1 Language-neutral Document
Processing and A Single Classifier
(1P1C)
In this approach, a single classifier
learns from training documents regardless of which language they are in. The
feature vectors used by this classifier
are built using a language-neutral
method, meaning that instead of using
a common word stemming approach,
we use n-grams. This is possible because some of the n-grams obtained
from a word will comprise only parts
of a word with no morphological variation [11]. For example, the words runner, running, and run share the 3-gram
run.
Selecting the n-grams that will later
be used as features consists of building a dictionary with all the possible
n-grams found in all the training documents, and then choosing those with
the highest inverse document frequency. The reason for choosing this
particular type of frequency is that affixes indicating a particular morphological variation will be found frequently in many words, and therefore
will offer a low inverse document frequency [11]. The best value of n depends on the language(s) of the documents and their context, and hence
finding it demands running several
experiments for evaluation and configuration.
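A minimal Python sketch of this selection step follows; the character n-gram extraction, the value of n and the number of features kept are illustrative assumptions, not the configuration actually used in the experiments.

```python
import math
from collections import defaultdict

def char_ngrams(text, n=5):
    """Set of character n-grams occurring in one document."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def select_ngram_features(documents, n=5, num_features=5000):
    """Pick the n-grams with the highest inverse document frequency."""
    doc_freq = defaultdict(int)
    for doc in documents:
        for gram in char_ngrams(doc, n):
            doc_freq[gram] += 1
    total = len(documents)
    # Affixes marking morphological variation appear in many documents and so
    # get a low idf; keeping the highest-idf n-grams filters them out.
    idf = {g: math.log(total / df) for g, df in doc_freq.items()}
    return sorted(idf, key=idf.get, reverse=True)[:num_features]
```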
Figure 1: Classification Accuracy.

The major advantage of using this method is its language-neutrality. No particular knowledge is required about either which language each document is in or how many languages are present in the corpus. The only parameter to adjust is the optimum value of n (in our experience, a value around 5 will usually work sufficiently well). The most likely disadvantage is its high system memory requirement, due to the large dictionary needed.
5.2 Language-specific Document
Processing and A Single Classifier
(NP1C)
This approach is similar to Approach 1P1C in the use of a unique
classifier regardless of how many different languages are dealt with. The
main difference is how the documents
are processed, using a different procedure for each language. This means
that as many document processing
modules are needed as there are existing languages. Each of these processors includes a stop-word remover and
a word stemmer. In the case of stemming, we used Porter’s rule-set [14] for
Spanish and the rule-set illustrated
in [4] for Basque.
The distinctive step in this approach is building the common knowledge set with the information obtained from the training documents. This knowledge set comprises the features selected to represent the documents as feature vectors, as well as the collection of existing categories. The knowledge set is built as part of the pre-processing stage and used to generate feature vectors. In order to extract the features from a document, its language has to be previously identified, and then the corresponding pre-processing procedures applied. After all the features have been extracted from the training documents, the common knowledge set is used to create the feature vectors, independently of the document from which they derive.
Similarly, when a new document has to be categorised, it is first pre-processed according to its language, which has to have been previously identified. Then its feature vector can be created and used by the classifier to infer its category.
A limitation of this approach may be the non-availability of a stop-words list or stemming rules for a particular language.

Figure 2: Execution Time.
5.3 Language-specific Document
Processing and Independent
Classifiers (NPNC)
In this approach the documents are
processed using language-specific
stop-word lists and stemming rule-sets,
similar to Approach NP1C. However,
there exists one independent classifier
for each language found in the training documents, with its own configuration of the learning algorithm. There
is also a different knowledge set for
each language, each one containing the
particular features of a language.
During the first stage of document
analysis, existing languages and categories are determined and used to instantiate the corresponding classifiers and feed
the knowledge sets. After this task is finished, the learning algorithms have to be
trained. Therefore, feature vectors are
created using the knowledge set that cor-
responds to the language of the document
in question, and eventually used for the
training process.
Consequently, in order to categorise a document, after the document's language is identified, its corresponding knowledge set is used to create its feature vector, which is then sent to the proper classifier so that its category can eventually be deduced.
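As an illustration of this per-language routing, here is a minimal Python sketch; the preprocessor and classifier objects are hypothetical stand-ins (not Awacate classes), and the language identifier is assumed to be supplied by the caller.

```python
class NPNCCategoriser:
    """One independent pipeline (knowledge set + classifier) per language."""

    def __init__(self, pipelines):
        # pipelines: dict mapping language code -> (preprocessor, classifier)
        self.pipelines = pipelines

    def categorise(self, document, identify_language):
        lang = identify_language(document)        # e.g. the n-gram based identifier
        preprocessor, classifier = self.pipelines[lang]
        features = preprocessor.to_feature_vector(document)
        return classifier.predict(features)       # category inferred per language
```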
This approach may suffer from the same limitation as Approach NP1C. Additionally, a lack of sufficient training instances of documents in a certain language when dealing with multilingual corpora may be a problem. In such a situation, the relevant classifier might be insufficiently trained, resulting in lower accuracy being obtained.
6 Experimental Configurations
We chose the articles published during November 2004, which comprised 7,121 documents with a total size of 24 MB, averaging 4.88 KB per document. For each experiment execution, all the documents in this month
were randomly shuffled and then we
applied the category-based holdout
validation method, using an 80/20 split
with a cap at 300 training documents.
In other words, 80% of the documents
found in each category, up to a maximum of 300, were randomly chosen to
be used for training. The remaining
documents were used for testing. Each
experiment was executed 10 times and
their results eventually averaged.
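A minimal Python sketch of this category-based holdout split follows; the 80/20 ratio and the 300-document cap are taken from the description above, while the data structures are illustrative.

```python
import random

def category_holdout(docs_by_category, train_ratio=0.8, cap=300, seed=None):
    """Split each category 80/20 into train/test, capping training docs at 300."""
    rng = random.Random(seed)
    train, test = [], []
    for category, docs in docs_by_category.items():
        docs = docs[:]
        rng.shuffle(docs)
        n_train = min(int(len(docs) * train_ratio), cap)
        train += [(d, category) for d in docs[:n_train]]
        test += [(d, category) for d in docs[n_train:]]
    return train, test
```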
Feature selection was performed using the function χ²(ti, cj), which measures the dependence of the category cj on the occurrence of the term ti. Using this function produced better results than other feature selection functions such as Term Frequency, Document Frequency, or Information Gain. Using Naïve Bayes, we applied a reduction factor of 0.88, leaving approximately 17,551 features in Approach 1P1C, 5,000 features in Approach NP1C, and 4,278 (Spanish) + 1,133 (Basque) features in Approach NPNC. Using Rocchio, the reduction factor was 0.9, corresponding to 14,625, 4,286, and 3,508 + 955 features respectively. In the case of kNN, it was 0.95, representing 7,313, 2,284, and 1,854 + 517 features.
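A minimal sketch of χ² feature scoring in Python is shown below; the contingency-table form of χ² is the standard one, and interpreting the reduction factor as the fraction of candidate features discarded is an assumption made only for illustration.

```python
def chi_square(n_tc, n_t, n_c, n):
    """Chi-square between term t and category c from binary occurrence counts.

    n_tc: docs of category c containing t; n_t: docs containing t;
    n_c: docs of category c; n: total number of documents."""
    a = n_tc                    # t present, class c
    b = n_t - n_tc              # t present, other classes
    c = n_c - n_tc              # t absent, class c
    d = n - n_t - n_c + n_tc    # t absent, other classes
    denom = (a + c) * (b + d) * (a + b) * (c + d)
    return n * (a * d - b * c) ** 2 / denom if denom else 0.0

def reduce_features(scores, reduction_factor):
    """Keep the highest-scoring terms after discarding `reduction_factor` of them."""
    keep = int(len(scores) * (1 - reduction_factor))
    return sorted(scores, key=scores.get, reverse=True)[:keep]
```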
The feature vectors were built using a sparse representation for the sake
of memory usage efficiency. Each feature was weighted by means of the
function TF/IDF (ti,dj), which is based
on the assumption that those terms occurring in more documents have little
discriminatory strength among categories. Each TF value was normalised as
a fraction of the highest TF value found
in the document in question.
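A minimal Python sketch of this weighting scheme follows (sparse dictionary vectors); the log-based IDF form is an assumption, since the exact TF/IDF variant used is not specified here.

```python
import math
from collections import Counter

def tfidf_vector(tokens, doc_freq, num_docs):
    """Sparse TF/IDF vector with TF normalised by the document's maximum TF."""
    tf = Counter(tokens)
    max_tf = max(tf.values())
    vector = {}
    for term, count in tf.items():
        # Terms occurring in many documents get a low weight (little
        # discriminatory strength among categories).
        idf = math.log(num_docs / (1 + doc_freq.get(term, 0)))
        vector[term] = (count / max_tf) * idf
    return vector
```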
In the case of Approach 1P1C, the
optimum value for the n-grams was
found to be 6. For the learning algorithm kNN, the optimum value of k we
determined was 40. For the learning
algorithm Rocchio, the parameter values were α = 0, β = 20, γ = 0, with a
threshold of 0.8.
The computer used was a PC with dual 2.4 GHz Pentium 4 processors, running GNU/Linux 2.4.26. The code was run using Sun's Java version 1.4.2 in client mode.
7 Results and Conclusions
We were interested in the results
from two points of view: their classification accuracy and the system-related
performance, specifically execution
time and memory utilisation.
Approach  Algorithm         FM1     Fµ1     Error   Time    Memory
1P1C      Naïve Bayes       0.5993  0.7347  0.0101  35s     194 MB
1P1C      Rocchio           0.5648  0.6911  0.02    2m14s   198 MB
1P1C      kNN (w = 1.0)     0.4716  0.6142  0.0252  46m14s  122 MB
1P1C      kNN (w = 1 / s)   0.4454  0.6128  0.0243  46m14s  122 MB
1P1C      kNN (w = 1 - s)   0.5291  0.6446  0.0221  46m14s  122 MB
NP1C      Naïve Bayes       0.6039  0.746   0.0175  1m12s   92 MB
NP1C      Rocchio           0.3423  0.47    0.0338  2m46s   94 MB
NP1C      kNN (w = 1.0)     0.4908  0.6382  0.0217  18m48s  86 MB
NP1C      kNN (w = 1 / s)   0.5023  0.634   0.023   18m48s  86 MB
NP1C      kNN (w = 1 - s)   0.5172  0.6523  0.0237  18m48s  86 MB
NPNC      Naïve Bayes       0.6142  0.7587  0.016   1m56s   90 MB
NPNC      Rocchio           0.3483  0.469   0.0348  3m26s   91 MB
NPNC      kNN (w = 1.0)     0.4958  0.6481  0.0216  14m37s  84 MB
NPNC      kNN (w = 1 / s)   0.509   0.6382  0.0227  14m37s  84 MB
NPNC      kNN (w = 1 - s)   0.5332  0.6636  0.02    14m37s  84 MB

Table 2: Overall Experimental Measurements.
There are three sets of results
shown in Figures 1, 2 and 3. The precise measurements that these graphs
were based on can be found in Table 2.
Table 3 illustrates the category-specific measurements for the best result, the worst, and one in the middle. The classification accuracy is expressed in terms of the category-specific, macro-averaged, and micro-averaged break-even point between precision and recall, respectively identified by F1, FM1, and Fµ1 (see Section 2.3 for additional details).
The execution time and memory usage measurements are based on a full
run, including the processing of the
documents, training of the classifier(s),
and categorising of the test documents.
The three approaches worked reasonably well providing acceptable results regarding accuracy. (It might be
worthwhile to mention here that some
language-related tasks that are part of
the Text Categorisation process generally yield worse results in Spanish
than in English [7], with no available
published results about Basque.) However, it is obvious that Approach NPNC yields the best accuracy results in general and that Naïve Bayes is the most competent algorithm overall in terms of execution time, memory usage, and accuracy. Therefore, we can
conclude that Approach NPNC is the
best of the three proposed in this work.
It not only provides slightly better results than the other two, but these results are obtained using a significantly smaller number of features in the vectors representing the documents, contributing to better system performance and
course, there is no certainty that this
approach can be used in all situations,
so Approach 1P1C may still be the only
one applicable in some circumstances.
Even though Approach NP1C is the
simplest from the implementation point
of view, the results it offers are quite
acceptable and even comparable to the
other two approaches. As part of the
explanation about why this occurs, it
is important to bear in mind that while
the two languages are very different in
vocabulary, there are still many shared
terms, including proper names, numbers, acronyms, etc. There is an unexpectedly low accuracy with Rocchio in approaches NP1C and NPNC, which we attribute to the nature of this algorithm, which requires a larger number of training instances than the other two algorithms.
It is important to note that the accuracy results obtained are most probably not the best that can be achieved
in this environment. The adjustment of
the parameters in the learning algorithms, feature selection, and document processing (e.g. using a more
advanced Spanish stemming method
like that found in [6]) could be further
tuned up if the ultimate goal were to
find the best classification accuracy.
For example, different learning algorithms or configurations to the same
algorithm could be applied in Approach NPNC with the aim of maximising the outcome.
One of the obvious results concerning system performance is how much slower kNN is when compared to the other two classification algorithms. This is due to the cost of its classification process, which is linear in the number of training documents and their size [9]. Therefore, it is not clear what advantages kNN could provide, as its memory usage is close to that of Naïve Bayes and Rocchio.
Also remarkable are the significant memory needs of Approach 1P1C, with roughly twice as much memory used as in the other two approaches. Because its accuracy results are also the worst, we conclude that engaging in Approach 1P1C would only make sense when the languages
used by the corpus documents are unknown, or are known but no preprocessing procedures are available.

Category       Docs   NPNC Naïve Bayes    1P1C kNN (w = 1 - s)   NP1C Rocchio
                       F1      Error       F1      Error          F1      Error
Deportes        949    0.894   0.061       0.8352  0.1164         0.6792  0.1596
Opinion         278    0.2632  0.0131      0.5128  0.0268         0.129   0.038
Gente           150    0.5     0.0066      0.125   0.0099         0.1579  0.03
Cultura         360    0.6752  0.0239      0.5769  0.031          0.426   0.0455
SanSebastian    390    0.7514  0.0211      0.5435  0.0296         0.2791  0.0582
CostaUrola      531    0.75    0.0507      0.5405  0.0839         0.5077  0.0901
Contraportada   104    0.1739  0.0089      0.7778  0.0028         0       0.0465
Tolosa          596    0.7548  0.0601      0.5379  0.0945         0.5435  0.0986
Comarca         342    0.5789  0.0225      0.4842  0.0346         0.38    0.0291
Gipuzkoa        112    0.0714  0.0122      0       0.0085         0.0449  0.0399
AltoDeba        418    0.7273  0.0282      0.4124  0.0402         0.5     0.0554
Economia        270    0.5055  0.0211      0.7143  0.0113         0.3056  0.0235
AlDia           480    0.75    0.0385      0.5385  0.0508         0.4841  0.084
Mundo           354    0.7087  0.0174      0.5124  0.0416         0.6557  0.0197
BajoDeba        396    0.7707  0.0221      0.5316  0.0261         0.309   0.0756
Bidasoa         368    0.6752  0.0239      0.4167  0.0296         0.3371  0.0554
TVyRadio        213    0.7037  0.0075      0.85    0.0042         0.1455  0.0221
Politica        388    0.7685  0.0221      0.4691  0.0606         0.5357  0.0366
Pasaia          257    0.6494  0.0127      0.5     0.0212         0.2571  0.0244
AltoUrola       164    0.6122  0.0089      0.5     0.0099         0.169   0.0277
Total          7121    FM1 0.6142          FM1 0.5291             FM1 0.3423
                       Fµ1 0.7587          Fµ1 0.6446             Fµ1 0.47
                       Error 0.016         Error 0.0221           Error 0.0338

Table 3: Category-specific Experimental Measurements.
We should also note that the function used for feature selection is very
expensive from a computational point
of view due to its need for a contingency table that will usually contain
many thousands of cells. This may
have an especially important impact
when combined with the n-gram-based
neutral pre-processing of documents.
As a supplementary result worth mentioning, the language identification functionality employed by approaches NP1C and NPNC (see Section 2.2 for further details) proved to be extremely accurate. Our experiments showed that over 99% of the documents were correctly classified by language. Our results are consistent with those found in [3].
We have plans to conduct further experiments using other multilingual corpora, preferably some where the number of languages is greater than two, and then analyse how the findings are influenced by the number of languages. This should be especially important for the language-neutral document processing approach: the larger the number of languages supported, the greater the number of features, and therefore the greater the amount of system memory required. It would be interesting to learn how to determine the point at which this approach becomes too costly.
Another important aspect where further experimentation would be beneficial concerns corpora of very different languages (such as English and Chinese). We believe that the very different morphological structure of some languages would make only Approach NPNC suitable, and we suspect that applying Approach 1P1C or Approach NP1C would not yield very good results in these situations. The main reason is the very different number of existing terms, which makes language-specific pre-processing of the documents almost a requirement. For example, considering an English/Chinese bilingual corpus, while English is a quite inflectional language, Chinese is not at all. On the one hand, n-gram-based stemming is not very effective in non-inflectional languages and can actually introduce noise into the feature set instead of providing any benefit. On the other hand, the number of features needed for a non-inflectional language with a large vocabulary like Chinese is much larger than for English. Therefore, using a common feature set (i.e. 1P1C and NP1C) would work against English, which would have far fewer features.
Acknowledgements
The authors would like to thank the
Documentation Chief Editor of the newspaper Diario Vasco, Gipuzkoa, Spain,
<http://www.diariovasco.com/>, for kindly
providing us with the collection of articles.
We are also very grateful to the anonymous
reviewers for the extremely useful comments on the manuscript.
References
[1] J. J. García Adeva. "Awacate: Towards
a Framework for Intelligent Text Categorisation in Web Applications".
Technical report, University of Sydney,
2004.
[2] Nuria Bel, Cornelis H. A. Koster, and
Marta Villegas. "Cross-lingual text categorization". In ECDL, pages 126–
139, 2003.
[3] William B. Cavnar and John M.
Trenkle. "N-Gram-Based Text Categorization". In Proceedings of SDAIR-94,
3rd Annual Symposium on Document
Analysis and Information Retrieval,
pages 161–175, Las Vegas, US, 1994.
[4] Ion Errasti. "Snowball-erako
euskarazko lematizatzailea: sistema eta
lengoaia orotarako eramangarria".
Technical report, Eusko Jaularitza,
2004.
[5] J. J. García Adeva and Rafael A. Calvo.
"A Decomposition Scheme based on
Error-Correcting Output Codes for Ensembles of Text Categorisers". In Proceedings of the IEEE International
Conference on Information Technology and Applications (ICITA), 2005.
[6] A. Honrado, R. Leon, R. O’Donnel,
and D. Sinclair. "A word stemming algorithm for the spanish language". In
SPIRE ’00: Proceedings of the Seventh
International Symposium on String
Processing Information Retrieval
(SPIRE’00), page 139. IEEE Computer Society, 2000.
[7] J. Kamps, C. Monz, M. de Rijke, and
B. Sigurbjörnsson. "Monolingual
document retrieval: English versus
other European languages". In A.P.
de Vries, editor, Proceedings DIR
2003, 2003.
[8] Victor Lavrenko, Martin Choquette,
and W. Bruce Croft. "Cross-lingual relevance models". In SIGIR ’02: Proceedings of the 25th annual international ACM SIGIR conference on Research and development in information
retrieval, pages 175–182. ACM.
Software Engineering
A Two Parameter Software Reliability Growth Model with An Implicit Adjustment Factor for Better Software Failure Prediction
S. Venkateswaran, K. Ekambavanan, and P. Vivekanandan
The objective of this paper is to develop a Software Reliability Growth Model (SRGM) with a focus on having a simple
model with good prediction capability. To keep the model simple, the strategy is to limit the number of parameters, whereby parameter estimation and model implementation become easier. Good prediction capability is to be achieved by taking
advantage of the benefits of an existing model instead of developing another from scratch. A new function is introduced
into an existing model to compensate for its current behavior, viz., exponential decrease in the failure intensity rate. The
prediction capability of this new model (that we have called VPV) was then analyzed and also compared with a few well
known three parameter SRGM’s. The results were found to be good.
Keywords: Error Detection Rate,
Estimation of Parameters, Failure Intensity, Mean Value Function, Prediction Deviation.
1 Introduction
A number of reliability growth
models have been developed since
1967 to address the need of ensuring
software reliability. In 1972, a major
study was made by Jelinski and Moranda. They applied the Maximum Likelihood Estimation (MLE) technique to determine the total number of faults in the software. This technique, viz. MLE, is even used today to make the model parameter estimates. In 1975, John D. Musa presented the Execution Time model, in which he
brought in the concept of actual processor time utilized in executing a program instead of calendar time. In 1979,
Goel and Okumoto described failure
detection as a Non-Homogenous
Poisson Process (NHPP) with an
exponentially decaying rate function.
The cumulative number of failures detected and the distribution of the
number of remaining failures was
found to be Poisson. Yamada et al., in 1983, came out with another model where the cumulative number of failures detected is described by an S-shaped curve. In 1984, Musa and
Okumoto introduced the Logarithmic
Poisson Model (LPN) [4]. In recent
times, we have the Log-Power model
Xie 1993 [10] and the PNZ-SRGM
(Software Reliability Growth Model)
model, Pham et al 1999 [2], which are
variations of the S-Shaped model.
In the design of the new VPV-SRGM model, the focus has been on
factors such as Simplicity, Capability
and Applicability. Also, the strategy
was to have an existing working model
as the base for developing a new model
(thereby taking advantage of the benefits already available). After analyzing
the design of a number of models, the
LPN model was found to be simple in
concept as compared to other models.
It also has an implicit debugging feature One aspect in the design of this
model was that the failure intensity
decreases exponentially with failures
experienced. But this capability of the
model will not suffice in the case when
the testing time increases, i.e., when
faults get hidden during the earlier test
cycle but manifest at a later stage.
A adjustment factor is introduced
in the LPN model to balance the negative exponential decrease in the fail-
ure intensity. This paper describes the
modification performed on the LPN
model to achieve better prediction capability without losing out on its simplicity. The failure prediction performance capability is analyzed using multiple failure data sets. Also, for a clear
understanding of the improved capability of the VPV model, a comparison
is made with the existing LPN model
as well as with other three parameter
SRGM models to present the relative
performance of the VPV model.
S. Venkateswaran received his Master’s
Degree in Applied Mathematics from the
University of Madras, Chennai, India. He
is currently doing research at Anna
University, India, in the areas of software
reliability and security. He has over 20
years of experience in the IT industry, has
worked on software projects abroad, and
also serves as a visiting faculty at Anna
University. <[email protected]>

K. Ekambavanam received his PhD in Mathematics from Anna University, India. He is a Professor in the Dept. of Mathematics of the same university and has over 25 years of teaching experience. <[email protected]>

P. Vivekanandan received his PhD in Mathematics from Madras University and an ME in Computer Science from Anna University, India. He is a Professor in the Dept. of Mathematics in the same university and has over 24 years of teaching experience and over fifty research papers to his credit. His research areas of interest are computer networks, security and software reliability. <[email protected]>
2 Logarithmic Poisson (LPN)
SRGM
If we denote N(t) as the number of
software failures that have occurred by
time t, the process {N(t); t>=0} can
generally be modeled as a Non-homogenous Poisson process (NHPP) with
mean value function m(t) where m(t)
= E[N(t)].
The basic assumptions made for this model are:
- The software system is subject to failure at random times caused by software faults.
- There are no failures at time t = 0.
- Failure intensity is proportional to the residual fault content (and this is considered to be decreasing following a negative exponential distribution).
- Faults located are immediately corrected and will not appear again.
The mean value function of this model, as given by Musa and Okumoto [4], is as follows:

m(t) = a * ln(1 + b * t)

where a and b are the model parameters and t is the testing time. The failure intensity function of this model, obtained by taking the first derivative of the MVF, is as follows:

λ(t) = a * b / (1 + b * t)
3 VPV SRGM Model
3.1 Formulation
The LPN model is the basis for the
design of the VPV model. The aim is
to take advantage of the LPN model's simplicity while introducing an adjustment factor for better predictability. Now
consider the Failure Intensity function
(λ(t)) of LPN. It is given as:
d m(t)/dt = λ(t) = a * b / (1 + b * t)
Notations
m(t)   : Expected number of observed failures during the time interval [0, t)
λ(t)   : Failure Intensity Function
R(x/s) : Software Reliability
p      : Probability of perfect debugging
MVF    : Mean Value Function
MLE    : Maximum Likelihood Estimation
VPV    : VPV Model
LPN    : Logarithmic Poisson Model
KGM    : Kapur-Garg Imperfect Debugging Model
YIM    : Yamada Imperfect Debugging Model
KYL    : Kapur-Younes Learning Model
INF    : Inflection S Shaped Model
SRGM   : Software Reliability Growth Model
This can alternatively be re-written as:

d m(t)/dt = λ(t) = a * b * Exp(-m(t)/a)    (1)

The above representation clearly shows the negative exponential decrease of the failure intensity function. Parameters 'a' and 'b' are constants. In reality, the detection rate and the expected number of errors could increase or decrease. A decreasing factor is already present in the equation, so the need is for another factor that provides an increase. This new factor could be in the form of another parameter, which would increase the complexity of the equation. But since the aim is not to increase the number of parameters in the equation, the method used in the Duane model is applied. The mean value function of the Duane model, also called the power-law model [1], is given below:

m(t) = a * t^b

So the equation for the new model can be taken as:

d m(t)/dt = λ(t) = a * t^x * Exp(-m(t)/a), where x = b^2

i.e., d m(t) / Exp(-m(t)/a) = a * t^x dt

Integrating both sides, we have

∫ Exp(m(t)/a) d m(t) = a ∫ t^x dt + c, where c is a constant

i.e., a * Exp(m(t)/a) = a * (t^(x+1)/(x+1)) + c

When t = 0, c = a, since m(0) = 0. Therefore

a * Exp(m(t)/a) = a * (t^(x+1)/(x+1)) + a

Simplifying the above, we get the mean value function of the new VPV model as follows (2):

m(t) = a * Ln( t^(x+1)/(x+1) + 1 ), where x = b^2

3.2 Model Simplicity
The new model of Equation 2 is a simple one with two parameters, so the need is only to estimate the values of these two parameters. Generally, when the number of parameters in a model increases, estimation of the parameter values takes more time and effort [2]. Many of the models that have been designed to compensate for factors such as Imperfect Debugging (viz. the K-G Imperfect Debugging Model, the Yamada Imperfect Debugging Model, etc.), Learning (the K-G Learning Model, etc.) and Testing Effort (the Rayleigh Model, the Yamada Exponential Model, etc.) have three or four parameters. These extra parameters help in fine-tuning the expected number of failures or the detection rate or both. But the new VPV model uses its basic two parameters to adjust itself to produce results similar to or better than those of the three parameter models.
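As a small numerical illustration of the two-parameter model above, the following Python sketch evaluates the VPV mean value function (Equation 2) and its failure intensity (Equation 1 with the Duane-style factor), alongside the base LPN model; the parameter values are purely illustrative and are not estimates from any of the paper's data sets.

```python
import math

def vpv_mean_value(t, a, b):
    """VPV mean value function, Equation 2: m(t) = a * ln(t^(x+1)/(x+1) + 1), x = b^2."""
    x = b ** 2
    return a * math.log(t ** (x + 1) / (x + 1) + 1)

def vpv_failure_intensity(t, a, b):
    """Failure intensity lambda(t) = a * t^x * exp(-m(t)/a), with x = b^2."""
    x = b ** 2
    return a * t ** x * math.exp(-vpv_mean_value(t, a, b) / a)

def lpn_mean_value(t, a, b):
    """Base LPN model for comparison: m(t) = a * ln(1 + b * t)."""
    return a * math.log(1 + b * t)

# Illustrative parameter values only; real values come from MLE on a failure data set.
for t in (1, 5, 10, 20):
    print(t,
          round(vpv_mean_value(t, a=30.0, b=0.5), 2),
          round(lpn_mean_value(t, a=30.0, b=0.5), 2))
```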
3.3 Model Capability
The capability of a model can be
assessed by validating its ability to
make failure predictions, specifically
during the software development stage.
For the new VPV model, a detailed
analysis of its prediction capability has
been performed. The results were
found to be good. The specific details
are provided in Section 4.
3.4 Model Applicability
Applicability depends on a model’s
effectiveness across different development environments, different software
products and sizes, different operational environments and different life
cycle methods. For the VPV model, its
applicability was measured considering the following:
- Failure Data Sets were taken from different time periods. This accounts for the different life cycle methods used during those periods.
- The software products from which the failure data was taken were also different, as were their sizes.
- Since the software products from which the failure data sets were taken were different, their development environments and operational environments were also different.
The specific details of the data used to measure the VPV model's applicability are given in Section 4.
4 VPV Model Analysis
4.1 Introduction
Model validation is accomplished
by comparing the predicted failure values of the VPV model with that of the
observed failures. A study is also made
to compare its relative performance
with other SRGM’s. The approach consists of the following steps:
Failure Data Set Compilation
- Three (published) Software Failure
Data Sets are identified [2][11][12]
Parameter Estimation
- The parameters for all Models are
estimated using 70% of each of the failure data sets
VPV Model Validation
- Remaining 30% of each failure data
set is used for failure prediction validation
- Failure prediction analysis with respect to 6 time units of the observed failures (for Data Set 1, 30% of the failure data amounts to 6 time units; this has been uniformly taken for the remaining two data sets).
- Failure prediction analysis in comparison with the base LPN model
- Failure prediction analysis in comparison with three parameter models
· K-G Imperfect Debugging Model
(KGM)
· Yamada Imperfect Debugging Model
(YIM)
· Kapur-Younes Learning Model
(KYL)
· Inflection S-Shaped Model (INF)
- Failure prediction analysis for a
longer term prediction (18 time units).
4.2 Failure Data Set Compilation
The failure data sets used for validating the performance of the VPV
model have been taken from published
data. Three failure data sets were identified on the following basis:
Limited number of failure data as
well as large number of failure data
- Data set 1 has 20 failure data observations [2]
- Data set 2 has 59 failure data observations [11]
- Data set 3 has 109 failure data observations [12]
Failure data from different periods
- Data set 2 is from the 1980s
- Data sets 1 and 3 are from the 1990s
Failure data from different software
products and sizes
- Data set 1 is from a Tandem Computers project (Release #1) [2]
- Data set 3 is from a real time control
project having 870 Kilo steps of Fortran program and a Middle level Language [12]
4.3 Parameter Estimation
The model parameters are estimated using a well applied methodology, viz., Maximum Likelihood Estimation (MLE) [5][8][9]. The likelihood function for a given set of
(z1, ..., zn) failures in different time intervals is given by (3). Taking the natural logarithm of Equation 3, we get (4), where n is the number of failures observed and zi is the actual cumulative failure data collected up to time ti.
The mean value function (Equation 2) of the VPV model is substituted in the above equation (Equation 4). The substituted function is then differentiated with respect to the parameters (a and b), and equated to zero. These equations are then solved to get the specific values of the parameters 'a' and 'b'. The mean value functions of the other models are given in Appendix 1. The Mathcad software was used to calculate the parameter values. The parameter values computed for all models are given in Tables 1 to 6.
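Since Equations 3 and 4 are not reproduced in this transcription, the sketch below uses the standard grouped-data NHPP log-likelihood (failure increments treated as Poisson with mean m(ti) - m(ti-1)) and a generic numerical optimiser to estimate a and b; the failure counts are illustrative and are not one of the published data sets.

```python
import numpy as np
from scipy.optimize import minimize

def vpv_m(t, a, b):
    x = b ** 2
    return a * np.log(t ** (x + 1) / (x + 1) + 1)

def neg_log_likelihood(params, times, cum_failures):
    """Negative grouped-data NHPP log-likelihood for the VPV mean value function."""
    a, b = params
    if a <= 0:
        return np.inf
    m = vpv_m(np.asarray(times, dtype=float), a, b)
    dm = np.diff(np.concatenate(([0.0], m)))   # expected failures per interval
    dz = np.diff(np.concatenate(([0.0], np.asarray(cum_failures, dtype=float))))
    if np.any(dm <= 0):
        return np.inf
    return -np.sum(dz * np.log(dm) - dm)       # constant log(dz!) term omitted

# Illustrative cumulative failure counts per unit of testing time.
times = list(range(1, 15))
cum_failures = [3, 7, 12, 18, 22, 27, 30, 34, 36, 39, 41, 43, 44, 46]
result = minimize(neg_log_likelihood, x0=[50.0, 0.5],
                  args=(times, cum_failures), method="Nelder-Mead")
a_hat, b_hat = result.x
print("estimated a, b:", round(a_hat, 2), round(b_hat, 3))
```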
4.4 VPV Model Validation
The aim is to analyze the prediction capability of the VPV model by
considering its performance against the
observed failures. The aim is to also
see its relative performance with respect to its base model (LPN) as well
as with other well known three parameter models.
4.4.1 Failure Prediction Analysis
(for 6 Time Units)
The prediction analysis has been
performed by calculating the failure
prediction deviation of VPV and other
models, from actual failure data. This
prediction deviation is calculated as
follows:
Prediction Variation = (Estimated
Failure Data – Actual Failure Data)/
Actual Failure Data
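A minimal Python sketch of this deviation calculation follows; the estimated and actual values are placeholders, not data from the paper.

```python
def prediction_deviation(estimated, actual):
    """(Estimated - Actual) / Actual for each time unit; 0 means a perfect match."""
    return [(e - a) / a for e, a in zip(estimated, actual)]

# Illustrative values only: predicted vs. observed cumulative failures over 6 time units.
estimated = [48.2, 50.1, 51.6, 52.8, 53.9, 54.8]
actual = [47.0, 50.0, 53.0, 54.0, 55.0, 57.0]
print([round(d, 3) for d in prediction_deviation(estimated, actual)])
```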
These prediction variations are plotted in the form of a graph. The x-axis shows the time while the y-axis shows the prediction deviation from the actual. The closer the generated curve is to zero, the closer the prediction is to the actual.
To avoid cluttering in the graph
presentation, the performance of all the
six models have been split into two,
for each failure data set. Figures 1, 3
and 5 will show the VPV model's performance against the LPN model, the
YIM model and the KGM model for
failure data set 1 (DS1), failure data
set 2 (DS2) and failure data set 3 (DS3),
respectively. Figures 2, 4 and 6 will
show the VPV model's performance
against the INF model and the KYL
model for failure data set 1(DS1), failure data set 2 (DS2) and failure data
set 3 (DS3), respectively.
From the above figures, it is clearly seen that the VPV model has good failure prediction capability and performs better than the three parameter models.
The VPV model is also clearly better than the LPN model indicating the
success of the adjustment factor.
Table 7 shows the deviation percentage across all models.
4.4.2 Failure Prediction Analysis
(for 18 Time Units)
Here the aim is to analyze the model's performance across a longer prediction duration (18 time units). In this
case, only data sets 2 and 3 are taken
as they have sufficient observed data
for validation. As before, the prediction deviations are shown below in the
form of a graph.
The graphs in Figures 7-10 also
show that the new model performs well
across longer term failure predictions.
The Inflection S Shape model is also
seen to perform well for the duration
considered. Figures 7 and 9 will show
the VPV model's performance against
the LPN model, the YIM model and
the KGM model for failure data set 2
(DS2) and failure data set 3 (DS3), respectively. Figures 8 and 10 will show
the VPV model's performance against
the INF model and the KYL model for
failure data set 2 (DS2) and failure data
set 3 (DS3), respectively.
5 Conclusion
In this paper, the newly designed
VPV model shows good software failure prediction capability for all the
three failure data sets processed. Pa-
rameter estimation was also found to be easy, unlike the three parameter models which needed a lot more effort. Also, when compared to the other failure prediction growth models, the VPV model performed better across all data sets, irrespective of the amount of failure data available. Hence, this
model could be used for failure predictions during the testing phase of
software development to help project
managers monitor the testing progress.
Further analysis of the VPV model will have to be done, on more failure data sets, to further validate its prediction performance.
Appendix 1: Mean Value Functions of The Models Compared
References
[1] M. Xie. "Software Reliability Models
Past, Present & Future", published in
the book 'Recent Advances in Reliability Theory', released at the second International Conference on Mathematical Methods in Reliability held in
Bordeaux, France, pp. 325-340, 2000.
[2] H. Pham, L. Nordmann, and X Zhang.
"A General Imperfect-Software-Debugging Model with S-Shaped FaultDetection rate", IEEE Transaction on
Reliability Vol. 48, No.2, 1999.
[3] Kapur and Garg. "Optimal software release policies for software reliability
growth models under imperfect debugging", RAIRO:Operations Research, Vol 24, pp. 295-305, 1990.
[4] J.D Musa and K. Okumoto. "A logarithmic Poisson execution time model
for software reliability measurement",
Proceedings of the 7th International
Conference on software engineering,
pp. 230-238, 1984.
[5] M. Obha. "Software Reliability Analysis Models", IBM. J. Res. Develop.
Vol. 28, No.4, pp. 428-443, 1984.
[6] A.L Goel and Kune-Zang Yang. "Software Reliability and Readiness Assessment based on NHPP", Advances
in Computers, Vol 45, pp. 235, 1997.
[7] D.R. Prince Williams, P. Vivekanandan.
"Truncated Software Reliability
Growth Model, Korean Journal of
Computational and Applied Mathematics", Vol 9(2) pp. 591-599, 2002.
[8] N. Kareer, P.K Kapur, and P.S. Grover.
"An S-Shaped Software Reliability
Growth Model with two types of errors", Microelectron Reliab, Vol 30,
No. 6, pp. 1085-1090. 1990.
[9] T. Lynch, H. Pham, and W. Kuo.
"Modeling Software-Reliability with
Multiple Failure-Types and Imperfect
Debugging", Proceedings Annual Reliability and Maintainability Symposium, pp. 235-240, 1994.
[10] M. Xie and M. Zhao. "On some Reliability Growth Models with simple
graphical interpretations", Microelectronics and Reliability, 33, pp. 149-167, 1993.
[11] P.K Kapur and S. Younes. "Modeling
an Imperfect Debugging Phenomenon
in
Software
Reliability",
Microelectron Reliab Vol 36, No. 5,
pp. 645-650, 1996.
[12] Y.Tohma, H Yamano, M. Obha, and R
Jacoby. "Parametric Estimation of the
Hyper-Geometric Distribution Model
for Real Test/Debug", Data Proceedings, 1991, International Symposium
on Software Reliability Engineering,
pp. 28-34, 1991.
News & Events
Proposal of Directive on Software Patents Rejected
by The European Parliament
The long and heated debate about
the implementation in Europe of a
USA-like software patent model came
apparently to an end with the July 6
vote of the European Parliament
against the Proposal of Directive put
forward by the European Commission.
Let’s remark that, motivated by the
impact of software patents on both
European Information Technology industry and professionals, CEPIS set up
in 2004 a Working Group on this matter, led by Juan Antonio Esteban (ATI,
Spain). The discussion paper produced
by this group is available at <http://
www.ati.es/DOCS/>.
We publish the reactions from FFII
(Foundation for a Free Information
Infrastructure), EPO (European Patent
Office) and EICTA (European Information & Communications Technology Industry Association), after the
voting of the Europarliament.
FFII: European Parliament says No to software patents
(Press release issued by FFII, Foundation for a Free Information Infrastructure, <http://www.ffii.org>)
Strasbourg, 6 July 2005 — The
European Parliament today decided by
a large majority (729 members (of
which 689 signed that day’s attendance
register), 680 votes, 648 in favour, 14
against, 18 abstaining) to reject the directive "on the patentability of computer implemented inventions", also
known as the software patent directive.
This rejection was the logical answer
to the Commission’s refusal to restart
the legislative process in February and
the Council’s reluctance to take the will
of the European Parliament and national parliaments into account. The
FFII congratulates the European Parliament on its clear "No" to bad legislative proposals and procedures.
This is a great victory for those who
have campaigned to ensure that European innovation and competitiveness
is protected from monopolisation of
software functionalities and business
methods. It marks the end of an attempt
by the European Commission and governmental patent officials to impose
detrimental and legally questionable
practises of the European Patent Office (EPO) on the member states. However, the problems created by these
practices remain unsolved. FFII believes that the Parliament’s work, in
particular the 21 cross-party compromise amendments, can provide a good
basis on which future solutions, both
at the national and European level, can
build.
Rejection provides breathing space
for new initiatives based on all the
knowledge gained during the last five
years. All institutions are now fully
aware of the concerns of all
stakeholders. However, the fact that the
Council Common Position needs 21
amendments in order to be transformed
into a coherent piece of legislation indicates that the text is simply not ready
to enter the Conciliation between Parliament, Commission and Council. We
hope the Commission and Council will
at least respond to the concerns raised
by Parliament the next time, in order
to avoid this sort of backlash in the
future.
Jonas Maebe, FFII Board Member,
comments on the outcome of today’s
vote: "This result clearly shows that thorough analysis, genuinely concerned citizens and factual information have more
impact than free ice-cream, boatloads of
hired lobbyists and outsourcing threats.
I hope this turn of events can give people new faith in the European decision
making process. I also hope that it will
encourage the Council and Commission
to model after the European Parliament
in terms of transparency and the ability
of stakeholders to participate in the decision-making process irrespective of
their size."
Hartmut Pilch, president of FFII,
explains why FFII supported the move
for rejection in its voting recommendations: “In recent days, the big holders of EPO-granted software patents
and their MEPs, who had previously
been campaigning for the Council’s
'Common Position', joined the call for
rejection of the directive because it
became clear that the 21 cross-party
amendments championed by Roithová,
Buzek, Rocard, Duff and others were
very likely to be adopted by the Parliament. It was well noticeable that
support for almost all of these amendments was becoming the mainstream
opinion in all political groups. Yet there
would not have been much of a point
in such a vote. We rather agree to the
assessment of the situation as given by
Othmar Karas MEP in the Plenary
yesterday: a No was the only logical
answer to the unconstructive attitude
and legally questionable maneuvers of
the Commission and Council, by which
this so-called Common Position had
come about in the first place.”
The FFII wishes to thank all those
people who have taken the time to contact their representatives. We also thank
the numerous volunteers who have so
generously given their time and energy.
This is your victory as well as the Parliament’s.
EPO: European Patent Office continues to advocate harmonisation
in the field of CII patents
(Press release issued by EPO, European Patent Office, <http://www.
european-patent-office.org/>)
Munich/Strasbourg, 6 July 2005 The European Patent Office (EPO) has
followed with interest the vote of the
European Parliament today and has
taken note of the decision of the European Parliament not to accept the Directive on the patentability of computer-implemented inventions (CII) according to the Common Position of the
Council. The proposed Directive is
therefore deemed not to have been
adopted. "The objective of the directive would have been to harmonize the
understanding of what constitutes a
patentable invention in the field of
CII", explained the President of the
EPO, Professor Alain Pompidou.
The EPO carries out a centralised
patent granting procedure for the 31
member states of the European Patent
Organisation. "Our Organisation was
founded by almost the same countries as
those which founded the European Union, and in the same spirit. The purpose
behind the creation of the EPO was to
make the patenting process in Europe
more efficient by applying a single procedure on the basis of the European Patent Convention (EPC). In its practice, the
EPO follows strictly the provisions of the
Convention, which has been ratified by
all member states of the Organisation",
President Pompidou explained.
Under the EPC a well-defined practice on granting patents in the field of
CII has been established: "The EPC
provides the general legal basis for the
grant of European patents, whereas the
objective of the directive would have
been to harmonise the EU member
states’ rules on CII and the relevant
provisions of the EPC. The EPC also
governs our work in the field of CII,
together with the case law of our judiciary, the Boards of Appeal of the
EPO", Mr Pompidou said.
As with all inventions, CII are only
patentable if they have technical character, are new and involve an inventive
technical contribution to the prior art.
Moreover, the EPO does not grant "software patents": computer programs
claimed as such, algorithms or computer-implemented business methods that
make no technical contribution are not
considered patentable inventions under
the EPC. In this respect, the practice of
the EPO differs significantly from that
of the United States Patent & Trademark
Office. For more information please contact: European Patent Office, Media
Relations Department, 80298 Munich,
[email protected]
EICTA: Europe’s High Tech Industry Welcomes
European Parliament Decision
(Press release issued by EICTA,
European Information & Communications Technology Industry Association,
<http://www.eicta.org/>)
06 July 2005
EICTA, the industry body representing Europe’s large and small high
tech companies, today welcomed the
European Parliament decision on the
CII Patents Directive. This decision
will ensure that all high tech companies in Europe continue to benefit from
a high level of patent protection.
Commenting on the outcome of today’s vote, Mark MacGann, Director
General of EICTA, said: “This is a wise
decision that has helped industry to avoid
legislation that could have narrowed the
scope of patent legislation in Europe.
Parliament has today voted for the
status quo, which preserves the current
system that has served well the interests of our 10, 000 member companies,
both large and small.
EICTA will continue to make the
case throughout Europe for the contribution that CII patents make to re-
search, innovation and to overall European competitiveness.”
All the European institutions and
industry have worked hard and constructively on the issue of CII patents
for some time. Europe’s high tech industry will support the efforts of the
European institutions to find broader
improvements to the European patent
system that will particularly benefit the
interests of smaller companies. For
further information: Mark MacGann,
EICTA: +32 473 496 388; Richard
Jacques, Brunswick: +44 7974 982 557
UPENET
This section includes articles published by the journals that make part of UPENET.
For further information see <http://www.upgrade-cepis.org/pages/upenet.html>.
Informatics Law
Security, Surveillance and Monitoring of
Electronic Communications at The Workplace
Olga Georgiades-Van der Pol
© Pliroforiki 2005
This paper was first published, in English, by Pliroforiki (issue no. 11, June 2005, pp. 10-16). Pliroforiki, (“Informatics” in Greek), a founding member
of UPENET (Upgrade European Network), is a journal published, in Greek or English, by the Cyprus CEPIS society CCS (Cyprus Computer Society,
<http://www.ccs.org.cy/about/>)
(Keywords and section numbering added by the Editor of UPGRADE.)
This article, which is an extract from the author’s book "PRIVACY: Processing of Personal Data, Obligations of Companies, Surveillance of Employees, Privacy on the Internet", has as its main objective to offer a first approach to the security
obligations of companies in relation to the personal information they hold about their employees. It also gives an overview
of the rights and obligations of the company when monitoring its employees for the purpose of ensuring the security of its
systems.
Keywords: Electronic Communications, Monitoring, Privacy, Security,
Surveillance, Workers’s Rights,
Workplace.
1 Introduction
The vast majority of the population
will either permanently or at various
times find itself in an employment relationship in the public or private sector. The availability and use of many new technical monitoring methods available to the company is raising new issues such as the extent of monitoring of workers' communications (e-mail and telephone calls), workplace supervision, workers' data transfer to third parties, the use of biometric methods for controlling access in the workplace, etc.

1 S.10 of the Processing of Personal Data (Protection of the Person) Law of 2001, Law N.138(I)/2001, as amended. (Available at <http://www.moh.gov.cy/moh/moh.nsf/0/9267FFD2810B177EC2256D49003ED1FC?OpenDocument>.)
Olga Georgiades-Van der Pol is the holder of a Bachelor of Laws (LLB) in English Law and French Law from the University of East Anglia, UK, and of a Masters' in Laws (LLM) in European Law with Telecommunications Law from the University College London (UCL), UK. She is also a holder of a Diploma of French higher legal studies from the University of Robert Schumann, Strasbourg, France. Olga has trained in the European Commission, at the Information Society Directorate. Since she has been admitted as a Lawyer, she has worked as an Advocate at Lellos P. Demetriades Law Office in Nicosia, Cyprus, heading the European and I.T. Law department, where she specialises in European Law, Internet Law, Telecommunications Law and Competition Law. She is the author of various books and reports concentrating on Privacy, Processing of Personal Data, Surveillance of Employees, Privacy on the Internet & Obligations of Companies, Financial Assistance for Cyprus under EU programs and Competition Law in Cyprus, especially in the field of Telecommunications. <[email protected]>

2 Security Issues in Keeping Employee Data

2.1 Main Principles
By virtue of the Cyprus Data Protection Law1, the employer must take the appropriate organizational and technical measures for the security of workers' personal data and their protection against accidental or unlawful destruction, accidental loss, alteration, unauthorised dissemination or access and any other form of unlawful processing. Such measures must ensure a level of security which is proportionate to the risks involved in the
processing and the nature of the data
processed. As a result, employee personal data must remain safe from the
curiosity of other workers or third parties not employed by the company.
Within this context, employers must
use appropriate technological means
for preventing such unauthorised access or disclosure, allowing in any case
the identification of the staff accessing the files.
Where an external data processor
is used by the company, there must be
a contract between him and the employer, providing security guarantees
and ensuring that the processor acts
only according to the employer’s instructions. The European Union2 recommends that the following security
measures be used at the workplace:
- Password/identification systems for access to computerised employment records;
- Login and tracing of access and disclosures;
- Backup copies;
- Encryption of messages, in particular when the data is transferred outside the company.
2.2 Code of Practice of The
International Labour Office on The
Protection of Workers’ Personal
Data
In the field of employment, the
Data Protection Commissioner may
take into account the Code of Practice
of the International Labour Office on
the protection of workers’ personal
data which establishes the following
general principles:
1. Personal data should be processed lawfully and fairly, and only for reasons directly relevant to the employment of the worker.
2. Personal data should, in principle, be used only for the purposes for which they were originally collected.
3. If personal data are to be processed for purposes other than those for which they were collected, the employer should ensure that they are not used in a manner incompatible with the original purpose, and should take the necessary measures to avoid any misinterpretation caused by a change of context.
4. Personal data collected in connection with technical or organisational measures to ensure the security and proper operation of automated information systems should not be used to control the behaviour of workers.
5. Decisions concerning a worker should not be based solely on the automated processing of that worker's personal data.

3 Security Issues in Monitoring Employee Electronic Communications

3.1 Main Principles
This issue concerns the question of what the acceptable limits are on a company's monitoring of employees' e-mail and Internet use, and what constitutes legitimate monitoring activities.
The basic principle is that workers
do not abandon their right to privacy
and data protection every morning at
the doors of the workplace3. They do
have a legitimate expectation of a certain degree of privacy in the workplace
as they develop a significant part of
their relationships with other human
beings within the workplace. Their fundamental right of privacy is safeguarded by Article 15 of the Constitution of the Republic of Cyprus, by Article 8 of the Convention for the Pro-
tection of Human Rights and Fundamental Freedoms and by other European and international legislative instruments.
While new technologies constitute
a positive development of the resources
available to employers, tools of electronic surveillance present the possibility of being used in such a way so
as to intrude upon the fundamental
rights and freedoms of workers. It
should not be forgotten that with the
coming of the information technologies it is vital that workers should enjoy the same rights whether they work
on-line or off-line.
However, companies should not
panic. While workers have a right to a
certain degree of privacy in the workplace, this right must be balanced with
other legitimate rights and interests of
the company as employer, in particular:
- The need to ensure the security of the system.
- The employer's right to run and control the functioning of his business efficiently.
- The right to protect his legitimate interests from the liability or the harm that workers' actions may create, for example the employer's liability for the action of their workers, i.e. from criminal activities.
- The need of the employer to protect his business from significant threats, such as to prevent transmission of confidential information to a competitor.
These rights and interests constitute
legitimate grounds that may justify
appropriate measures to limit the worker’s right to privacy.4
Nevertheless, balancing different
rights and interests requires taking a
number of principles into account, in
particular proportionality. The simple
fact that a monitoring activity or surveillance is considered convenient to serve the employer's interest would not solely justify any intrusion into the worker's privacy. In this respect, where the objective identified can be achieved in a less intrusive way, the employer should consider this option. For example, the employer should avoid systems that monitor the worker automatically and continuously.

2 Article 29 Data Protection Working Party, Opinion 8/2001 on the processing of personal data in the employment context, 5062/01/EN/Final, WP 48, 13 September 2001. (The Article 29 Working Party is an advisory group composed of representatives of the data protection authorities of the European Union member States.)
3 Article 29 Data Protection Working Party, Working document on the surveillance of electronic communications in the workplace, 5401/01/EN/Final, WP 55.
4 Article 29 Data Protection Working Party, Working document on the surveillance of electronic communications in the workplace, 5401/01/EN/Final, WP 55, 29 May 2002.
3.2 The Constitution of The
Republic of Cyprus
The Right of Privacy is safeguarded
by Article 15.1 of the Constitution of
Cyprus, which reads:
"1. Every person has the right to
respect for his private and family life.
2. There shall be no interference with
the exercise of this right except such as
is in accordance with the law and is necessary only in the interests of the security of the Republic or the constitutional
order or the public safety or the public
order or the public health or the public
morals or for the protection of the rights
and liberties guaranteed by this Constitution to any person."
3.3 The European Convention for
The Protection of Human Rights
Article 15.1 of the Constitution of Cyprus is modelled on Article 8 of the European Convention on Human Rights, which was ratified by the European Convention on Human Rights (Ratification) Law of 1962.5
Article 8 reads:
"1. Everyone has the right to respect for his private and family life, his
home and his correspondence.
2. There shall be no interference by
a public authority with the exercise of
this right except such as is in accordance with the law and is necessary in
a democratic society in the interests of
national security, public safety or the
economic well-being of the country, for
the prevention of disorder or crime, for
the protection of health or morals, or
for the protection of the rights and
freedoms of others."
5 Cyprus Law No. 39/1962.
3.4 Case Law of The European Court of Human Rights
The position of the European Court
of Human Rights is that the protection
of "private life" enshrined in Article 8
does not exclude the professional life
as a worker and is not limited to life
within home.
In the case of Niemietz v. Germany,6 which concerned the search by a government authority of the complainant's office, the Court stated that respect for
private life must also comprise to a
certain degree the right to establish and
develop relationships with other human beings. There appears, furthermore, to be no reason of principle why
this understanding of the notion of "private life" should be taken to exclude
activities of a professional or business
nature since it is, after all, in the course
of their working lives that the majority of people have a significant, if not
the greatest, opportunity of developing relationships with the outside
world. This view is supported by the fact that it is not always possible to distinguish clearly which of an individual's activities form part of his professional or business life and which do not.
3.5 Code of Practice of The International Labour Office on The Protection of Workers' Personal Data
The Code of Practice of the International Labour Office (ILO) establishes the following general principles with regard to the monitoring of employees:
"1. If workers are monitored, they should be informed in advance of the reasons for monitoring, the time schedule, the method and techniques used and the data to be collected, i.e. by establishing an e-policy.
2. Secret monitoring should be permitted only:
(a) if it is in conformity with national legislation, i.e. in accordance with section 5 of the Data Protection Law, and it is necessary for safeguarding the legitimate interests pursued by the company, on condition that such interests override the rights, interests and fundamental freedoms of the employee (one such legitimate purpose is safeguarding the security of the company); or
(b) if there is suspicion on reasonable grounds of criminal activity or other serious wrongdoing by the employee.
3. Continuous monitoring should be permitted only if required for health and safety or the protection of property, e.g. from theft.
4. Workers' representatives, where they exist, and in conformity with national law and practice, should be informed and consulted:
(a) concerning the introduction or modification of automated systems that process workers' personal data,
(b) before the introduction of any electronic monitoring of workers' behaviour in the workplace,
(c) about the purpose, contents and the manner of administering and interpreting any questionnaires and tests concerning the personal data of the workers."
3.6 Position of The Article 29 Data Protection Working Party of The European Community
According to the Article 29 Data
Protection Working Party of the European Community, prevention should
be more important than detection.
In other words, the interest of the employer is better served in preventing
Internet misuse rather than in detecting such misuse. In this context, technological solutions are particularly useful. A ban on personal use of the
Internet by employees does not appear
to be reasonable and fails to reflect the
degree to which the Internet can assist
employees in their daily lives.
6 23 November 1992, Series A n° 251/B, par. 29. Available at <http://www.worldlii.org/eu/cases/ECHR/1992/80.html>.
3.7 Obligation to Inform The Worker - Transparency
An employer must be clear and
open about his activities and should not
engage in covert e-mail monitoring
except where a specific criminal activity or security breach has been identified. The Data Protection Commissioner's authorisation should be requested for this.
The employer has to provide his
workers with a readily accessible, clear
and accurate statement of his policy
with regard to e-mail and Internet
monitoring. Elements of this information should be:
· The e-mail/Internet policy within the company, describing in detail the extent to which communication facilities owned by the company may be used for personal/private communications by the employees (e.g. limitations on time and duration of use).
· The reasons and purposes for which surveillance, if any, is being carried out. Where the employer has allowed the use of the company's communication facilities for express private purposes, such private communications may under very limited circumstances be subject to surveillance, e.g. to ensure the security of the information system and virus checking.
· The details of the surveillance measures taken, i.e. by whom, for what purpose, how and when.
· Details of any enforcement procedures, outlining how and when workers will be notified of breaches of internal policies and be given the opportunity to respond to any such claims against them.
It is essential that the employer also inform the worker of:
· The presence, use and purpose of any detection equipment and/or apparatus activated with regard to his/her working station; and
· Any misuse of the electronic communications detected (e-mail or the Internet), unless important reasons justify the continuation of the secret surveillance.
The employer should immediately
inform the worker of any misuse of the
electronic communications detected,
unless important reasons justify the
continuation of the surveillance.
Prompt information can be easily delivered by software such as warning
windows, which pop up and alert the
worker that the system has detected
and/or has taken steps to prevent an
unauthorised use of the network.
3.8 Necessity of Monitoring
The employer must check if any
form of monitoring is absolutely necessary for a specified purpose before
proceeding to engage in any such activity. Traditional methods of supervision that are less intrusive for the privacy of individuals should be preferred
before engaging in any monitoring of
electronic communications.
It would only be in exceptional circumstances that the monitoring of a
worker's e-mail or Internet use would be
considered necessary. For instance,
monitoring of a worker’s e-mail may
become necessary in order to obtain
confirmation or proof of certain actions
on his part.
Such actions would include criminal activity on the part of the worker
insofar as it is necessary for the employer to defend his own interests, for
example, where he is vicariously liable
for the actions of the worker. These
activities would also include detection
of viruses and in general terms any activity carried out by the employer to
guarantee the security of the system.
It should be mentioned that opening an employee’s e-mail may also be
necessary for reasons other than monitoring or surveillance, for example in
order to maintain correspondence in
case the employee is out of office (e.g.
sickness or holidays) and correspondence cannot be guaranteed otherwise
(e.g. via auto reply or automatic forwarding).
3.9 Proportionality
The monitoring of e-mails should,
if possible, be limited to traffic data on
the participants and time of a communication rather than the contents of
communications.
If access to the e-mail’s content is
absolutely necessary, account should
be taken of the privacy of those outside the organisation receiving them as
well as those inside. The employer, for
instance, cannot obtain the consent of
those outside the company sending emails to his workers. The employer
should make reasonable efforts to inform those outside the organisation of
the existence of monitoring activities
to the extent that people outside the
organisation could be affected by them.
A practical example could be the insertion of warning notices regarding
the existence of the monitoring systems, which may be added to all outbound messages from the company (email notices).
Since technology gives the employer ample opportunity to assess the use of e-mail by his workers, by checking, for example, the number of e-mails sent or received or the format of any attachments, the actual opening of e-mails would be considered disproportionate.
Technology can further be used to
ensure that the measures taken by an
employer to safeguard the Internet access he provides to his workers from
abuse are proportionate by utilising
blocking, as opposed to monitoring,
mechanisms:
(a) In the case of the Internet, companies could use, for example, software
tools that can be configured in order
to block any connection to predetermined categories of websites. The employer can, after consultation of the
aggregated list of websites visited by
his employees, decide to add some
websites to the list of those already
blocked (possibly after notice to the
employees that connection to such a site will be blocked, unless the need to connect to that site is demonstrated by an employee).
(b) In the case of e-mail, companies
could use, for example, an automatic
redirect facility to an isolated server,
for all e-mails exceeding a certain size.
The intended recipient is automatically
informed that a suspect e-mail has been
redirected to that server and can be
consulted there.
3.10 Two E-mail Accounts / Web-mail
The Article 29 Data Protection Working Party recommends that, as a pragmatic solution to the problem at issue and for the purpose of reducing the possibility of employers invading their workers' privacy, employers should adopt a policy providing workers with two e-mail accounts or with web-mail:7
· one account for professional purposes only, in which monitoring within the limits of this working document would be possible;
· another account for purely private purposes only (or authorisation for the use of web-mail), which would be subject only to security measures and would be checked for abuse in exceptional cases.
7 Web-mail is a web-based e-mail system which provides access to e-mail held on any POP or IMAP server and is generally protected by user name and password.
If an employer adopts such a policy
then it would be possible, in specific
cases where there is a serious suspicion about the behaviour of a worker,
to monitor the extent to which that
worker is using their PC for personal
purposes by noting the time spent in
web-mail accounts. In this way the
employer's interests would be served without any possibility of workers' personal data being disclosed. Furthermore, such a policy may be of benefit to workers, as it would provide certainty for them as to the level of privacy they can expect, which may be lacking in more complex and confusing codes of conduct.
3.11 Company Internet Policies
The employer must set out clearly to workers the conditions on which private use of the Internet is permitted, as well as specifying material which cannot be viewed or copied. These conditions and limitations have to be explained to the workers.
In addition, workers need to be informed about the systems implemented both to prevent access to certain sites and to detect misuse.
The extent of such monitoring
should be specified, for instance,
whether such monitoring relates to individuals or particular sections of the
company or whether the content of the
sites visited is viewed or recorded by
the employer in particular circumstances.
Furthermore, the policy should
specify what use, if any, will be made
of any data collected in relation to who
visited what sites.
Finally, employers should inform workers about the involvement of their representatives, both in the implementation of this policy and in the investigation of alleged breaches.
4 Conclusions - Recommendations
Surveillance of workers is not a
new issue. In the past, companies may
have gone about monitoring their employees without giving much thought
to the legal implications, mainly because little legislation existed regulating such monitoring.
However, this is not the case today.
With the enactment of the Data Protection Law in 2001 for the purpose of
harmonising Cypriot legislation with
European Union Directives on the protection of individuals with regard to the
processing of personal data, specific
rules have been imposed on companies
when monitoring e-mail communications and surveillance of Internet access of employees.
Companies should not be alarmed
by the rules set out in the law and described in this article. On the contrary,
these rules should serve as a guideline
for the legitimate surveillance of their
employees and for avoiding any legal
liability and fines.
Companies do have a right to monitor their employees, but this right must be exercised with due care and for specific purposes, e.g. ensuring the security of their systems, running and controlling the functioning of their business efficiently, protecting their business from significant threats such as the transmission of confidential information to a competitor, or avoiding liability arising from their employees' criminal activities.
In order to avoid any legal problems, the author strongly recommends that companies which need to monitor their employees set up an e-policy document, setting out clearly to workers the conditions under which such monitoring or surveillance will be carried out.
UPENET
Evolutionary Computation
Evolutionary Algorithms: Concepts and Applications
Andrea G. B. Tettamanzi
© Mondo Digitale, 2005
This paper was first published, in its original Italian version, under the title “Algoritmi evolutivi: concetti e applicazioni”, by Mondo
Digitale (issue no. 3, March 2005, pp. 3-17, available at <http://www.mondodigitale.net/>). Mondo Digitale, a founding member of
UPENET, is the digital journal of the CEPIS Italian society AICA (Associazione Italiana per l’Informatica ed il Calcolo Automatico,
<http://www.aicanet.it/>.)
(Keywords added by the Editor of UPGRADE.)
Evolutionary algorithms are a family of stochastic problem-solving techniques, within the broader category of what we
might call “natural-metaphor models”, together with neural networks, ant systems, etc. They find their inspiration in
biology and, in particular, they are based on mimicking the mechanisms of what we know as “natural evolution”. During
the last twenty-five years these techniques have been applied to a large number of problems of great practical and economic importance with excellent results. This paper presents a survey of these techniques and a few sample applications.
Keywords: Evolutionary Algorithms, Evolutionary Computation, Natural-metaphor Models.
1 What Are Evolutionary Algorithms?
If we think about living beings, including humans, and their organs, their
complexity, and their perfection, we
cannot help but wonder how it was
possible for such sophisticated solutions to have evolved autonomously.
Yet there is a theory, initially proposed
by Charles Darwin and later refined by
many other natural scientists, biologists and geneticists, which provides a
satisfactory explanation for most of
these biological phenomena by studying the mechanisms which enable species to adapt to mutable and complex
environments. This theory is supported
by a considerable body of evidence and
has yet to be refuted by any experimental data. According to Darwin’s theory,
these wonderful creations are simply
the result of a purposeless evolutionary process, driven on the one hand by
randomness and on the other hand by
the law of the survival of the fittest.
Such is natural evolution.
If such a process has been capable
of producing something as sophisticated as the eye, the immune system,
and even our brain, it would seem only
logical to try and do the same by simulating the process on computers to attempt to solve complicated problems
in the real world. This is the idea behind the development of evolutionary
algorithms (see the box entitled "Some
History" for the birth and evolution of
these algorithms).
1.1 The Underlying Metaphor
Evolutionary algorithms are thus
bio-inspired computer-science techniques based on a metaphor which is
schematically outlined in Table 1. Just
as an individual in a population of organisms must adapt to its surrounding
environment to survive and reproduce,
so a candidate solution must be adapted
to solving its particular problem. The
problem is the environment in which a
solution lives within a population of
other candidate solutions. Solutions
differ from one another in terms of their
quality, i.e., their cost or merit, reflected by the evaluation of the objective function, in the same way as the
individuals of a population of organisms differ from one another in terms
of their degree of adaptation to the environment; what biologists refer to as
fitness. If natural selection allows a
population of organisms to adapt to its
surrounding environment, when applied to a population of solutions to a
problem, it should also be able to bring
about the evolution of better and better, and eventually, given enough time,
optimal solutions.
Based on this metaphor, the computational model borrows a number of
concepts and their relevant terms from
biology: every solution is coded by
means of one or more chromosomes;
the genes are the pieces of encoding
responsible for one or more traits of a
solution; the alleles are the possible
configurations a gene can take on; the
exchange of genetic material between two chromosomes is called crossover, whereas a perturbation to the code of a solution is termed mutation (see the box entitled "A Genetic Algorithm at Work" for an example).

Table 1: A Schematic Illustration of The Metaphor Underlying Evolutionary Algorithms.
EVOLUTION      PROBLEM SOLVING
Environment    Object problem
Individual     Candidate solution
Fitness        Solution quality

Andrea Tettamanzi is an Associate Professor at the Information Technology Dept. of the University of Milan, Italy. He received his M.Sc. in Computer Science in 1991, and a Ph.D. in Computational Mathematics and Operations Research in 1995. In the same year he founded Genetica S.r.l., a Milan-based company specialising in industrial applications for evolutionary algorithms and soft computing. He is active in research into evolutionary algorithms and soft computing, where he has always striven to bridge the gap between theoretical aspects and practical and applicational aspects. <[email protected]>
Although the computational model
involves drastic simplifications compared to the natural world, evolutionary algorithms have proved capable of
causing surprisingly complex and interesting structures to emerge. Given
appropriate encoding, any individual
can be the representation of a particular solution to a problem, the strategy
for a game, a plan, a picture, or even a
simple computer program.
1.2 The Ingredients of An
Evolutionary Algorithm
Now that we have introduced the concepts, let us take a look at what an evolutionary algorithm consists of in practice.
An evolutionary algorithm is a
stochastic optimisation technique that
Some History
The idea of using selection and random mutation for optimisation tasks
goes back to the fifties at least and the work of the statistician George E. P.
Box, the man who famously said "all models are wrong, but some are useful".
Box, however, did not make use of computers, though he did manage to formulate a statistical methodology that would become widely used in industry,
which he called evolutionary operation [1]. At around the same time, other
scholars conceived the idea of simulating evolution on computers: Barricelli
and Fraser used computer simulations to study the mechanisms of natural
evolution, while the bio-mathematician Hans J. Bremermann is credited as
being the first person to recognise an optimisation process in biological evolution [2].
As often happens with pioneering ideas, these early efforts met with considerable scepticism. Nevertheless, the time was evidently ripe for those ideas,
in an embryonic stage at that point, to be developed. A decisive factor behind
their development was the fact that the computational power available at that
time in major universities broke through a critical threshold, allowing evolutionary computation to be put into practice at last. What we recognise today as
the original varieties of evolutionary algorithms were invented independently
and practically simultaneously in the mid sixties by three separate research
groups. In America, Lawrence Fogel and colleagues at the University of California in San Diego laid down the foundations of evolutionary programming
[3], while at the University of Michigan in Ann Arbor John Holland proposed his
first genetic algorithms [4]. In Europe, Ingo Rechenberg and colleagues, then
students at the Technical University of Berlin, created what they called "evolution
strategies" (Evolutionsstrategien) [5]. During the following 25 years, each of these
three threads developed essentially on its own, until in 1990 there was a concerted effort to bring about their convergence. The first edition of the PPSN
(Parallel Problem Solving from Nature) conference was held that year in Dortmund. Since then, researchers interested in evolutionary computation form a
single, albeit articulated, scientific community.
proceeds in an iterative way. An evolutionary algorithm maintains a population (which in this context means a
multiset or bag, i.e., a collection of elements not necessarily all distinct from
one another) of individuals representing candidate solutions for the problem at hand (the object problem), and
makes it evolve by applying a (usually
quite small) number of stochastic operators: mutation, recombination, and
selection.
Mutation can be any operator that
randomly perturbs a solution. Recombination operators decompose two or
more distinct individuals and then
combine their constituent parts to form
a number of new individuals. Selection
creates copies of those individuals that
represent the best solutions within the
population at a rate proportional to
their fitness.
The initial population may originate from a random sampling of the
solution space or from a set of initial
solutions found by simple local search
procedures, if available, or determined
by a human expert.
Stochastic operators, applied and
composed according to the rules defining a specific evolutionary algorithm, determine a stochastic population-transforming operator. Based on
that operator, it is possible to model the
workings of an evolutionary algorithm
as a Markov chain whose states are
populations. It is possible to prove that,
given some entirely reasonable assumptions, such a stochastic process
will converge to the global optimum
of the problem [16].
When talking about evolutionary
algorithms, we often hear the phrase
implicit parallelism. This term refers
to the fact that each individual can be
thought of as a representative of a
multitude of solution schemata, i.e., of
partially specified solutions, such that,
while processing a single individual,
the evolutionary algorithm will in fact
be implicitly processing at the same
time (i.e., in parallel) all the solution
schemata of which that individual is a
representative. This concept should not
be confused with the inherent parallelism of evolutionary algorithms. This
refers to the fact that they carry out a population-based search, which means that, although for the sake of convenience they are usually expressed by means of a sequential description, they are particularly useful and easy to implement on parallel hardware.

A Genetic Algorithm at Work

We can take a close look at how a genetic algorithm works by using an example. Let us assume we have to solve a problem, called maxone, which consists of searching all binary strings of length l for the string containing the maximum number of ones. At first sight this might seem to be a trivial problem, as we know the solution beforehand: it will be the string made up entirely of ones. However, if we were to suppose that we had to make l binary choices to solve a problem, and that the quality of the solution were proportional to the number of correct choices we made, then we would have a problem of equivalent difficulty, by no means easy to solve. In this example we assume that all correct choices correspond to a one merely to make the example easier to follow. We can therefore define the fitness of a solution as the number of ones in its binary coding, set l = 10, which is a number small enough to make things manageable, and try to apply the genetic algorithm to this problem.

First of all, we have to establish the size of the population. A sensible choice to begin with might be 6 individuals. At this point, we need to generate an initial population: we will do this by tossing a fair coin 60 times (6 individuals times 10 binary digits) and writing 0 if the outcome is 'heads' and 1 if the outcome is 'tails'. The initial population thus obtained is shown in Table A. Note that the average fitness in the initial population is 5.67.

Table A: The Initial Population of The Genetic Algorithm to Solve The maxone Problem, Showing The Fitness for All Individuals.
NO.   INDIVIDUAL   FITNESS
1)    1111010101   7
2)    0111000101   5
3)    1110110101   7
4)    0100010011   4
5)    1110111101   8
6)    0100110000   3

The evolutionary cycle can now begin. To use fitness-proportionate selection, the simplest method is to simulate throwing a ball into a special roulette wheel which has as many slots as individuals in the population (6 in this case). Each slot has a width that is to the circumference of the wheel as the fitness of the corresponding individual is to the sum of the fitness of all the individuals in the population (34 in this case). Therefore, when we spin the wheel, the ball will have a 7/34 probability of coming to rest in the individual 1 slot, 5/34 of landing in the individual 2 slot, and so on. We will have to throw the ball exactly 6 times in order to put together an intermediate population of 6 strings for reproduction. Let us assume the outcomes are: 1, 3, 5, 2, 4, and 5 again. This means two copies of individual 5 and a single copy of the other individuals, with the exception of individual 6, will be used for reproduction. Individual 6 will not leave descendants. The next operator to be applied is recombination. Couples are formed, the first individual extracted with the second, the third with the fourth, and so forth. For each couple, we decide with a given probability, say 0.6, whether to perform crossover. Let us assume that we perform crossover with only the first and the last couple, with cutting points randomly chosen after the second digit and after the fifth digit respectively. For the first couple, we will have

11.11010101
11.10110101

becoming

11.10110101
11.11010101.

We observe that, since the parts to the left of the cutting point are identical, this crossover will have no effect. This contingency is more common than you might imagine, especially when, after many generations, the population is full of equally good and nearly identical individuals. For the third couple we will have instead

01000.10011
11101.11101

becoming

01000.11101
11101.10011.

All that remains is to apply mutation to the six strings resulting from recombination by deciding, with a probability of, say, 1/10 for each digit, whether to invert it. As there are 60 binary digits in total, we would expect an average of 6 mutations randomly distributed over the whole population. After applying all the genetic operators, the new population might be the one shown in Table B, where the mutated binary digits have been highlighted in bold type. In one generation, the average fitness in the population has changed from 5.67 to 6.17, an 8.8% increase. By iterating the same process again and again, very quickly we reach a point at which an individual made entirely of ones appears, the optimal solution to our problem.

Table B: The Population of The Genetic Algorithm to Solve The maxone Problem after One Generation, Showing The Fitness for All Individuals.
NO.   INDIVIDUAL   FITNESS
1)    1110100101   6
2)    1111110100   7
3)    1110101111   8
4)    0111000101   5
5)    0100011101   5
6)    1110110001   6
1.3 Genetic Algorithms
The best way to understand how
evolutionary algorithms work is to consider one of their simplest versions,
namely genetic algorithms [6]. In genetic algorithms, solutions are represented as fixed-length binary strings.
This type of representation is by far the most general though, as we shall see below, not always the most convenient; after all, any data structure, no matter how complex and articulated, will ultimately be encoded in
binary in a computer’s memory. A sequence of two symbols, 0 and 1, from
which it is possible to reconstruct a
solution, is very reminiscent of a DNA
thread made up of a sequence of four
bases, A, C, G, and T, from which it is
possible to reconstruct a living organism! In other words, we can consider a
binary string as the DNA of a solution
to the object problem.
A genetic algorithm consists of two
parts:
1. a routine that generates (randomly
or by using heuristics) the initial population;
2. an evolutionary cycle, which at each
iteration (or generation), creates a new
population by applying the genetic
operators to the previous population.
The evolutionary cycle of the genetic algorithms can be represented
using the pseudocode in Table 2.

Table 2: Pseudocode Illustrating A Typical Simple Genetic Algorithm.
generation = 0
Initialize population
while not <termination condition> do
    generation = generation + 1
    Compute the fitness of all individuals
    Selection
    Crossover(pcross)
    Mutation(pmut)
end while

Each individual is assigned a particular fitness value, which depends on the quality of the solution it represents. The first operator to be applied is selection, whose purpose is to simulate the Darwinian law of the survival of the fittest. In the original version of genetic algorithms, that law is implemented by means of what is known as fitness-proportionate selection: to create a new intermediate population of n 'parent' individuals, n independent extractions of an individual from the existing population are carried out, where the probability for each individual to be extracted is directly proportional to its
fitness. As a consequence, above-average individuals will be extracted
more than once on average, whereas
below-average individuals will face
extinction.
Once n parents are extracted as described, the individuals of the next generation will be produced by applying a
number of reproduction operators,
which may involve one parent only
(thus simulating a sort of asexual reproduction) in which case we speak of
mutation, or more than one parent, usually two (sexual reproduction), in
which case we speak of recombination.
In genetic algorithms, two reproduction operators are used: crossover and
mutation.
To apply crossover, the parent individuals are mated two by two. Then,
with a certain probability pcross, called
the "crossover rate", which is a parameter of the algorithm, each couple undergoes crossover itself. This is done
by lining up the two binary strings,
cutting them at a randomly chosen
point, and swapping the right-hand
halves, thus yielding two new individuals, which inherit part of their characters from one parent and part from the
other.
After crossover, all individuals undergo mutation, whose purpose is to
simulate the effect of random transcription errors that can happen with a very
low probability pmut every time a chromosome is duplicated. Mutation
amounts to deciding whether to invert
each binary digit, independently of the
others, with probability pmut. In other
words, every zero has probability pmut
of becoming a one and vice versa.
The evolutionary cycle, according
to how it is conceived, could go on
forever. In practice, however, one has
to decide when to halt it, based on some
user-specified termination criterion.
Examples of termination criteria are:
· a fixed number of generations or a
certain elapsed time;
· a satisfactory solution, according to
some particular criterion, has been
found;
· no improvement has taken place for
a given number of generations.
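By way of illustration only, the evolutionary cycle just described might be sketched in Python roughly as follows for the maxone problem discussed in the box. The population size, the crossover rate pcross, the mutation rate pmut and the fixed number of generations used as termination criterion are arbitrary choices, and the function names are invented for the example; the roulette-wheel routine is just one possible implementation of fitness-proportionate selection.

import random

def fitness(individual):
    # maxone: the fitness of a binary string is simply its number of ones
    return sum(individual)

def select(population):
    # fitness-proportionate ("roulette wheel") selection of one individual
    total = sum(fitness(i) for i in population)
    threshold = random.uniform(0, total)
    accumulated = 0.0
    for individual in population:
        accumulated += fitness(individual)
        if accumulated >= threshold:
            return individual
    return population[-1]

def crossover(a, b):
    # one-point crossover: cut both parents at the same point and swap the tails
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(individual, pmut):
    # invert each binary digit independently with probability pmut
    return [1 - bit if random.random() < pmut else bit for bit in individual]

def genetic_algorithm(l=10, pop_size=6, pcross=0.6, pmut=0.1, generations=50):
    population = [[random.randint(0, 1) for _ in range(l)] for _ in range(pop_size)]
    for _ in range(generations):                 # termination: a fixed number of generations
        parents = [select(population) for _ in range(pop_size)]
        offspring = []
        for a, b in zip(parents[::2], parents[1::2]):   # mate the parents two by two
            if random.random() < pcross:
                a, b = crossover(a, b)
            offspring += [a, b]
        population = [mutate(ind, pmut) for ind in offspring]
    return max(population, key=fitness)

best = genetic_algorithm()
print(best, fitness(best))

Only the fitness function and the string length would need to change to apply the same skeleton to another problem encoded as a fixed-length binary string.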
1.4 Evolution Strategies
Evolution strategies approach the
optimisation of a real-valued objective
function of real variables in an l-dimensional space. The most direct representation is used for the independent variables of the function (the solution),
namely a vector of real numbers. Besides encoding the independent variables, however, evolution strategies
give the individual additional information on the probability distribution to
be used for its perturbation (mutation
operator). Depending on the version,
this information may range from just
the variance, valid for all independent
variables, to the entire variance-covariance matrix C of a joint normal
distribution; in other words, the size of
an individual can range from l + 1 to
l(l + 1) real numbers.
In its most general form, the mutation operator perturbs an individual in
two steps:
1. It perturbs the C matrix (or, more
exactly, an equivalent matrix of rotation angles from which the C matrix
can be easily calculated) with the same
probability distribution for all individuals;
2. It perturbs the parameter vector representing the solution to the
optimisation problem according to a
joint normal probability distribution
having mean 0 and the perturbed C as
its variance-covariance matrix.
This mutation mechanism allows
the algorithm to evolve the parameters
of its search strategy autonomously
while it is searching for the optimal
solution. The resulting process, called
self-adaptation, is one of the most
powerful and interesting features of
this type of evolutionary algorithm.
Recombination in evolution strategies can take different forms. The
most frequently used are discrete and
intermediate recombination. In discrete recombination, each component
of the offspring individuals is taken
from one of the parents at random,
while in intermediate recombination
each component is obtained by linear
combination of the corresponding
components in the parents with a random parameter.
There are two alternative selection
schemes defining two classes of evolution strategies: (n, m) and (n + m). In
(n, m) strategies, starting from a population of n individuals, m > n offspring
are produced and the n best of them
are selected to form the population of
the next generation. In (n + m) strategies, on the other hand, the n parent
individuals participate in selection as
well. Of those n + m individuals, only
the best n make it to the population of
the next generation. Note that, in both
cases, selection is deterministic and
works "by truncation", i.e., by discarding the worst individuals. In this way,
it is not necessary to define a non-negative fitness, and optimisation can consider the objective function, which can
be maximised or minimised according
to individual cases, directly.
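As a rough sketch, with invented names and parameter values, a minimal evolution strategy of the (n + m) kind, using intermediate recombination and a single self-adapted mutation variance per individual instead of the full variance-covariance matrix, could be written as follows; the "sphere" function stands in for an arbitrary objective to be minimised.

import math
import random

def objective(x):
    # an arbitrary real-valued function of real variables to be minimised
    return sum(v * v for v in x)

def recombine(a, b):
    # intermediate recombination: each component is a random mix of the parents' components
    x = [u + random.random() * (v - u) for u, v in zip(a[0], b[0])]
    sigma = 0.5 * (a[1] + b[1])
    return x, sigma

def mutate(individual, tau):
    x, sigma = individual
    sigma = sigma * math.exp(tau * random.gauss(0, 1))   # self-adapt the strategy parameter first
    x = [v + random.gauss(0, sigma) for v in x]          # then perturb the solution vector
    return x, sigma

def evolution_strategy(l=5, n=10, m=70, generations=200):
    tau = 1.0 / math.sqrt(l)
    parents = [([random.uniform(-5, 5) for _ in range(l)], 1.0) for _ in range(n)]
    for _ in range(generations):
        offspring = [mutate(recombine(*random.sample(parents, 2)), tau) for _ in range(m)]
        # (n + m) selection: deterministic truncation of the union of parents and offspring
        parents = sorted(parents + offspring, key=lambda ind: objective(ind[0]))[:n]
    return parents[0]

best_x, best_sigma = evolution_strategy()
print(objective(best_x), best_sigma)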
1.5 Evolutionary Programming
Evolution, whether natural or artificial, has nothing ‘intelligent’ about it,
in the literal sense of the term: it does
not understand what it is doing, nor is
it supposed to. Intelligence, assuming
such a thing can be defined, is rather
an ‘emergent’ phenomenon of evolution, in the sense that evolution may
manage to produce organisms or solutions endowed with some form of ‘intelligence’.
Evolutionary programming is intended as an approach to artificial intelligence, as an alternative to symbolic
reasoning techniques. Its goal is to
evolve intelligent behaviours represented through finite-state machines
rather than define them a priori. In
evolutionary programming, therefore,
the object problem determines the input and output alphabet of a family of
finite-state machines, and individuals
are appropriate representations of finite-state machines operating on those
alphabets. The natural representation
of a finite-state machine is the matrix
that defines its state-transition and output functions. The definition of the
mutation and recombination operators
is slightly more complex than in the
case of genetic algorithms or evolution
strategies, as it has to take into account
the structure of the objects those operators have to manipulate. The fitness
of an individual can be computed by
testing the finite-state machine it represents on a set of instances of the problem. For example, if we wish to evolve
individuals capable of modelling a historical series, we need to select a number of values from the past of the series and feed them into an individual.
We can then interpret the symbols produced by the individual as predictions
and compare them with the actual data
to measure their accuracy.
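Purely as an illustration of the idea, and not of any particular evolutionary programming system, a finite-state machine can be represented by its state-transition and output table and evolved against a toy prediction task along the following lines; the number of states, the alphabet, the mutation scheme and the truncation selection are all arbitrary choices.

import random

N_STATES = 4
ALPHABET = [0, 1]

def random_machine():
    # the individual: for every (state, input symbol) pair, a next state and an output symbol
    return {(s, a): (random.randrange(N_STATES), random.choice(ALPHABET))
            for s in range(N_STATES) for a in ALPHABET}

def mutate(machine):
    # perturb one randomly chosen entry of the state-transition/output table
    mutant = dict(machine)
    key = random.choice(list(mutant))
    mutant[key] = (random.randrange(N_STATES), random.choice(ALPHABET))
    return mutant

def fitness(machine, series):
    # feed the past of the series into the machine and count correct predictions
    state, correct = 0, 0
    for current, following in zip(series, series[1:]):
        state, prediction = machine[(state, current)]
        correct += int(prediction == following)
    return correct

series = [random.choice(ALPHABET) for _ in range(100)]   # a toy 'historical series'
population = [random_machine() for _ in range(20)]
for _ in range(200):
    offspring = [mutate(m) for m in population]
    population = sorted(population + offspring,
                        key=lambda m: fitness(m, series), reverse=True)[:20]
print(fitness(population[0], series))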
1.6 Genetic Programming
Genetic programming [7] is a relatively new branch of evolutionary algorithms, whose goal is an old dream
of artificial intelligence: automatic programming. In a programming problem,
a solution is a program in a given pro-
gramming language. In genetic programming, therefore, individuals represent computer programs.
Any programming language can be
used, at least in principle. However, the
syntax of most languages would make
the definition of the genetic operators
that preserve it particularly awkward
and burdensome. This is why early efforts in that direction found a sort of
restricted LISP to be an ideal medium of expression. LISP has
the advantage of possessing a particularly simple syntax. Furthermore, it
allows us to manipulate data and programs in a uniform fashion. In practice, approaching a programming problem calls for the definition of a suitable set of variables, constants, and
primitive functions, thus limiting the
search space which would otherwise
be unwieldy. The functions chosen will
be those that a priori are deemed useful for the purpose. It is also customary to try and arrange things so that all
functions accept the results returned by
all others as arguments, as well as all
variables and predefined constants. As
a consequence, the space of all possible programs from which the program
that will solve the problem is to be
found will contain all possible compositions of functions that can be formed
recursively from the set of primitive
functions, variables, and predefined
constants.
For the sake of simplicity, and without loss of generality, a genetic programming individual can be regarded
as the parse tree of the corresponding
program, as illustrated in Figure 1.

Figure 1: A Sample LISP Program with Its Associated Parse Tree.

Figure 2: Schematic Illustration of Recombination in Genetic Programming.

The
recombination of two programs is carried out by randomly selecting a node
in the tree of both parents and by swapping the subtrees rooted in the selected
nodes, as illustrated in Figure 2. The
importance of the mutation operator is
limited in genetic programming, for
recombination alone is capable of creating enough diversity to allow evolution to work.
Computing the fitness of an individual is not so different from testing
a program. A set of test cases must be
given as an integral part of the description of the object problem. A test case
is a pair (input data, desired output).
The test cases are used to test the program as follows: for each case, the program is executed with the relevant input data; the actual output is compared
with the desired output; and the error
is measured. Finally, fitness is obtained
as a function of the accumulated total
error over the whole test set.
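A minimal sketch of this kind of recombination, assuming programs are represented as nested tuples of the form (function, argument, argument) with strings as terminals, might look as follows; the helper names are invented for the example.

import random

# A program is either a terminal (a string) or a tuple: (function name, subtree, subtree).

def nodes(tree, path=()):
    # enumerate the paths identifying every node of the parse tree
    yield path
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from nodes(child, path + (i,))

def subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def graft(tree, path, new_subtree):
    # return a copy of 'tree' with the node at 'path' replaced by 'new_subtree'
    if not path:
        return new_subtree
    children = list(tree)
    children[path[0]] = graft(tree[path[0]], path[1:], new_subtree)
    return tuple(children)

def crossover(a, b):
    # pick a random node in each parent and swap the subtrees rooted there
    pa = random.choice(list(nodes(a)))
    pb = random.choice(list(nodes(b)))
    return graft(a, pa, subtree(b, pb)), graft(b, pb, subtree(a, pa))

parent1 = ('OR', ('NOT', 'd0'), ('AND', 'd0', 'd1'))
parent2 = ('AND', ('OR', 'd1', ('NOT', 'd0')), 'd1')
print(crossover(parent1, parent2))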
An even more recent approach to
genetic programming is what is known
as grammatical evolution [8], whose
basic idea is simple but powerful: given
the grammar of a programming language (in this case completely arbitrary, without limitations deriving from
its particular syntax), consisting of a
number of production rules, a program
in this language is represented by
means of a string of binary digits. This
representation is decoded by starting
from the target non-terminal symbol of
the grammar and reading the binary
digits from left to right – enough digits each time to be able to decide which
of the applicable production rules
should actually be applied. The production rule is then applied and the decoding continues. The string is considered
to be circular, so that the decoding
process never runs out of digits. The
process finishes when no production
rule is applicable and a well-formed
program has therefore been produced,
which can be compiled and executed
in a controlled environment.
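The decoding step can be sketched as follows for a toy grammar; the grammar itself, the fixed codon size and the cap on the number of derivation steps are arbitrary choices made for the example, not part of any specific grammatical evolution system.

import random

# A toy grammar: each non-terminal maps to the list of its production rules.
GRAMMAR = {
    '<expr>': [['<expr>', '<op>', '<expr>'], ['<var>']],
    '<op>':   [['+'], ['*']],
    '<var>':  [['x'], ['y']],
}

def decode(bits, start='<expr>', codon_size=4, max_steps=50):
    symbols, pos = [start], 0
    for _ in range(max_steps):
        nonterminal = next((s for s in symbols if s in GRAMMAR), None)
        if nonterminal is None:
            break                      # no production rule is applicable: the program is complete
        # the string is treated as circular, so decoding never runs out of digits
        codon = [bits[(pos + k) % len(bits)] for k in range(codon_size)]
        pos += codon_size
        choice = int(''.join(map(str, codon)), 2) % len(GRAMMAR[nonterminal])
        rule = GRAMMAR[nonterminal][choice]      # decide which applicable rule to apply
        i = symbols.index(nonterminal)
        symbols = symbols[:i] + rule + symbols[i + 1:]
    return ' '.join(symbols)

genome = [random.randint(0, 1) for _ in range(16)]
print(decode(genome))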
2 ‘Modern’ Evolutionary Algorithms
Since the early eighties, evolutionary algorithms have been successfully
applied to many real-world problems
which are difficult or impossible to
solve with exact methods and are of
great interest to operations researchers.
Evolutionary algorithms have gained
a respectable place in the problem solver’s toolbox, and this last quarter of a
century has witnessed the coming of
age of the various evolutionary techniques and their cross-fertilisation as
well as progressive hybridisation with
other technologies.
If there is one major trend line in this development process, it is the progressive move away from the elegant representations of the early genetic algorithms, based on binary strings and so suggestively close to their biological source of inspiration, towards representations closer to the nature of the object problem, ones which map more directly onto the elements of a solution, thus allowing all available information to be exploited to 'help', as it were, the evolutionary process find its way to the optimum [9].
Adopting representations closer to
the problem also necessarily implies
designing mutation and recombination
operators that manipulate the elements
of a solution in an explicit, informed
manner. On the one hand, those operators end up being less general, but on
the other hand, the advantages in terms
of performance are often remarkable
and compensate for the increased design effort.
Clearly, the demand for efficient
solutions has prompted a shift away
from the coherence of the genetic
model.
2.1 Handling Constraints
Real-world problems, encountered
in industry, business, finance and the
public sector, whose solution often has
a significant economical impact and
which constitute the main target of
operations research, all share a common feature: they have complex and
hard to handle constraints. In early
work on evolutionary computation, the
best way to approach constraint handling was not clear. Over time, evolutionary algorithms began to be appreciated as approximate methods for operations research and they have been
able to take advantage of techniques
and expedients devised within the
framework of operations research for
other approximate methods. Three
main techniques emerged from this
cross-fertilisation, which can be combined if needed, that enable nontrivial
constraints to be taken into account in
an evolutionary algorithm:
· the use of penalty functions;
· the use of decoders or repair algorithms;
· the design of specialised encodings
and genetic operators.
Penalty functions are functions associated with each problem constraint
that measure the degree to which a solution violates its relevant constraint.
As the name suggests, these functions
are combined with the objective function in order to penalise the fitness of
individuals that do not respect certain
constraints. Although the penalty function approach is a very general one,
easy to apply to all kinds of problems,
its use is not without pitfalls. If penalty functions are not accurately
weighted, the algorithm can waste a
great deal of time processing infeasible solutions, or it might even end up
converging to an apparent optimum
which is actually impossible to implement. For instance, in a transportation problem described by n factories and m customers to which a given quantity of a commodity has to be delivered, and where the cost of transporting a unit of the commodity from every factory to any of the customers is given, the solution that minimises the overall cost in an unbeatable way is the one in which absolutely nothing is transported! If the violation of the constraints imposing that the ordered quantity of the commodity be delivered to each customer is not penalised to a sufficient extent, the absurd solution of not delivering anything could come out as better than any solution that actually meets customers' orders. For some problems, called feasibility problems, finding a solution that does not violate any constraint is almost as difficult as finding the optimum solution. For this kind of problem, penalty functions have to be designed with care or else the evolution may never succeed in finding any feasible solution.
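A small sketch of the penalty-function approach in the hypothetical transportation setting above may make the pitfall concrete; all numbers and names are illustrative.

def penalised_cost(shipments, unit_cost, demand, weight):
    # shipments[i][j]: quantity sent from factory i to customer j
    factories, customers = len(unit_cost), len(unit_cost[0])
    cost = sum(shipments[i][j] * unit_cost[i][j]
               for i in range(factories) for j in range(customers))
    # penalty term: how far each customer's order is from being met
    unmet = sum(max(0, demand[j] - sum(row[j] for row in shipments))
                for j in range(customers))
    # if 'weight' is too small, the absurd all-zero solution still wins
    return cost + weight * unmet

unit_cost = [[4, 6], [5, 3]]                    # 2 factories x 2 customers
demand = [10, 8]
nothing = [[0, 0], [0, 0]]
feasible = [[10, 0], [0, 8]]
for weight in (1, 100):
    print(weight,
          penalised_cost(nothing, unit_cost, demand, weight),
          penalised_cost(feasible, unit_cost, demand, weight))

With weight = 1 the empty solution scores 18 against 64 for the feasible one, so the absurd solution looks best; with weight = 100 the ranking is reversed.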
Decoders are algorithms based on
a parameterised heuristic, which aim
to construct an optimal solution from
scratch by making a number of choices.
When such an algorithm is available,
the idea is to encode the parameters of
the heuristics into the individuals processed by the evolutionary algorithms,
rather than the solution directly, and to
use the decoder to reconstruct the corresponding solution from the parameter values. We have thus what we
might call an indirect representation of
solutions.
Repair algorithms are operators
that, based on some heuristics, take an
infeasible solution and ‘repair’ it by
enforcing the satisfaction of one violated constraint, then of another, and
so on, until they obtain a feasible solution. When applied to the outcome of
genetic operators of mutation and recombination, repair algorithms can
ensure that the evolutionary algorithm
is at all times only processing feasible
solutions. Nevertheless, the applicability of this technique is limited, since
for many problems the computational
complexity of the repair algorithm far
outweighs any advantages to be gained
from its use.
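As a toy example of a repair algorithm, and not the method of any particular system, a knapsack-style selection that exceeds a capacity constraint can be repaired by greedily dropping items until the constraint is satisfied:

def repair(selection, weights, capacity):
    # selection: 0/1 flags saying which items are packed; enforce the capacity constraint
    repaired = list(selection)
    while sum(w for w, chosen in zip(weights, repaired) if chosen) > capacity:
        # drop the heaviest item still selected and check the constraint again
        heaviest = max((i for i, chosen in enumerate(repaired) if chosen),
                       key=lambda i: weights[i])
        repaired[heaviest] = 0
    return repaired

print(repair([1, 1, 1, 1], weights=[5, 9, 3, 7], capacity=12))   # -> [1, 0, 1, 0]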
Designing specialised encodings
and genetic operators would be the
ideal technique, but also the most complicated to apply in all cases. The underlying idea is to try and design a solution representation that, by its construction, is capable of encoding all and
only feasible solutions, and to design
specific mutation and recombination
operators alongside it that preserve the
feasibility of the solutions they are applied to. Unsurprisingly, as the complexity and number of constraints increases, this exercise soon becomes
formidable and eventually impossible.
However, when possible, this is the
optimal way to go, for it guarantees the
evolutionary algorithm processes feasible solutions only and therefore reduces the search space to the absolute
minimum.
2.2 Combinations with Other
Soft-Computing Techniques
Evolutionary algorithms, together
with fuzzy logic and neural networks,
are part of what we might call soft computing, as opposed to traditional or
hard computing, which is based on criteria like precision, determinism, and
the limitation of complexity. Soft computing differs from hard computing in
that it is tolerant of imprecision, uncertainty, and partial truth. Its guiding
principle is to exploit that tolerance to
obtain tractability, robustness, and
lower solution costs.
Soft computing is not just a mixture of its ingredients, but a discipline
in which each constituent contributes
a distinct methodology for addressing
problems in its domain, in a complementary rather than competitive way
[10]. Thus evolutionary algorithms can
be employed not only to design and
optimise fuzzy systems, such as fuzzy
rule bases or fuzzy decision trees, but
also to improve the learning characteristics of neural networks, or even determine their optimal topology. Fuzzy
logic can also be used to control the
evolutionary process by acting dynamically on the algorithm parameters,
to speed up convergence to the global
optimum and escape from local optima,
and to fuzzify, as it were, some elements of the algorithm, such as the fitness of individuals or their encoding.
Meanwhile neural networks can help
an evolutionary algorithm obtain an
approximate estimate of the fitness of
individuals for problems where fitness
calculation requires computationally
heavy simulations, thus reducing CPU
time and improving overall performance.
The combination of evolutionary
algorithms with other soft computing
techniques is a fascinating research
field and one of the most promising of
this group of computing techniques.
3 Applications
Evolutionary algorithms have been
successfully applied to a large number
of domains. For purely illustrative purposes, and while this is not intended to
be a meaningful classification, we
could divide the field of application of
these techniques into five broad domains:
Planning, including all problems
that require choosing the most economical and best performing way to
use a finite set of resources. Among the
problems in this domain are vehicle
routing, transport problems, robot trajectory planning, production scheduling in an industrial plant, timetabling,
determining the optimal load of a transport, etc.
Design, including all those problems that require determining an optimal layout of elements (electronic or mechanical components, architectural elements, etc.) with the aim of meeting
a set of functional, aesthetic, and robustness requirements. Among the
problems in this domain are electronic
circuit design, engineering structure
design, information system design, etc.
Simulation and identification,
which requires determining how a
given design or model of a system will
behave. In some cases this needs to be
done because we are not sure about
how the system behaves, while in others its behaviour is known but the accuracy of a model has to be assessed.
Systems under scrutiny may be chemical (determining the 3D structure of a
protein, the equilibrium of a chemical
reaction), economical (simulating the
dynamics of competition in a market
economy), medical, etc.
Control, including all problems that
require a control strategy to be established for a given system;
Classification, modelling and machine learning, whereby a model of the
underlying phenomenon needs to be
built based on a set of observations.
Depending on the circumstances, such
a model may consist of simply determining which of a number of classes
an observation belongs to, or building
(or learning) a more or less complex
model, often used for prediction purposes. Among the problems in this domain is data mining, which consists of
discovering regularities in huge
amounts of data that are difficult to spot
"with the naked eye".
Of course the boundaries between
these five application domains are not
clearly defined and the domains themselves may in some cases overlap to
some extent. However, it is clear that
together they make up a set of problems of great economic importance and
enormous complexity.
In the following sections we will
try to give an idea of what it means to
apply evolutionary algorithms to problems of practical importance, by describing three sample applications in
domains that differ greatly from one
another, namely school timetabling,
electronic circuit design, and behavioural customer modelling.
3.1 School Timetabling
The timetable problem consists of
planning a number of meetings (e.g.,
exams, lessons, matches) involving a
group of people (e.g., students, teachers, players) for a given period and requiring given resources (e.g., rooms,
laboratories, sports facilities) according to their availability and respecting
some other constraints. This problem
is known to be NP-complete: that is the main reason why it cannot be approached in a satisfactory way (from
the viewpoint of performance) with
exact algorithms, and for a long time
it has been a testbed for alternative
techniques, such as evolutionary algorithms. The problem of designing timetables, in particular for Italian high
schools, many of which are distributed
over several buildings, is further complicated by the presence of very strict
constraints, which makes it very much
a feasibility problem.
An instance of this problem consists of the following entities and their relations:
· rooms, defined by their type, capacity, and location;
· subjects, identified by their required room type;
· teachers, characterised by the subjects they teach and their availability;
· classes, i.e., groups of students following the same curriculum, assigned to a given location, with a timetable during which they have to be at school;
· lessons, meaning the relation <t, s, c, l>, where t is a teacher, s is a subject, c is a class and l is its duration expressed in periods (for example, hours); in some cases, more than one teacher and more than one class can participate in a lesson, in which case we speak of grouping.
This problem involves a great many
constraints, both hard and soft, too
many for us to go into now in this article. Fortunately, anybody who has
gone to a high school in Europe should
at least have some idea of what those
constraints might be.
This problem has been approached
by means of an evolutionary algorithm,
which is the heart of a commercial
product, EvoSchool [11]. The algorithm adopts a ‘direct’ solution representation, which is a vector whose
components correspond to the lessons
that have to be scheduled, while the
(integer) value of a component indicates the period in which the corresponding lesson is to begin. The function that associates a fitness to each
timetable, one of the critical points of
the algorithm, is in practice a combination of penalty functions with the
form

f = Σi ai·hi + g · Σj bj·sj

where hi is the penalty associated with the violation of the ith hard constraint, sj is the penalty associated with the violation of the jth soft constraint, and parameters ai and bj are appropriate weightings associated with each
constraint. Finally, g is an indicator
whose value is 1 when all hard constraints are satisfied and zero otherwise. In effect, this means that soft
constraints are taken into consideration
only after all hard constraints have
been satisfied.
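Assuming the reading of the penalty combination given above, and with invented names, the computation could be sketched as follows, where lower values are better and the indicator g switches the soft-constraint penalties on only once the timetable is feasible.

def timetable_penalty(hard_violations, soft_violations, a, b):
    # hard_violations[i] = hi, soft_violations[j] = sj; a and b are the weightings
    hard = sum(ai * hi for ai, hi in zip(a, hard_violations))
    # g = 1 only when every hard constraint is satisfied, so soft constraints
    # come into play only once the timetable is feasible
    g = 1 if all(hi == 0 for hi in hard_violations) else 0
    soft = sum(bj * sj for bj, sj in zip(b, soft_violations))
    return hard + g * soft            # lower is better

print(timetable_penalty([0, 2], [1, 3], a=[10, 10], b=[1, 1]))   # infeasible timetable: 20
print(timetable_penalty([0, 0], [1, 3], a=[10, 10], b=[1, 1]))   # feasible timetable: 4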
All other ingredients of the evolutionary algorithm are fairly standard,
with the exception of the presence of
two mutually exclusive perturbation
operators, called by the mutation operator, each with its own probability:
· intelligent mutation;
· improvement.
Intelligent mutation, while preserving its random nature, aims to perform changes that do not decrease the fitness of the timetable to which it is applied. In particular, if the operator affects the i-th lesson, it propagates its action to all the other lessons involving the same class, teacher, or room. The 'action range' of this operator is chosen at random, according to a given probability distribution. In practice, the effect of this operator is to randomly move a number of interconnected lessons in such a way as to decrease the number of constraint violations.
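A rough sketch of the idea, reusing the hypothetical Lesson objects sketched earlier and a deliberately simplistic propagation rule (the real operator is more refined and also biases moves so as not to worsen fitness):

    import random

    def intelligent_mutation(timetable, lessons, n_periods, rng=random):
        """Move one randomly chosen lesson and propagate the move, with some
        probability (the 'action range'), to lessons that share a class or a
        teacher with it."""
        i = rng.randrange(len(lessons))
        related = [j for j, other in enumerate(lessons)
                   if j != i and (set(other.classes) & set(lessons[i].classes)
                                  or set(other.teachers) & set(lessons[i].teachers))]
        mutant = list(timetable)
        for j in [i] + related:
            if j == i or rng.random() < 0.3:      # illustrative action-range probability
                mutant[j] = rng.randrange(n_periods)
        return mutant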
Improvement, in contrast, restructures an individual to a major extent.
Restructuring commences by randomly
selecting a lesson and concentrates on
the partial timetables for the relevant
class, teacher, or room. It compacts the
existing lessons to free up enough
space to arrange the selected lesson
without conflicts.
A precisely balanced interaction between these two operators is the secret behind the efficiency of this evolutionary algorithm, which has proven capable of generating high-quality timetables for schools with thousands of lessons to schedule across buildings scattered over several sites. A typical run takes a few hours on a modestly powered PC of the kind found in high schools.
3.2 Digital Electronic Circuit
Design
One of the problems that has received considerable attention from the
international evolutionary computation
community is the design of finite impulse response digital filters. This interest is due to their presence in a large
number of electronic devices that form
part of many consumer products, such
as cellular telephones, network devices, etc.
The main criterion of traditional
electronic circuit design methodologies
is minimising the number of transistors used and, consequently, production costs. However, another very significant criterion is power absorption,
which is a function of the number of
logic transitions affecting the nodes of
a circuit. The design of minimum
power absorption digital filters has
been successfully approached by
means of an evolutionary algorithm
[12].
A digital filter can be represented
as a composition of a very small
number of elementary operations, like
the primitives listed in Table 3. Each
elementary operation is encoded by
means of its own code (one character)
and two integers, which represent the
relative offset (calculated backwards
from the current position) of the two
operands. When all offsets are positive,
the circuit does not contain any feedback and the resulting structure is that
of a finite impulse response filter. For
example, the individual
(I 0 2) (D 1 3) (L 2 2) (A 2 1) (D 1 0) (S 1 5)
corresponds to the schematic diagram in Figure 3.
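A parser for this representation is straightforward. The sketch below (illustrative only; the primitive codes are those of Table 3) reads such an individual and rejects encodings whose negative offsets would introduce feedback:

    import re

    PRIMITIVES = {"I", "D", "L", "R", "A", "S", "C"}   # codes from Table 3

    def parse_individual(text):
        """Parse a genome such as "(I 0 2) (D 1 3) ..." into (code, n, m) triples,
        checking that the codes are known and that all offsets are non-negative,
        i.e. that the circuit is feedback-free and therefore a FIR structure."""
        genes = []
        for code, n, m in re.findall(r"\(\s*(\w)\s+(-?\d+)\s+(-?\d+)\s*\)", text):
            if code not in PRIMITIVES:
                raise ValueError("unknown primitive: " + code)
            n, m = int(n), int(m)
            if n < 0 or m < 0:
                raise ValueError("negative offset: the circuit would contain feedback")
            genes.append((code, n, m))
        return genes

    print(parse_individual("(I 0 2) (D 1 3) (L 2 2) (A 2 1) (D 1 0) (S 1 5)"))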
The fitness function has two stages.
In the first stage, it penalises violations
of the filter frequency response specifications, represented by means of a
‘mask’ in the graph of frequency response. In the second stage, which is
activated when the frequency response
is within the mask, fitness is inversely
proportional to the circuit activity,
which in turn is directly proportional
to power absorption.
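In code, the two-stage evaluation could look like the following sketch, where mask_violation and activity are hypothetical helper functions standing in for the real frequency-response check and switching-activity estimate:

    def filter_fitness(individual, mask_violation, activity):
        """Stage 1: penalise any violation of the frequency-response mask.
        Stage 2 (once the response fits the mask): reward low switching activity,
        which is taken as a proxy for power absorption.  Higher values are better."""
        violation = mask_violation(individual)
        if violation > 0:
            return -violation                       # still outside the mask
        return 1.0 / (1.0 + activity(individual))   # inside the mask: minimise activity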
The evolutionary algorithm which
solves this problem requires a great
deal of computing power. For this reason, it has been implemented as a distributed system, running on a cluster
of computers according to an island
model, whereby the population is divided into a number of islands, residing on distinct machines, which evolve
independently, except that, every now
and then, they exchange ‘migrant’ individuals, which allow genetic mate-
rial to circulate while at the same time
keeping the required communication
bandwidth as small as we wish.
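The migration step of such an island model can be sketched as follows (the ring topology and random replacement are assumptions made here for brevity; the actual system runs each island on a separate cluster node and exchanges migrants over the network):

    import random

    def migrate(populations, n_migrants=2, rng=random):
        """Ring migration between islands: each island sends copies of a few
        randomly chosen individuals to the next island, where they replace
        randomly chosen residents."""
        batches = [rng.sample(pop, n_migrants) for pop in populations]
        for i, migrants in enumerate(batches):
            destination = populations[(i + 1) % len(populations)]
            for migrant in migrants:
                destination[rng.randrange(len(destination))] = migrant
        return populations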
A surprising result of this evolutionary approach to electronic circuit design has been that the digital filters discovered by evolution not only have a much lower power absorption than the corresponding filters obtained using traditional design techniques, as was intended, but also use 40% to 60% fewer logic elements, with corresponding benefits in area and speed. In other words, the decrease in consumption has not been achieved at the expense of production cost or speed. On the contrary, it has brought about an overall increase in efficiency in comparison with traditional design methods.
3.3 Data Mining
A critical success factor for any
business today is its ability to use information (and knowledge that can be
extracted from information) effectively. This strategic use of data can
result in opportunities presented by
discovering hidden, previously undetected, and frequently extremely valuable facts about consumers, retailers,
and suppliers, and business trends in
general. Knowing this information, an
organisation can formulate effective
business, marketing, and sales strategies; precisely target promotional activity; discover and penetrate new markets; and successfully compete in the
marketplace from a position of informed strength. The task of sifting
information with the aim of obtaining
such a competitive advantage is known
as data mining [13]. From a technical
point of view, data mining can be defined as the search for correlations,
trends, and patterns that are difficult
to perceive "ith the naked eye" by digging into large amounts of data stored
in warehouses and large databases,
using statistical, artificial intelligence,
machine learning, and soft computing
techniques. Many large companies and
organisations, such as banks, insurance
companies, large retailers, etc., have a
huge amount of information about their
customers’ behaviour. The possibility
of exploiting such information to infer
behaviour models of their current and
prospective customers with regard to
specific products or classes of products
is a very attractive proposition for organisations. If the models thus obtained
are accurate, intelligible, and informative, they can later be used for decision making and to improve the focus
of marketing actions.
For the last five years the author
has participated in the design, tuning,
and validation of a powerful data mining engine, developed by Genetica
S.r.l. and Nomos Sistema S.p.A. (now
an Accenture company) in collaboration with the University of Milan, as
part of two Eureka projects funded by
the Italian Ministry of Education and
University.
The engine is based on a genetic
algorithm for the synthesis of predictive models of customer behaviour,
expressed by means of sets of fuzzy
IF-THEN rules. This approach is a
clear example of the advantages that
can be achieved by combining evolutionary algorithms and fuzzy logic.
The approach assumes that a data set is available: that is, an arbitrarily large set of records representing observations or recordings of past customer behaviour. The field of applicability could be even wider: the records could be observations of some phenomenon not necessarily related to economics or business, such as measurements of free electrons in the ionosphere [14].
A record consists of m attributes,
i.e., values of variables describing the
customer. Among these attributes, we
assume that there is an attribute measuring the aspect of customer behaviour
we are interested in modelling. Without loss of generality, we can assume
there is just one attribute of this kind
— if we were interested in modelling
more than one aspect of behaviour, we
could develop distinct models for each
aspect. We could call this attribute ‘predictive’, as it is used to predict a customer’s behaviour. Within this conceptual framework, a model is a function
of m – 1 variables which returns the
value of the predictive attribute depending on the value of the other attributes.
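In code, such a model is nothing more than a mapping of the kind sketched below (the attribute names are invented for illustration; in the engine described in this section the mapping is an evolved fuzzy rule base rather than a hand-written rule):

    def behaviour_model(record):
        """Map the m - 1 descriptive attributes of a record to a predicted value
        of the predictive attribute (here, a hypothetical probability of default)."""
        return 0.8 if record["late_payments"] > 2 else 0.1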
The way we choose to represent
this function is critical. Experience
proves that the usefulness and acceptability of a model does not derive from
its accuracy alone.
Accuracy is certainly a necessary
condition, but more important is the
model’s intelligibility for the expert
who will have to evaluate it before authorising its use. A neural network or a
LISP program, to mention just two alternative ‘languages’ that others have
chosen to express their models, may
provide excellent results when it comes to
accuracy. However, organisations will
be reluctant to ‘trust’ the results of the
model unless they can understand and
explain how the results have been obtained.
This is the main reason for using
sets of fuzzy IF-THEN rules as the language for expressing models. Fuzzy
IF-THEN rules are probably the nearest thing to the intuitive way experts
express their knowledge, due to the use
of rules that express relationships between linguistic variables (which take
on linguistic values of the type LOW,
MEDIUM, HIGH). Also, fuzzy rules
have the desirable property of behaving in an interpolative way, i.e., they
do not jump from one conclusion to the
opposite because of a slight change in
the value of a condition, as is the case
with crisp rules.
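For a concrete flavour, a rule of this kind and its evaluation might look as follows (the variables, fuzzy sets, and membership functions are invented for illustration):

    # IF income IS HIGH AND claims_last_year IS LOW THEN churn_risk IS LOW

    def triangular(x, a, b, c):
        """Membership degree of x in a triangular fuzzy set rising from a to its
        peak at b and falling back to zero at c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    income_is_high = triangular(55000, 40000, 80000, 120000)   # 0.375
    claims_are_low = triangular(1, -1, 0, 3)                   # about 0.67
    rule_strength = min(income_is_high, claims_are_low)        # degree to which the rule fires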
The encoding used to represent a
model in the genetic algorithm is quite
complicated, but it closely reflects the
logical structure of a fuzzy rule base.
It allows specific mutation and recombination operators to be designed
which operate in an informed way on
their constituent blocks. In particular,
the recombination operator is designed
in such a way as to preserve the syntactic correctness of the models. A child
model is obtained by combining the
rules of two parent models: every rule
in the child model may be inherited
from either parent with equal probability. Once inherited, a rule takes on all
the definitions of the linguistic values
(fuzzy sets) of the source parent model
that contribute to determining its semantics.
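A sketch of this rule-level recombination, under the assumption (made here purely for illustration) that a model is a dictionary holding its rules and the fuzzy-set definitions they refer to:

    import copy
    import random

    def recombine(parent_a, parent_b, rng=random):
        """Build a child model by inheriting each rule from either parent with equal
        probability; an inherited rule carries along the definitions of the fuzzy sets
        it uses from its source parent, so the child is always syntactically correct.
        Assumes both parents have the same number of rule slots."""
        child = {"rules": [], "fuzzy_sets": {}}
        for rule_a, rule_b in zip(parent_a["rules"], parent_b["rules"]):
            source, rule = (parent_a, rule_a) if rng.random() < 0.5 else (parent_b, rule_b)
            child["rules"].append(copy.deepcopy(rule))
            for set_name in rule["sets_used"]:
                child["fuzzy_sets"][set_name] = copy.deepcopy(source["fuzzy_sets"][set_name])
        return child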
Models are evaluated by applying
them to a portion of the data set. This
yields a fitness value gauging their accuracy. As is customary in machine
learning, the remaining portion of the
data set is used to monitor the generalisation capability of the models and
avoid overfitting, which happens when
a model learns one by one the examples it has seen, instead of capturing
the general rules which can be applied
to cases never seen before.
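The evaluation protocol can be summarised by a sketch of this kind (the split ratio and the scoring function are placeholders, not the engine's actual settings):

    import random

    def evaluate_model(model, records, score, train_fraction=0.7, rng=random):
        """Fitness is the score on the training portion of the data set; the score on
        the held-out portion is only monitored, to detect overfitting (training score
        improving while the held-out score degrades)."""
        shuffled = list(records)
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_fraction)
        training, holdout = shuffled[:cut], shuffled[cut:]
        return score(model, training), score(model, holdout)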
The engine based on this approach
has been successfully applied to credit
scoring in the banking environment, to
estimating customer lifetime value in
the insurance world [15], and to the
collection of consumer credit receivables.
4 Conclusions
With this short survey of evolutionary algorithms we have tried to provide a complete, if not exhaustive (for obvious reasons of space), overview of the branches into which they are traditionally divided (genetic algorithms, evolution strategies, evolutionary programming, and genetic programming). We have gone on to discuss the most significant issues in the practical application of evolutionary computing to problems of industrial and economic importance, such as solution representation and constraint handling, areas in which research has made substantial progress in recent years. Finally, we have completed the picture with a more in-depth, yet concise, illustration of three sample applications to "real-world" problems, chosen from domains as different from one another as possible, with the idea of providing three complementary views of the critical issues that can be encountered when implementing a software system that works. Readers should appreciate the versatility and the enormous potential of these techniques, which are still coming of age almost forty years after their introduction. Unfortunately, this survey necessarily lacks an illustration of the theoretical foundations of evolutionary computing, which include the schema theorem (with its so-called building-block hypothesis) and convergence theory. These topics have been omitted on purpose, since they would have required a level of formality unsuited to a survey.
Evolutionary Algorithms on The Internet
Below are a few selected websites where the reader can find introductory or advanced information about evolutionary algorithms:
· <http://www.isgec.org/>: the portal of the International Society for Genetic and Evolutionary Computation;
· <http://evonet.lri.fr/>: the portal of the European network of excellence on evolutionary algorithms;
· <http://www.aic.nrl.navy.mil/galist/>: the GA Archives, originally the "GA-List" mailing list archives, now called the "EC Digest"; it contains up-to-date information on major events in the field plus links to other related web pages;
· <http://www.fmi.uni-stuttgart.de/fk/evolalg/index.html>: the EC Repository, maintained at Stuttgart University.
Interested readers can fill this gap by referring to
the bibliography below. Another aspect
that has been overlooked because it is
not really an ‘application’, although it
is of great scientific interest, is the impact that evolutionary computation has
had on the study of evolution itself and
of complex systems in general (for an
example, see the work by Axelrod on
spontaneous evolution of co-operative
behaviours in a world of selfish agents
[18]).
Readers wishing to look into the
field of evolutionary computation are
referred to some excellent introductory
books [6][9][17][19] or more in-depth
treatises [20][21], or can browse the
Internet sites mentioned in the box
"Evolutionary Algorithms on the
Internet".
References
[1] George E. P. Box, N. R. Draper. Evolutionary Operation: A Statistical Method for Process Improvement. John Wiley & Sons, 1969.
[2] Hans J. Bremermann. "Optimization
through Evolution and Recombination". In M. C. Yovits, G. T. Jacobi and
G. D. Goldstein (editors), Self-Organizing Systems 1962, Spartan Books,
Washington D. C., 1962.
[3] Lawrence J. Fogel, A. J. Owens, M. J.
Walsh. Artificial Intelligence through
Simulated Evolution. John Wiley &
Sons, New York, 1966.
[4] John H. Holland. Adaptation in Natural and Artificial Systems. University
of Michigan Press, Ann Arbor, 1975.
[5] Ingo Rechenberg. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart, 1973.
[6] David E. Goldberg. Genetic Algorithms
in Search, Optimization, and Machine
Learning. Addison-Wesley, 1989.
[7] John R. Koza. Genetic Programming.
MIT Press, Cambridge, Massachusetts, 1992.
[8] Michael O’Neill, Conor Ryan. Grammatical Evolution. Evolutionary automatic programming in an arbitrary
language. Kluwer, 2003.
[9] Zbigniew Michalewicz. Genetic Algorithms + Data Structures = Evolution
Programs, 3rd Edition. Springer, Berlin, 1996.
[10] Andrea G. B. Tettamanzi, Marco
Tomassini. Soft Computing. Integrating evolutionary, neural, and fuzzy
systems. Springer, Berlin, 2001.
[11] Calogero Di Stefano, Andrea G. B. Tettamanzi. "An Evolutionary Algorithm for Solving the School Time-Tabling Problem". In E. Boers et al., Applications of Evolutionary Computing. EvoWorkshops 2001, Springer, 2001. Pages 452–462.
[12] Massimiliano Erba, Roberto Rossi,
Valentino Liberali, Andrea G. B.
Tettamanzi. "Digital Filter Design
Through Simulated Evolution". Proceedings of ECCTD’01 - European
Conference on Circuit Theory and Design, August 28-31, 2001, Espoo, Finland.
[13] Alex Berson, Stephen J. Smith. Data
Warehousing, Data Mining & OLAP,
McGraw Hill, New York, 1997.
[14] Mauro Beretta, Andrea G. B.
Tettamanzi. "Learning Fuzzy Classifiers with Evolutionary Algorithms".
In A. Bonarini, F. Masulli, G. Pasi (editors), Advances in Soft Computing,
Physica-Verlag, Heidelberg, 2003.
Pages 1–10.
[15] Andrea G. B. Tettamanzi et al. "Learning Environment for Life-Time Value
Calculation of Customers in Insurance
Domain". In K. Deb et al. (editors),
Proceedings of the Genetic and Evolutionary Computation Congress
(GECCO 2004), Seattle, June 26–30,
2004. Pages II-1251–1262.
[16] Günter Rudolph. Finite Markov Chain
Results in Evolutionary Computation:
A Tour d’Horizon. Fundamenta
Informaticae, vol. 35, 1998. Pages
67–89.
[17] Melanie Mitchell. An Introduction to
Genetic Algorithms. Bradford, 1996.
[18] Robert Axelrod. The Evolution of Cooperation. Basic Books, 1984.
[19] David B. Fogel. Evolutionary Computation: Toward a new philosophy of
machine intelligence, 2nd Edition.
Wiley-IEEE Press, 1999.
[20] Thomas Bäck. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press, 1996.
[21] Thomas Bäck, David B. Fogel,
Zbigniew Michalewicz (editors). Evolutionary Computation (2 volumes).
IoP, 2000.
