SESSION: Keynote talks

Session details: Keynote talks
doi: 10.1145/3245446

Development of software engineering: co-operative efforts from academia, government and industry
Fuqing Yang, Hong Mei
Pages: 2-11
doi: 10.1145/1134285.1134287

In the past 40 years, software engineering has emerged as an important sub-field of computer science. The quality and productivity of software have been improved, and the cost and risk of software development have been decreased, due to the contributions made in this sub-field. The software engineering community needs to invest much more effort to cope with the drastically increasing demands on information technology as well as the extremely open and dynamic nature of the Internet. The history of software engineering is reviewed with emphasis on the driving forces of software and the milestones of software engineering development. The history of software engineering in China is reviewed with emphasis on the relationship between software engineering and the software industry. Based on the above reviews, we argue that software engineering should become an independent discipline alongside computer science, and that co-operative efforts from academia, governments and industry are needed for the harmonious development of software engineering. Some results are presented based on China's experience of developing software engineering under this model.

A view of 20th and 21st century software engineering
Barry Boehm
Pages: 12-29
doi: 10.1145/1134285.1134288

George Santayana's statement, "Those who cannot remember the past are condemned to repeat it," is only half true. The past also includes successful histories. If you haven't been made aware of them, you're often condemned not to repeat their successes. In a rapidly expanding field such as software engineering, this happens a lot. Extensive studies of many software projects such as the Standish Reports offer convincing evidence that many projects fail to repeat past successes. This paper tries to identify at least some of the major past software experiences that were well worth repeating, and some that were not. It also tries to identify underlying phenomena influencing the evolution of software engineering practices that have at least helped the author appreciate how our field has gotten to where it has been and where it is. A counterpart Santayana-like statement about the past and future might say, "In an era of rapid change, those who repeat the past are condemned to a bleak future." (Think about the dinosaurs, and think carefully about software engineering maturity models that emphasize repeatability.) This paper also tries to identify some of the major sources of change that will affect software engineering practices in the next couple of decades, and identifies some strategies for assessing and adapting to these sources of change. It also makes some first steps towards distinguishing relatively timeless software engineering principles that are risky not to repeat, and conditions of change under which aging practices will become increasingly risky to repeat.

Optimization of software development
Reinhold Achatz
Pages: 30-30
doi: 10.1145/1134285.1134289

For many companies, including leaders in software development, software is becoming an increasingly critical competitive factor. For example, at Siemens, sixty percent of our business is strongly influenced by software and approximately fifty percent of all Siemens patents are software related. With more than 30,000 software developers worldwide and an expenditure of more than 3 billion Euros, Siemens is playing in the 'Champions League' of the world's leading software companies. But this fact is not well known because most of the software developed at Siemens is embedded. It's embedded in the products and solutions sold to customers, such as medical devices, automation controls, trains, automotive components, and even power plants. Yet a major part of the functionality of these products and solutions is defined by software. Because of the high cost of developing software, and because of its extraordinarily high business impact, optimizing the software development process has become a top priority at Siemens, pursued through three levers:
- Simplifying, standardizing and stabilizing development processes and measuring progress according to the CMMI of the Software Engineering Institute,
- Making better use of existing synergy potentials, for example by using cross-divisional software platforms and architectures, and
- Reducing costs by making structural changes, such as offshoring software development to low-wage countries and using the skills of different cultures, especially in rapidly-developing Asian markets.
The three levers will be described in detail and examples will be provided. The presentation will emphasize the early phases of software development, particularly with reference to requirements engineering and the importance of functional and non-functional requirements. A simple metrics system to measure success in software development will be presented. The Siemens Software Initiative, a company-wide program specializing in software engineering and management at Siemens, will also be described.

SESSION: Invited talks

Session details: Invited talks
doi: 10.1145/3245447

Empirically driven SE research: state of the art and required maturity
Victor Basili, Sebastian Elbaum
Pages: 32-32
doi: 10.1145/1134285.1134291

Software engineering researchers are increasingly relying on the empirical approach to advance the state of the art. The level of empirical rigor and evidence required to guide software engineering research, however, can vary drastically depending on many factors. In this session we identify some of these factors through a discussion of the state of the art in performing empirical studies in software engineering, and we show how we can utilize the notion of empirical maturity to set and adjust the empirical expectations for software engineering research efforts. Regarding the state of the art in performing empirical studies, we will offer perspectives on two classes of study: those concerned with humans utilizing a technology, e.g., a person applying a methodology, a technique, or a tool, where human skills and the ability to interact with the technology are some of the prime issues, and those concerned with the application of the technology to an artifact, e.g., a technique or tool applied to a design or a program. In the first case, the emphasis is typically on issues like feasibility, usefulness, and then on effectiveness. The technology tends to be less well specified and based more on the experience and skills of the technology applier. In the second case, the emphasis is typically on the efficiency and effectiveness of the technology. The technology tends to be well defined and the assumption is that individual skill and experience play a less important role. We will discuss the set of factors that influence the design, implementation, and validity of these studies. Regarding empirical maturity and its implications for the SE community's expectations, we will provide examples of the large spectrum of studies with different maturity levels that can be performed to successfully support software engineering research. We will then identify and analyze the following aspects that are likely to impact a study's maturity level: technology (well-specified vs. under development), goals of the study (effectiveness vs. feasibility), type of design and analysis (controlled experiment vs. case study, quantitative vs. qualitative), control and specification of threats to validity (internal vs. external threats), dependence on context (in vivo vs. in vitro), relationship to previous empirical work (replicated on-site, replicated off-site, non-replicated, non-replicable), and purposes of the study (exploratory vs. confirmatory). We will lead a discussion on these key aspects that must be considered to assess the empirical maturity of a piece of work in the context of its research area and the empirical maturity of that area.

Challenges in automotive software engineering
Manfred Broy
Pages: 33-42
doi: 10.1145/1134285.1134292

The amount of software in cars grows exponentially. Driving forces of this development are cheaper and more powerful hardware and the demand for innovation through new functions. The rapid increase of software and software-based functionality brings various challenges (see [21], [23], [25], [26]) for the automotive industry: for its organization, key competencies, processes, methods, tools, models, product structures, division of work, logistics, maintenance, and long-term strategies. From a software engineering perspective, the automotive industry is an ideal and fascinating application domain for advanced techniques. Although the automotive industry may adopt general results and solutions from the software engineering body of knowledge gained in other domains, the specific constraints and domain-specific requirements in the automotive industry ask for individual solutions and bring various challenges for automotive software engineering. In cars we find literally all interesting problems and challenging issues of software and systems engineering.

Living assistance systems: an ambient intelligence approach
Jürgen Nehmer, Martin Becker, Arthur Karshmer, Rosemarie Lamm
Pages: 43-50
doi: 10.1145/1134285.1134293

In this paper, we present an integrated system concept for the living assistance domain based on ambient intelligence technology and discuss the resulting challenges for the software engineering discipline. Automated living assistance systems represent a promising approach for the prolongation of an independent and self-conducted life of handicapped and elderly people, thereby enhancing their quality of life and minimizing the need for manual social/medical care. It is demonstrated that living assistance systems must realize flexibility and adaptability at the algorithmic, architectural and human interface level to an extent unknown in present systems. The construction of robust, trustworthy living assistance systems is an extremely challenging task and requires novel approaches for dependable self-adapting software architectures, resource efficiency, and self-adapting multi-modal human-computer interfaces. The resulting consequences and challenges for the discipline of software engineering are outlined in this paper.

SESSION: Research papers: architecture & design I

Session details: Research papers: architecture & design I
doi: 10.1145/3245448

Architectural support for trust models in decentralized applications
Girish Suryanarayana, Mamadou H. Diallo, Justin R. Erenkrantz, Richard N. Taylor
Pages: 52-61
doi: 10.1145/1134285.1134295

Decentralized applications are composed of distributed entities that directly interact with each other and make local autonomous decisions in the absence of a centralized coordinating authority. Such decentralized applications, where entities can join and leave the system at any time, are particularly susceptible to the attacks of malicious entities. Each entity therefore requires protective measures to safeguard itself against such attacks. Trust management solutions serve to provide effective protective measures against such malicious attacks. Trust relationships help an entity model and evaluate its confidence in other entities towards securing itself. Trust management is, thus, both an essential and intrinsic ingredient of decentralized applications. However, research in trust management has not focused on how trust models can be composed into a decentralized architecture. The PACE architectural style, described previously [21], provides structured and detailed guidance on the assimilation of trust models into a decentralized entity's architecture. In this paper, we describe our experiments with incorporating four different reputation-based trust models into a decentralized application using the PACE architectural style. Our observations lead us to conclude that PACE not only provides an effective and easy way to integrate trust management into decentralized applications, but also facilitates reuse while supporting different types of trust models. Additionally, PACE serves as a suitable platform to aid the evaluation and comparison of trust models in a fixed setting, providing a way to choose an appropriate model for that setting.

Efficient exploration of service-oriented architectures using aspects
Ingolf H. Krüger, Reena Mathew, Michael Meisinger
Pages: 62-71
doi: 10.1145/1134285.1134296

An important step in the development of large-scale distributed, reactive systems is the design of architectures that effectively support the systems' purposes. Early prototypes help to decide upon the most effective architecture for a given situation. Questions to answer include the boundaries of components, communication topologies, and replication. It is desirable to evaluate and compare architectures for functionality and quality attributes before implementing or changing the whole system. Often, however, the effort required is prohibitive. In this paper we present an approach to efficiently create prototypes for service-oriented architectures using aspect-oriented programming techniques. We explain a procedure for transforming interaction-based software specifications into AspectJ programs. We show how to map the same set of interaction scenarios to different candidate architectures. This significantly reduces the effort required to explore architectural alternatives. We explain and evaluate our approach using the Center TRACON Automation System as a running example.

Symbolic invariant verification for systems with dynamic structural adaptation
Basil Becker, Dirk Beyer, Holger Giese, Florian Klein, Daniela Schilling
Pages: 72-81
doi: 10.1145/1134285.1134297

The next generation of networked mechatronic systems will be characterized by complex coordination and structural adaptation at run-time. Crucial safety properties have to be guaranteed for all potential structural configurations. Testing cannot provide safety guarantees, while current model checking and theorem proving techniques do not scale for such systems. We present a verification technique for arbitrarily large multi-agent systems from the mechatronic domain, featuring complex coordination and structural adaptation. We overcome the limitations of existing techniques by exploiting the local character of structural safety properties. The system state is modeled as a graph, system transitions are modeled as rule applications in a graph transformation system, and safety properties of the system are encoded as inductive invariants (permitting the verification of infinite state systems). We developed a symbolic verification procedure that allows us to perform the computation on an efficient BDD-based graph manipulation engine, and we report performance results for several examples.

SESSION: Research papers: test & analysis I

Improving test suites for efficient fault localization
Benoit Baudry, Franck Fleurey, Yves Le Traon
Pages: 82-91
doi: 10.1145/1134285.1134299

The need for testing-for-diagnosis strategies has been identified for a long time, but the explicit link from testing to diagnosis (fault localization) is rare. Analyzing the type of information needed for efficient fault localization, we identify the attribute (called Dynamic Basic Block) that restricts the accuracy of a diagnosis algorithm. Based on this attribute, a test-for-diagnosis criterion is proposed and validated through rigorous case studies: they show that a test suite can be improved to reach a high level of diagnosis accuracy. Thus, the dilemma between a reduced testing effort (with as few test cases as possible) and diagnosis accuracy (which needs as many test cases as possible to get more information) is partly solved by selecting test cases that are dedicated to diagnosis.

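A Dynamic Basic Block, as used above, groups together the statements that are executed by exactly the same subset of test cases; a diagnosis algorithm cannot distinguish statements inside one such block, so smaller blocks mean sharper localization. The following sketch is an illustrative reconstruction of that grouping idea under an assumed boolean coverage matrix; it is not code from the paper.

```python
# Group statements into "dynamic basic blocks": statements covered by
# exactly the same set of test cases end up in the same block.
from collections import defaultdict

def dynamic_basic_blocks(coverage):
    """coverage[test][stmt] is True if the test executes the statement."""
    tests = sorted(coverage)
    num_statements = len(coverage[tests[0]])
    groups = defaultdict(list)
    for stmt in range(num_statements):
        signature = tuple(coverage[t][stmt] for t in tests)
        groups[signature].append(stmt)
    return list(groups.values())

# Toy usage: three tests over five statements.
coverage = {
    "t1": [True, True, False, True, True],
    "t2": [True, False, False, True, True],
    "t3": [False, False, True, True, True],
}
print(dynamic_basic_blocks(coverage))   # [[0], [1], [2], [3, 4]]
```

A test-for-diagnosis criterion in this spirit would add test cases chosen to split large blocks such as [3, 4].
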
Automated, contract-based user testing of commercial-off-the-shelf components
Lionel C. Briand, Yvan Labiche, Michal M. Sówka
Pages: 92-101
doi: 10.1145/1134285.1134300

Commercial-off-the-Shelf (COTS) components provide a means to construct software (component-based) systems in reduced time and cost. In a COTS component software market there exist component vendors (original developers of the component) and component users (developers of the component-based systems). The former provide the component to the user without source code or design documentation, and as a result it is difficult for the latter to adequately test the component when deployed in their system. In this article we propose a framework that clarifies the roles and responsibilities of both parties so that the user can adequately test the component in a deployment environment and the vendor does not need to release proprietary details. Then, based on this framework, we combine and adapt two specification-based testing techniques and describe (and implement) a method for the automated generation of adequate test sets. An evaluation of our approach on a case study demonstrates that it is possible to automatically generate cost-effective test sequences and that these test sequences are effective at detecting complex errors.

An intensional approach to the specification of test cases for database applications
David Willmor, Suzanne M. Embury
Pages: 102-111
doi: 10.1145/1134285.1134301

When testing database applications, in addition to creating in-memory fixtures it is also necessary to create an initial database state that is appropriate for each test case. Current approaches either require exact database states to be specified in advance, or else generate a single initial state (under guidance from the user) that is intended to be suitable for execution of all test cases. The first method allows large test suites to be executed in batch, but requires considerable programmer effort to create the test cases (and to maintain them). The second method requires less programmer effort, but increases the likelihood that test cases will fail in non-fault situations, due to unexpected changes to the content of the database. In this paper, we propose a new approach in which the database states required for testing are specified intensionally, as constrained queries, that can be used to prepare the database for testing automatically. This technique overcomes the limitations of the other approaches, and does not appear to impose significant performance overheads.

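To make the idea of intensional test fixtures concrete, the sketch below shows one way a test case could state the database condition it needs as a query plus a fallback insert, with a small helper that satisfies the condition before the test body runs. It is an illustrative sketch using SQLite, not the authors' tool or notation.

```python
# A test declares its required database state intensionally: a query that
# must return a row, plus a fallback insert used only when it does not.
import sqlite3

def ensure(db, condition_sql, params, fallback_sql, fallback_params=()):
    """Reuse existing data if the condition already holds; otherwise insert."""
    if db.execute(condition_sql, params).fetchone() is None:
        db.execute(fallback_sql, fallback_params)
    db.commit()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance REAL)")

# Precondition of the test case: "some account has a balance over 1000".
ensure(db,
       "SELECT id FROM account WHERE balance > ?", (1000,),
       "INSERT INTO account (balance) VALUES (?)", (1500,))

# The test body can now rely on the declared state.
count = db.execute("SELECT COUNT(*) FROM account WHERE balance > 1000").fetchone()[0]
assert count >= 1
```
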
SESSION: Research papers: software components & reuse

Feature oriented refactoring of legacy applications
Jia Liu, Don Batory, Christian Lengauer
Pages: 112-121
doi: 10.1145/1134285.1134303

Feature oriented refactoring (FOR) is the process of decomposing a program into features, where a feature is an increment in program functionality. We develop a theory of FOR that relates code refactoring to algebraic factoring. Our theory explains relationships between features and their implementing modules, and why features in different programs of a product-line can have different implementations. We describe a tool and refactoring methodology based on our theory, and present a validating case study.

Aspectual mixin layers: aspects and features in concert
Sven Apel, Thomas Leich, Gunter Saake
Pages: 122-131
doi: 10.1145/1134285.1134304

Feature-Oriented Programming (FOP) decomposes complex software into features. Features are the main abstractions in design and implementation. They reflect user requirements and incrementally refine one another. Although features crosscut object-oriented architectures, they fail to express all kinds of crosscutting concerns. This weakness is exactly the strength of aspects, the main abstraction mechanism of Aspect-Oriented Programming (AOP). In this article we contribute a systematic evaluation and comparison of both paradigms, AOP and FOP, with a focus on incremental software development. It reveals that aspects and features are not competing concepts. In fact, AOP has several strengths that can improve FOP in implementing crosscutting features. Symmetrically, the development model of FOP can aid AOP in implementing incremental designs. Consequently, we propose the architectural integration of aspects and features in order to profit from both paradigms. We introduce aspectual mixin layers (AMLs), an implementation approach that realizes this symbiosis. A subsequent evaluation and a case study reveal that AMLs improve the crosscutting modularity of features and that aspects become well integrated into an incremental development style.

Evaluating pattern catalogs: the computer games experience
M. Cutumisu, C. Onuczko, D. Szafron, J. Schaeffer, M. McNaughton, T. Roy, J. Siegel, M. Carbonaro
Pages: 132-141
doi: 10.1145/1134285.1134305

Patterns and pattern catalogs (pattern languages) have been proposed as a mechanism for re-use. Traditionally, patterns have been used to foster design re-use, and generative design patterns have been used to achieve both design and code re-use. In theory, a pattern catalog could be created and used to provide re-usable patterns within a project and across a group of related projects. This idea raises a natural question. How can we measure the effectiveness of a pattern catalog or compare the effectiveness of different pattern catalogs? In this paper, we define four metrics that can be used to measure the effectiveness of pattern catalogs. We illustrate these metrics by applying them to a case study that uses a pattern catalog of generative design patterns to generate scripting code for computer games. The metrics are general enough to assess any pattern catalog, independent of application domain or whether the patterns are generative or descriptive.

SESSION: Research papers: test & analysis II

HDD: hierarchical delta debugging
Ghassan Misherghi, Zhendong Su
Pages: 142-151
doi: 10.1145/1134285.1134307

Inputs causing a program to fail are usually large and often contain information irrelevant to the failure. It thus helps debugging to simplify program inputs. The Delta Debugging algorithm is a general technique applicable to minimizing all failure-inducing inputs for more effective debugging. In this paper, we present HDD, a simple but effective algorithm that significantly speeds up Delta Debugging and increases its output quality on tree structured inputs such as XML. Instead of treating the inputs as one flat atomic list, we apply Delta Debugging to the very structure of the data. In particular, we apply the original Delta Debugging algorithm to each level of a program's input, working from the coarsest to the finest levels. We are thus able to prune the large irrelevant portions of the input early. All the generated input configurations are syntactically valid, reducing the number of inconclusive configurations that need to be tested and accordingly the amount of time spent simplifying. We have implemented HDD and evaluated it on a number of real failure-inducing inputs from the GCC and Mozilla bugzilla databases. Our Hierarchical Delta Debugging algorithm produces simpler outputs and takes orders of magnitude fewer test cases than the original Delta Debugging algorithm. It is able to scale to inputs of considerable size that the original Delta Debugging algorithm cannot process in practice. We argue that HDD is an effective tool for automatic debugging of programs expecting structured inputs.

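The level-by-level idea behind HDD can be illustrated with a short sketch. The fragment below is a simplified, hypothetical reconstruction: where the paper applies the full ddmin algorithm at every tree level, a greedy one-child-at-a-time pruning stands in for it, and the failure check is an assumed callback that serializes the tree and re-runs the failing program.

```python
# Prune a failure-inducing tree level by level, from coarse to fine.
class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children if children is not None else []

def nodes_at_level(root, depth):
    level = [root]
    for _ in range(depth):
        level = [c for n in level for c in n.children]
    return level

def hdd(root, fails):
    """fails(root) re-runs the target program on the serialized tree and
    returns True if the original failure still occurs."""
    depth = 1
    while nodes_at_level(root, depth):
        for parent in nodes_at_level(root, depth - 1):
            for child in list(parent.children):
                remaining = [c for c in parent.children if c is not child]
                saved = parent.children
                parent.children = remaining       # tentatively drop the subtree
                if not fails(root):               # failure vanished: child is needed
                    parent.children = saved
        depth += 1
    return root
```

Because only whole subtrees are removed, every intermediate configuration stays syntactically valid, which is the point the abstract makes about avoiding inconclusive test runs.
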
Managing space for finite-state verification
Jianbin Tan, George S. Avrunin, Lori A. Clarke
Pages: 152-161
doi: 10.1145/1134285.1134308

Finite-state verification (FSV) techniques attempt to prove properties about a model of a system by examining all possible behaviors of that model. This approach suffers from the state-explosion problem, where the size of the model or the analysis costs may be exponentially large with respect to the size of the system. Using symbolic data structures to represent subsets of the state space has been shown to usually be an effective optimization approach for hardware verification. The value for software verification, however, is still unclear. In this paper, we investigate applying two symbolic data structures, Binary Decision Diagrams (BDDs) and Zero-suppressed Binary Decision Diagrams (ZDDs), in two FSV tools, LTSA and FLAVERS. We describe an experiment showing that these two symbolic approaches can improve the performance of both FSV tools and are more efficient than two other algorithms that store the state space explicitly. Moreover, the ZDD-based approach often runs faster and can handle larger systems than the BDD-based approach.

Backwards-compatible array bounds checking for C with very low overhead
Dinakar Dhurjati, Vikram Adve
Pages: 162-171
doi: 10.1145/1134285.1134309

The problem of enforcing correct usage of array and pointer references in C and C++ programs remains unsolved. The approach proposed by Jones and Kelly (extended by Ruwase and Lam) is the only one we know of that does not require significant manual changes to programs, but it has extremely high overheads of 5x-6x and 11x-12x in the two versions. In this paper, we describe a collection of techniques that dramatically reduce the overhead of this approach, by exploiting a fine-grain partitioning of memory called Automatic Pool Allocation. Together, these techniques bring the average overhead of the checks down to only 12% for a set of benchmarks (but 69% for one case). We show that the memory partitioning is key to bringing down this overhead. We also show that our technique successfully detects all buffer overrun violations in a test suite modeling reported violations in some important real-world programs.

SESSION: Research papers: reverse engineering & refactoring

JunGL: a scripting language for refactoring
Mathieu Verbaere, Ran Ettinger, Oege de Moor
Pages: 172-181
doi: 10.1145/1134285.1134311

Refactorings are behaviour-preserving program transformations, typically for improving the structure of existing code. A few of these transformations have been mechanised in interactive development environments. Many more refactorings have been proposed, and it would be desirable for programmers to script their own refactorings. Implementing such source-to-source transformations, however, is quite complex: even the most sophisticated development environments contain significant bugs in their refactoring tools. We present a domain-specific language for refactoring, named JunGL. It manipulates a graph representation of the program: all information about the program, including ASTs for its compilation units, variable binding, control flow and so on, is represented in a uniform graph format. The language is a hybrid of a functional language (in the style of ML) and a logic query language (akin to Datalog). JunGL furthermore has a notion of demand-driven evaluation for constructing computed information in the graph, such as control flow edges. Borrowing from earlier work on the specification of compiler optimisations, JunGL uses so-called 'path queries' to express dataflow properties. We motivate the design of JunGL via a number of non-trivial refactorings, and describe its implementation on the .NET platform.

Inferring templates from spreadsheets
Robin Abraham, Martin Erwig
Pages: 182-191
doi: 10.1145/1134285.1134312

We present a study investigating the performance of a system for automatically inferring spreadsheet templates. These templates allow users to safely edit spreadsheets, that is, certain kinds of errors such as range, reference, and type errors can be provably prevented. Since the inference of templates is inherently ambiguous, such a study is required to demonstrate the effectiveness of any such automatic system. The study results show that the system considered performs significantly better than subjects with intermediate to expert level programming expertise. These results are important because the translation of the huge body of existing spreadsheets into a system based on safety-guaranteeing templates cannot be performed without automatic support. We also carried out post-hoc analyses of the video recordings of the subjects' interactions with the spreadsheets and found that although expert-level subjects needed less time and developed more accurate templates than less experienced subjects, they did not inspect fewer cells in the spreadsheet.

Semantics-based reverse engineering of object-oriented data models
G. Ramalingam, Raghavan Komondoor, John Field, Saurabh Sinha
Pages: 192-201
doi: 10.1145/1134285.1134313

We present an algorithm for reverse engineering object-oriented (OO) data models from programs written in weakly-typed languages like Cobol. These models, similar to UML class diagrams, can facilitate a variety of program maintenance and migration activities. Our algorithm is based on a semantic analysis of the program's code, and we provide a bisimulation-based formalization of what it means for an OO data model to be correct for a program.

SESSION: Research papers: architecture & design II

Modeling behavioral design patterns of concurrent objects
Robert G. Pettit, IV, Hassan Gomaa
Pages: 202-211
doi: 10.1145/1134285.1134315

Object-oriented software development practices are being rapidly adopted within increasingly complex systems, including reactive, real-time and concurrent system applications. While data modeling is performed very well under current object-oriented development practices, behavioral modeling necessary to capture critical information in real-time, reactive, and concurrent systems is often lacking. Addressing this deficiency, we offer an approach for modeling and analyzing concurrent object-oriented software designs through the use of behavioral design patterns, allowing us to map stereotyped UML objects to colored Petri net (CPN) representations in the form of reusable templates. The resulting CPNs are then used to model and analyze behavioral properties of the software architecture, applying the results of the analysis to the original software design.

Modeling aspect mechanisms: a top-down approach
Sergei Kojarski, David H. Lorenz
Pages: 212-221
doi: 10.1145/1134285.1134316

A plethora of aspect mechanisms exist today. All of these diverse mechanisms integrate concerns into artifacts that exhibit crosscutting structure. What we lack and need is a characterization of the design space that these aspect mechanisms inhabit and a model description of their weaving processes. A good design space representation provides a common framework for understanding and evaluating existing mechanisms. A well-understood model of the weaving process can guide the implementor of new aspect mechanisms. It can guide the designer when mechanisms implementing new kinds of weaving are needed. It can also help teach aspect-oriented programming (AOP). In this paper we present and evaluate such a model of the design space for aspect mechanisms and their weaving processes. We model weaving, at an abstract level, as a concern integration process. We derive a weaving process model (WPM) top-down, differentiating a reactive from a nonreactive process. The model provides an in-depth explanation of the key subprocesses used by existing aspect mechanisms.

A Bayesian approach to diagram matching with application to architectural models
David Mandelin, Doug Kimelman, Daniel Yellin
Pages: 222-231
doi: 10.1145/1134285.1134317

IT system architectures, as well as other systems, are often described by formal models or informal diagrams. In practice, there are often a number of versions of a model, e.g. for different views of a system, divergent variants, or a series of revisions. Understanding how versions of a model correspond or differ is crucial, yet little work has been done on automated assistance for matching models and diagrams. We have designed a framework based on Bayesian methods for finding these correspondences automatically. We represent models and diagrams as graphs whose nodes have attributes such as name, type, connections, and containment relations, and we have developed probabilistic models for rating the quality of candidate correspondences based on various features of the nodes in the graphs. Given the probabilistic models, we can find high quality correspondences using search algorithms. Preliminary experiments focusing on architectural models suggest that the technique is promising.

SESSION: Research papers: test & analysis III

Modular checking for buffer overflows in the large
Brian Hackett, Manuvir Das, Daniel Wang, Zhe Yang
Pages: 232-241
doi: 10.1145/1134285.1134319

We describe an ongoing project, the deployment of a modular checker to statically find and prevent every buffer overflow in future versions of a Microsoft product. Lightweight annotations specify requirements for safely using each buffer, and functions are checked individually to ensure they obey these requirements and do not overflow. Our focus is on the incremental deployment of this technology: by layering the annotation language, using aggressive inference techniques, and slicing warnings by checker confidence, teams must pay only part of the cost of annotating a program to achieve part of the benefit, which provides incentive for further annotation. To date over 400,000 annotations have been added to specify buffer usage in the source code for this product, of which over 150,000 were automatically inferred, and over 3,000 potential buffer overflows have been found and fixed.

Discovering faults in idiom-based exception handling
Magiel Bruntink, Arie van Deursen, Tom Tourwé
Pages: 242-251
doi: 10.1145/1134285.1134320

In this paper, we analyse the exception handling mechanism of a state-of-the-art industrial embedded software system. Like many systems implemented in classic programming languages, our subject system uses the popular return-code idiom for dealing with exceptions. Our goal is to evaluate the fault-proneness of this idiom, and we therefore present a characterisation of the idiom, a fault model accompanied by an analysis tool, and empirical data. Our findings show that the idiom is indeed fault prone, but that a simple solution can lead to significant improvements.

Static detection of leaks in polymorphic containers
David L. Heine, Monica S. Lam
Pages: 252-261
doi: 10.1145/1134285.1134321

This paper presents the first practical static analysis tool that can find memory leaks and double deletions of objects held in polymorphic containers. This is especially important since most dynamically allocated objects are stored in containers. The tool is based on the concept of object ownership: every object has one and only one owning pointer. The owning pointer holds the exclusive right and obligation to either delete the object or to transfer the obligation. This paper presents a new type system that allows different instances of a polymorphic container to hold different types of elements, and to independently own or not own their elements. Our tool is sound: it will report all potential memory leaks and multiple deletions of pointers in a program. Our system automatically identifies the container implementation routines in an application. The user provides a short specification on the container structure and ownership constraints for these routines. The system then solves for the ownership constraints flow- and context-sensitively, and reports inconsistencies in ownership constraints as potential memory leaks and double deletions. We applied our tool to a suite of five large open-source and commercial C and C++ applications totaling one million lines of code. The tool successfully identified memory leaks in these programs and found double deletions of objects that could lead to program failures or security vulnerabilities.

SESSION: Research papers: test & analysis IV

Osprey: a practical type system for validating dimensional unit correctness of C programs
Lingxiao Jiang, Zhendong Su
Pages: 262-271
doi: 10.1145/1134285.1134323

Misuse of measurement units is a common source of errors in scientific applications, but standard type systems do not prevent such errors. Dimensional analysis in physics can be used to manually detect such errors in physical equations. It is, however, not feasible to perform such manual analysis for programs computing physical equations because of code complexity. In this paper, we present a type system to automatically detect potential errors involving measurement units. It is constraint-based: we model units as types and the flow of units as constraints. However, standard type checking algorithms are not powerful enough to handle units because of their abelian group nature (e.g., being commutative, multiplicative, and associative). Our system combines techniques such as type inference and Gaussian elimination to overcome this problem. We have implemented Osprey, a prototype of the system for C programs, and evaluated it on various test programs, including computational physics and mechanical engineering applications. Osprey discovered unknown errors in mature code; it is precise, with few false positives; it is also efficient and scales to large programs: we have successfully used it to analyze programs with hundreds of thousands of lines of code.

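Osprey itself is a static, constraint-based type system, but the abelian-group view of units it relies on can be illustrated at run time: a unit is a vector of exponents over base dimensions, multiplication adds the vectors, division subtracts them, and addition is legal only when the vectors agree. The sketch below shows that exponent-vector idea only; the names and the dynamic checking are assumptions for illustration, not the paper's inference algorithm.

```python
# Units as exponent vectors over base dimensions, e.g. m/s^2 -> {"m": 1, "s": -2}.
class UnitError(TypeError):
    pass

class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = dict(unit)

    def _combine(self, other, sign):
        merged = dict(self.unit)
        for dim, exp in other.unit.items():
            merged[dim] = merged.get(dim, 0) + sign * exp
            if merged[dim] == 0:
                del merged[dim]
        return merged

    def __mul__(self, other):
        return Quantity(self.value * other.value, self._combine(other, +1))

    def __truediv__(self, other):
        return Quantity(self.value / other.value, self._combine(other, -1))

    def __add__(self, other):
        if self.unit != other.unit:
            raise UnitError(f"cannot add {self.unit} and {other.unit}")
        return Quantity(self.value + other.value, self.unit)

g = Quantity(9.8, {"m": 1, "s": -2})
d = Quantity(3.0, {"m": 1})
t = Quantity(2.0, {"s": 1})
print((d / (t * t) + g).unit)   # {'m': 1, 's': -2}; (d + t) would raise UnitError
```
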
Locating faults through automated predicate switching
Xiangyu Zhang, Neelam Gupta, Rajiv Gupta
Pages: 272-281
doi: 10.1145/1134285.1134324

Typically, debugging begins when, during a program execution, a point is reached at which an obviously incorrect value is observed. A general and powerful approach to automated debugging can be based upon identifying modifications to the program state that will bring the execution to a successful conclusion. However, searching for arbitrary changes to the program state is difficult due to the extremely large search space. In this paper we demonstrate that by forcibly switching a predicate's outcome at runtime and altering the control flow, the program state can not only be inexpensively modified, but in addition it is often possible to bring the program execution to a successful completion (i.e., the program produces the desired output). By examining the switched predicate, also called the critical predicate, the cause of the bug can then be identified. Since the outcome of a branch can only be either true or false, the number of modified states resulting from predicate switching is far less than those possible through arbitrary state changes. Thus, it is possible to automatically search through modified states to find one that leads to the correct output. We have developed an implementation based upon dynamic instrumentation to perform this search through program re-execution: the program is executed from the beginning and a predicate's outcome is switched to produce the desired change in control flow. To evaluate our approach, we tried our technique on several reported bugs for a number of UNIX utility programs. Our technique was found to be practical (i.e., acceptable in time taken) and effective (i.e., we were able to automatically identify critical predicates). Moreover, we show that bidirectional dynamic slices of critical predicates capture the faulty code.

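The core search loop described above can be sketched in a few lines: re-run an instrumented program, flip the outcome of one dynamic predicate instance per run, and report the first instance whose flip produces the expected output. The program, the bug, and the instrumentation hook below are all hypothetical; the paper performs this with dynamic instrumentation rather than a callback.

```python
def make_switcher(flip_index):
    """Return a predicate hook that negates the flip_index-th dynamic evaluation."""
    count = {"n": 0}
    def predicate(pred_id, outcome):
        count["n"] += 1
        return (not outcome) if count["n"] == flip_index else outcome
    return predicate

def buggy_program(xs, predicate):
    total = 0
    for x in xs:
        # Bug: the intended condition is x > 0, but the code tests x > 1.
        if predicate("P1", x > 1):
            total += x
    return total

def find_critical_predicate(run, expected, max_instances=100):
    for i in range(1, max_instances + 1):
        if run(make_switcher(i)) == expected:
            return i    # the i-th dynamic predicate instance is critical
    return None

xs = [1, 2, 3]
print(find_critical_predicate(lambda p: buggy_program(xs, p), expected=6))  # 1
```
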
Perracotta: mining temporal API rules from imperfect traces
Jinlin Yang, David Evans, Deepali Bhardwaj, Thirumalesh Bhat, Manuvir Das
Pages: 282-291
doi: 10.1145/1134285.1134325

Dynamic inference techniques have been demonstrated to provide useful support for various software engineering tasks including bug finding, test suite evaluation and improvement, and specification generation. To date, however, dynamic inference has only been used effectively on small programs under controlled conditions. In this paper, we identify reasons why scaling dynamic inference techniques has proven difficult, and introduce solutions that enable a dynamic inference technique to scale to large programs and work effectively with the imperfect traces typically available in industrial scenarios. We describe our approximate inference algorithm, present and evaluate heuristics for winnowing the large number of inferred properties to a manageable set of interesting properties, and report on experiments using inferred properties. We evaluate our techniques on JBoss and the Windows kernel. Our tool is able to infer many of the properties checked by the Static Driver Verifier and leads us to discover a previously unknown bug in Windows.

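As a rough illustration of what mining temporal rules from imperfect traces involves, the sketch below scores a simple "a is eventually followed by b" response pattern per trace and keeps pairs whose average score clears a threshold, so that a truncated or noisy trace does not veto an otherwise strong rule. This is a hypothetical simplification; Perracotta's actual templates (e.g., alternation), approximation algorithm, and heuristics differ.

```python
from itertools import permutations

def response_ratio(trace, a, b):
    """Fraction of occurrences of `a` that are followed by a later `b`."""
    satisfied = opens = 0
    for event in trace:
        if event == a:
            opens += 1
        elif event == b and opens:
            satisfied += opens
            opens = 0
    total = trace.count(a)
    return satisfied / total if total else 0.0

def mine_rules(traces, threshold=0.7):
    events = {e for t in traces for e in t}
    rules = []
    for a, b in permutations(events, 2):
        ratios = [response_ratio(t, a, b) for t in traces if a in t]
        if ratios and sum(ratios) / len(ratios) >= threshold:
            rules.append((a, b))
    return rules

traces = [
    ["lock", "read", "unlock", "lock", "write", "unlock"],
    ["lock", "read", "unlock", "lock", "read"],   # imperfect: truncated trace
]
print(("lock", "unlock") in mine_rules(traces))   # True despite the truncation
```
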
SESSION: Research papers: theory & formal methods

Incremental consistency checking for pervasive context
Chang Xu, S. C. Cheung, W. K. Chan
Pages: 292-301
doi: 10.1145/1134285.1134327

Applications in pervasive computing are typically required to interact seamlessly with their changing environments. To provide users with smart computational services, these applications must be aware of incessant context changes in their environments and adjust their behaviors accordingly. As these environments are highly dynamic and noisy, context changes thus acquired could be obsolete, corrupted or inaccurate. This gives rise to the problem of context inconsistency, which must be detected in a timely manner in order to prevent applications from behaving anomalously. In this paper, we propose a formal model of incremental consistency checking for pervasive contexts. Based on this model, we further propose an efficient checking algorithm to detect inconsistent contexts. The performance of the algorithm and its advantages over conventional checking techniques are evaluated experimentally using the Cabot middleware.

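The incremental flavor of the approach, re-evaluating only the consistency constraints that a given context change can affect, can be sketched as below. The constraint language, the data structures, and the example constraint are assumptions for illustration; the paper's formal model and the Cabot middleware are considerably richer.

```python
from collections import defaultdict

class IncrementalChecker:
    def __init__(self):
        self.context = {}
        self.constraints = {}           # name -> (keys, predicate)
        self.index = defaultdict(set)   # context key -> names of constraints reading it

    def add_constraint(self, name, keys, predicate):
        self.constraints[name] = (keys, predicate)
        for key in keys:
            self.index[key].add(name)

    def update(self, key, value):
        """Apply one context change; re-check only the constraints it touches."""
        self.context[key] = value
        violated = []
        for name in self.index[key]:
            keys, predicate = self.constraints[name]
            values = [self.context.get(k) for k in keys]
            if None not in values and not predicate(*values):
                violated.append(name)
        return violated

checker = IncrementalChecker()
checker.add_constraint("same_room", ["rfid_room", "gps_room"], lambda a, b: a == b)
checker.update("rfid_room", "R101")
print(checker.update("gps_room", "R205"))   # ['same_room']: inconsistent readings
```
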
Interacting process classes
Ankit Goel, Sun Meng, Abhik Roychoudhury, P. S. Thiagarajan
Pages: 302-311
doi: 10.1145/1134285.1134328

Many reactive control systems consist of classes of interacting objects where the objects belonging to a class exhibit similar behaviors. Such interacting process classes appear in telecommunication, transportation and avionics domains. In this paper, we propose a modeling and simulation technique for interacting process classes. Our modeling style uses standard notations to capture behavior. In particular, the control flow of a process class is captured by a labeled transition system, unit interactions between process objects are described by Message Sequence Charts and the structural relations are captured via class diagrams. The key feature of our approach is that our execution semantics leads to a symbolic simulation technique. Our simulation strategy is both time and memory efficient and we demonstrate this on well-studied non-trivial examples of reactive systems.

Symbolic model checking of declarative relational models
Felix Sheng-Ho Chang, Daniel Jackson
Pages: 312-320
doi: 10.1145/1134285.1134329

This paper explores the idea of augmenting traditional model checkers with the expressiveness of a declarative, relational language. The goal is to enable programmers to write very intuitive and compact specifications, in order to allow the automatic verification of more complicated software systems. The key idea is that many structural operations (common in object-oriented programs) can be easily described using relations and relational operators, while other operations are best described using the primitive data types and their operations (such as simple arithmetic operations on numbers). By allowing a mixture of both, and by allowing parts of the model to be described declaratively rather than imperatively, the programmer has the freedom to model each part of the system differently, using the most intuitive and simple constructs. We built a BDD-based model checker for the language, and successfully verified a straightforward model of the dependency algorithm in Apache Ant for up to 5 nodes.

SESSION: Research papers: empirical methods & measurement

Estimating LOC for information systems from their conceptual data models
Hee Beng Kuan Tan, Yuan Zhao, Hongyu Zhang
Pages: 321-330
doi: 10.1145/1134285.1134331

Effort and cost estimation is crucial in software management. Estimation of software size plays a key role in the estimation. Line of Code (LOC) is still a commonly used software size measure. Despite the fact that software sizing has been well recognized as an important problem for more than two decades, there are still many problems with existing methods. The conceptual data model is widely used in requirements analysis for information systems, and it is not difficult to construct conceptual data models in the early stage of developing information systems. Many characteristics of an information system are in fact reflected in its conceptual data model. We explore the use of the conceptual data model for estimating LOC. This paper proposes a novel method for estimating LOC for an information system from its conceptual data model through the use of a multiple linear regression model. We have validated the method by collecting samples from both industry and open-source systems.

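The estimation model itself is an ordinary multiple linear regression over counts taken from the conceptual data model. The sketch below fits such a model on made-up numbers purely to show the shape of the computation; the actual predictor variables and coefficients in the paper are derived from the collected industrial and open-source samples.

```python
import numpy as np

# Hypothetical samples: [entity types, relationship types, attributes] -> LOC.
X = np.array([[10, 12, 60],
              [25, 30, 140],
              [18, 20, 95],
              [40, 55, 230],
              [8, 7, 45]], dtype=float)
y = np.array([12000, 31000, 20500, 52000, 9000], dtype=float)

# Fit LOC ~ b0 + b1*entities + b2*relationships + b3*attributes by least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

new_system = np.array([1, 22, 25, 120], dtype=float)   # a new conceptual data model
print("estimated LOC:", round(float(new_system @ coef)))
```
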
Development of a hybrid cost estimation model in an iterative manner
Adam Trendowicz, Jens Heidrich, Jürgen Münch, Yasushi Ishigai, Kenji Yokoyama, Nahomi Kikuchi
Pages: 331-340
doi: 10.1145/1134285.1134332

Cost estimation is a crucial field for software development companies. The acceptance of an estimation technique is highly dependent on estimation accuracy. Often, this accuracy is only determined after an initial application. Possible further steps for improving the underlying estimation model typically do not influence the decision on whether to discard the technique or deploy it. In addition, most estimation techniques do not explicitly support the evolution of the underlying estimation model in an iterative manner. This increases the risk of overlooking some important cost drivers or data inconsistencies. This paper presents an enhanced process for developing a CoBRA® cost estimation model by systematically including iterative analysis and feedback cycles, and its evaluation in a software development unit of Oki Electric Industry Co., Ltd., Japan. During the model improvement cycles, estimation accuracy was improved from an initial 120% down to 14%. In addition, lessons learned with the iterative development approach are described.

On the success of empirical studies in the international conference on software engineering |
| |
Carmen Zannier,
Grigori Melnik,
Frank Maurer
|
|
Pages: 341-350 |
|
doi>10.1145/1134285.1134333 |
|
Full text: PDF
|
|
Critiques of the quantity and quality of empirical evaluations in software engineering have existed for quite some time. However, such critiques are typically not themselves empirically evaluated. This paper fills this gap by empirically analyzing papers published at ICSE, the premier research conference on software engineering. We present quantitative and qualitative results of a quasi-random experiment covering empirical evaluations over the lifetime of the conference. Our quantitative results show that the quantity of empirical evaluation has increased over 29 ICSE proceedings, but there is still room to improve the soundness of empirical evaluations in ICSE proceedings. Our qualitative results point to specific areas of improvement in empirical evaluations.
|
|
|
SESSION: Research papers: software process & workflow |
|
|
|
|
Publishing and composition of atomicity-equivalent services for B2B collaboration |
| |
Chunyang Ye,
S. C. Cheung,
W. K. Chan
|
|
Pages: 351-360 |
|
doi>10.1145/1134285.1134335 |
|
Full text: PDF
|
|
Exception handling resolves inconsistency by backward or forward error recovery methods, or both, in Business-to-Business (B2B) process collaboration. To avoid committing irrevocable tasks that are followed by exceptions, B2B processes that guarantee the atomicity sphere property are attractive. While an atomicity sphere ensures that its outcomes are all or nothing, conflicting local recoveries may still lead to global B2B inconsistencies. Existing (global) analysis techniques, however, require every process to unveil all of its individual tasks. Such an analysis is infeasible when some business parties refuse to disclose their process details for privacy or business reasons. To address this problem, we propose a process algebraic technique to prove, construct, and check atomicity-equivalent public views of B2B processes. By checking atomicity spheres in the composition of these public views, business parties can identify suitable services that respect their individual and overall atomicity requirements. An example based on a real-life multilateral supply chain process is included.
|
|
|
Who should fix this bug? |
| |
John Anvik,
Lyndon Hiew,
Gail C. Murphy
|
|
Pages: 361-370 |
|
doi>10.1145/1134285.1134336 |
|
Full text: PDF
|
|
Open source development projects typically support an open bug repository to which both developers and users can report bugs. The reports that appear in this repository must be triaged to determine if the report is one which requires attention and if it is, which developer will be assigned the responsibility of resolving the report. Large open source developments are burdened by the rate at which new bug reports appear in the bug repository. In this paper, we present a semi-automated approach intended to ease one part of this process, the assignment of reports to a developer. Our approach applies a machine learning algorithm to the open bug repository to learn the kinds of reports each developer resolves. When a new report arrives, the classifier produced by the machine learning technique suggests a small number of developers suitable to resolve the report. With this approach, we have reached precision levels of 57% and 64% on the Eclipse and Firefox development projects respectively. We have also applied our approach to the gcc open source development with less positive results. We describe the conditions under which the approach is applicable and also report on the lessons we learned about applying machine learning to repositories used in open source development.
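A minimal sketch of this kind of triage assistant, assuming a bag-of-words naive Bayes classifier rather than the authors' actual implementation; the example reports, developer names, and feature choices are invented.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical sketch: learn which developer resolved past reports, then
# suggest candidate developers for a new report. All data is invented.
reports = [
    "NullPointerException when opening the preferences dialog",
    "Preferences page loses settings after restart",
    "Crash in layout engine while rendering nested tables",
    "Rendering glitch with floated images in print preview",
    "Garbage collector pause spikes under heavy load",
]
resolvers = ["alice", "alice", "bob", "bob", "carol"]  # who fixed each report

vectorizer = CountVectorizer()
classifier = MultinomialNB().fit(vectorizer.fit_transform(reports), resolvers)

def suggest_developers(new_report, k=2):
    """Return up to k developers ranked by predicted probability."""
    probs = classifier.predict_proba(vectorizer.transform([new_report]))[0]
    ranked = np.argsort(probs)[::-1][:k]
    return [(classifier.classes_[i], round(float(probs[i]), 2)) for i in ranked]

print(suggest_developers("Exception thrown when the preferences dialog opens"))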
|
|
|
Model-based development of dynamically adaptive software |
| |
Ji Zhang,
Betty H. C. Cheng
|
|
Pages: 371-380 |
|
doi>10.1145/1134285.1134337 |
|
Full text: PDF
|
|
Increasingly, software should dynamically adapt its behavior at run-time in response to changing conditions in the supporting computing and communication infrastructure, and in the surrounding physical environment. In order for an adaptive program to be trusted, it is important to have mechanisms to ensure that the program functions correctly during and after adaptations. Adaptive programs are generally more difficult to specify, verify, and validate due to their high complexity. Particularly, when involving multi-threaded adaptations, the program behavior is the result of the collaborative behavior of multiple threads and software components. This paper introduces an approach to create formal models for the behavior of adaptive programs. Our approach separates the adaptation behavior and non-adaptive behavior specifications of adaptive programs, making the models easier to specify and more amenable to automated analysis and visual inspection. We introduce a process to construct adaptation models, automatically generate adaptive programs from the models, and verify and validate the models. We illustrate our approach through the development of an adaptive GSM-oriented audio streaming protocol for a mobile computing application.
|
|
|
SESSION: Research papers: development with UML |
|
|
|
|
Instant consistency checking for the UML |
| |
Alexander Egyed
|
|
Pages: 381-390 |
|
doi>10.1145/1134285.1134339 |
|
Full text: PDF
|
|
Inconsistencies in design models should be detected immediately to save the engineer from unnecessary rework. Yet, tools are not capable of keeping up with the engineers' rate of model changes. This paper presents an approach for quickly, correctly, and automatically deciding which consistency rules to evaluate when a model changes. The approach does not require consistency rules with special annotations. Instead, it treats consistency rules as black-box entities and observes their behavior during evaluation to identify which model elements they access. The UML/Analyzer tool, integrated with IBM Rational Rose™, fully implements this approach. It was used on 29 models with tens of thousands of model elements, evaluating 24 types of consistency rules over 140,000 times. We found that the approach provided design feedback correctly and required, on average, less than 9 ms of evaluation time per model change, with a worst case of less than 2 seconds, at the expense of a linearly increasing memory need. This is a significant improvement over the state of the art.
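The underlying idea, observing which model elements each black-box consistency rule reads so that only the affected rules are re-evaluated after a change, can be sketched roughly as follows; the model representation, rule encoding, and element names are invented for illustration and do not reflect the UML/Analyzer implementation.

from collections import defaultdict

# Hypothetical sketch: record which model elements each black-box consistency
# rule reads while it runs, then re-evaluate only the affected rules on change.
class Model:
    def __init__(self, elements):
        self._elements = elements   # element id -> property dict
        self._accessed = None       # set collected while a rule is evaluated

    def get(self, element_id):
        if self._accessed is not None:
            self._accessed.add(element_id)   # observe the access
        return self._elements[element_id]

class IncrementalChecker:
    def __init__(self, model, rules):
        self.model, self.rules = model, rules   # rules: name -> callable(model) -> bool
        self.scope = defaultdict(set)           # element id -> names of rules reading it
        for name in rules:
            self.evaluate(name)

    def evaluate(self, name):
        self.model._accessed = set()
        ok = self.rules[name](self.model)
        for element_id in self.model._accessed:
            self.scope[element_id].add(name)
        self.model._accessed = None
        print(name, "consistent" if ok else "INCONSISTENT")

    def on_change(self, element_id):
        for name in list(self.scope.get(element_id, ())):   # only affected rules
            self.evaluate(name)

# Toy rule: a message must name an operation of the receiving class.
model = Model({"msg1": {"name": "save"}, "classA": {"operations": ["save", "load"]}})
rules = {"msg-has-operation":
         lambda m: m.get("msg1")["name"] in m.get("classA")["operations"]}
checker = IncrementalChecker(model, rules)
model._elements["classA"]["operations"] = ["load"]   # the model changes
checker.on_change("classA")                          # only this rule is re-run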
|
|
|
Traffic-aware stress testing of distributed systems based on UML models |
| |
Vahid Garousi,
Lionel C. Briand,
Yvan Labiche
|
|
Pages: 391-400 |
|
doi>10.1145/1134285.1134340 |
|
Full text: PDF
|
|
A stress test methodology aimed at increasing chances of discovering faults related to network traffic in distributed systems is presented. The technique uses the UML 2.0 model of the distributed system under test, augmented with timing information, and is based on an analysis of the control flow in sequence diagrams. It yields stress test requirements that are made of specific control flow paths along with time values indicating when to trigger them. Different variants of our stress testing technique already exist (they stress different aspects of a distributed system) and we focus here on one variant that is designed to identify and to stress test the system at the instant when data traffic on a network is maximal. Using a real-world distributed system specification, we design and implement a prototype distributed system and describe, for that particular system, how the stress test cases are derived and executed using our methodology. The stress test results indicate that the technique is significantly more effective at detecting network traffic-related faults when compared to test cases based on an operational profile.
|
|
|
Effects of defects in UML models: an experimental investigation |
| |
Christian F. J. Lange,
Michel R. V. Chaudron
|
|
Pages: 401-411 |
|
doi>10.1145/1134285.1134341 |
|
Full text: PDF
|
|
The Unified Modeling Language (UML) is the de facto standard for designing and architecting software systems. UML offers a large number of diagram types that can be used with varying degrees of rigour. As a result, UML models may contain consistency defects. Previous research has shown that industrial UML models used as the basis for implementation and maintenance contain large numbers of defects. This study investigates to what extent implementers detect defects and to what extent defects cause different interpretations by different readers. We performed two controlled experiments with a large group of students (111) and a group of industrial practitioners (48). The experiments' results show that defects often remain undetected and cause misinterpretations. We present a classification of defect types based on a ranking of detection rate and risk of misinterpretation. Additionally, we observed effects of using domain knowledge to compensate for defects. The results are generalizable to industrial UML users and can be used for improving quality assurance techniques for UML-based development.
|
|
|
SESSION: Experience papers: risk analysis |
|
|
|
|
Session details: Experience papers: risk analysis |
| |
Forrest Shull
|
|
doi>10.1145/3245449 |
|
Full text: PDF
|
|
|
|
|
Experiences and results from initiating field defect prediction and product test prioritization efforts at ABB Inc. |
| |
Paul Luo Li,
James Herbsleb,
Mary Shaw,
Brian Robinson
|
|
Pages: 413-422 |
|
doi>10.1145/1134285.1134343 |
|
Full text: PDF
|
|
Quantitatively-based risk management can reduce the risks associated with field defects for both software producers and software consumers. In this paper, we report experiences and results from initiating risk-management activities at a large systems development organization. The initiated activities aim to improve product testing (system/integration testing), to improve maintenance resource allocation, and to plan for future process improvements. The experiences we report address practical issues not commonly addressed in research studies: how to select an appropriate modeling method for product testing prioritization and process improvement planning, how to evaluate accuracy of predictions across multiple releases in time, and how to conduct analysis with incomplete information. In addition, we report initial empirical results for two systems with 13 and 15 releases. We present prioritization of configurations to guide product testing, field defect predictions within the first year of deployment to aid maintenance resource allocation, and important predictors across both systems to guide process improvement planning. Our results and experiences are steps towards quantitatively-based risk management.
|
|
|
A risk-driven method for eXtreme programming release planning |
| |
Mingshu Li,
Meng Huang,
Fengdi Shu,
Juan Li
|
|
Pages: 423-430 |
|
doi>10.1145/1134285.1134344 |
|
Full text: PDF
|
|
XP (eXtreme Programming) has become popular for IID (Iterative and Incremental Development). It is suitable for small teams, lightweight projects and vague or volatile requirements. However, some challenges are left to developers when they want to practise XP. A critical one is constructing the release plan and negotiating it with customers. In this paper, we propose a risk-driven method for XP release planning. It has been applied in a case study and the results show that the method is feasible and effective. XP practitioners can follow it to decide on a suitable release plan and control the development process.
|
|
|
Assessing COTS integration risk using cost estimation inputs |
| |
Ye Yang,
Barry Boehm,
Betsy Clark
|
|
Pages: 431-438 |
|
doi>10.1145/1134285.1134345 |
|
Full text: PDF
|
|
Most risk analysis tools and techniques require the user to enter a good deal of information before they can provide useful diagnoses. In this paper, we describe an approach that enables the user to obtain a COTS glue code integration risk analysis with no inputs other than the set of glue code cost drivers the user submits to obtain a glue code integration effort estimate with the COnstructive COTS integration cost estimation (COCOTS) tool. The risk assessment approach is built on a knowledge base with 24 risk identification rules and a 3-level risk probability weighting scheme obtained from an expert Delphi analysis. Each risk rule is defined as a critical combination of two COCOTS cost drivers that may cause an undesired outcome if both are rated at their worst-case ratings. The 3-level nonlinear risk weighting scheme represents the relative probability of the risk occurring with respect to the individual cost driver ratings from the input. Further, to determine the relative risk impact, we use the productivity range of each cost driver in the risky combination to reflect the cost consequence of the risk occurring. We also developed a prototype called COCOTS Risk Analyzer to automate our risk assessment method. The evaluation of our approach shows that it does an effective job of estimating the relative risk levels of both small USC e-services and large industry COTS-based applications.
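A loose sketch of such a driver-pair rule scheme is shown below; the driver names, rating scale, 3-level weights, and productivity ranges are all invented and are not the actual COCOTS values.

# Hypothetical sketch of a driver-pair risk rule scheme. Driver names, the
# rating scale, weights, and productivity ranges are invented, not COCOTS data.
SCALE = ["very_low", "low", "nominal", "high", "very_high"]

DRIVERS = {   # worst-case rating and productivity range per cost driver
    "integrator_experience": {"worst": "very_low", "prod_range": 1.52},
    "product_maturity":      {"worst": "very_low", "prod_range": 1.48},
    "interface_complexity":  {"worst": "very_high", "prod_range": 1.60},
}

RULES = [   # each rule is a critical combination of two cost drivers
    ("integrator_experience", "product_maturity"),
    ("integrator_experience", "interface_complexity"),
]

PROB_WEIGHT = {1: 1, 2: 2, 3: 4}   # 3-level nonlinear probability weights

def level(driver, rating):
    """3 = at the worst-case rating, 2 = one step away, 1 = otherwise."""
    distance = abs(SCALE.index(rating) - SCALE.index(DRIVERS[driver]["worst"]))
    return 3 if distance == 0 else (2 if distance == 1 else 1)

def project_risk(ratings):
    total = 0.0
    for a, b in RULES:
        probability = PROB_WEIGHT[min(level(a, ratings[a]), level(b, ratings[b]))]
        impact = DRIVERS[a]["prod_range"] * DRIVERS[b]["prod_range"]   # cost consequence
        total += probability * impact
    return total

print(project_risk({"integrator_experience": "very_low",
                    "product_maturity": "low",
                    "interface_complexity": "very_high"}))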
|
|
|
SESSION: Experience papers: using metrics |
|
|
|
|
Lessons learnt from the analysis of large-scale corporate databases |
| |
Barbara Kitchenham,
Cat Kutay,
Ross Jeffery,
Colin Connaughton
|
|
Pages: 439-444 |
|
doi>10.1145/1134285.1134347 |
|
Full text: PDF
|
|
This paper presents the lessons learnt during the analysis of the corporate databases developed by IBM Global Services (Australia). IBM is rated at CMM level 5. Following CMM level 4 and above practices, IBM designed several software metrics databases with associated data collection and reporting systems to manage its corporate goals. However, IBM quality staff believed the data were not as useful as they had expected. NICTA staff undertook a review of IBM's statistical process control procedures and found problems with the databases, mainly due to a lack of links between the different data tables. Such problems might be avoided by using the M3P variant of the GQM paradigm to define a hierarchy of goals, with project goals at the lowest level, then process goals, and corporate goals at the highest level. We propose using E-R models to identify problems with existing databases and to design databases once goals have been defined.
|
|
|
Metrics for model driven requirements development |
| |
Brian Berenbach,
Gail Borotto
|
|
Pages: 445-451 |
|
doi>10.1145/1134285.1134348 |
|
Full text: PDF
|
|
The CMMI defines two process areas associated with requirements elicitation: Requirements Development (RD) and Requirements Management (REQM). The Measurement and Analysis process area (MA) requires measurements and quantitative objectives for RD and REQM, but nowhere does it state what those measurements are. Furthermore, in order to extract measurements and evaluate them, a process must enable or otherwise support the taking of measurements. It is especially difficult to do this during requirements development, as it is generally viewed as a writing activity that does not lend itself to quantitative measurements. This paper describes a CMMI-compliant formal approach to measurement and analysis during a model-driven requirements development process. It presents a set of metrics that were used successfully on several Siemens projects, describing team dynamics, project size and staffing, how the metrics were captured and used, and lessons learned.
|
|
|
Mining metrics to predict component failures |
| |
Nachiappan Nagappan,
Thomas Ball,
Andreas Zeller
|
|
Pages: 452-461 |
|
doi>10.1145/1134285.1134349 |
|
Full text: PDF
|
|
What is it that makes software fail? In an empirical study of the post-release defect history of five Microsoft software systems, we found that failure-prone software entities are statistically correlated with code complexity measures. However, there is no single set of complexity metrics that could act as a universally best defect predictor. Using principal component analysis on the code metrics, we built regression models that accurately predict the likelihood of post-release defects for new entities. The approach can easily be generalized to arbitrary projects; in particular, predictors obtained from one project can also be significant for new, similar projects.
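A minimal sketch of the general recipe, principal component analysis over code metrics followed by a regression model for defect proneness, assuming a PCA-plus-logistic-regression pipeline and synthetic data rather than the Microsoft data used in the study:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical sketch: PCA over code complexity metrics, then a regression
# model estimating post-release defect proneness. All data is synthetic.
rng = np.random.default_rng(0)
# Rows are modules; columns are metrics such as cyclomatic complexity,
# fan-in, lines of code, and parameter count (values are made up).
metrics = rng.normal(size=(200, 4)) * [10, 3, 500, 2] + [15, 5, 800, 3]
defective = (metrics[:, 0] + 0.01 * metrics[:, 2] + rng.normal(size=200) > 25).astype(int)

model = make_pipeline(PCA(n_components=2), LogisticRegression(max_iter=1000))
model.fit(metrics, defective)

new_module = [[42.0, 7.0, 2400.0, 6.0]]        # metric values of a new entity
print(model.predict_proba(new_module)[0][1])   # estimated defect likelihood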
|
|
|
SESSION: Experience papers: experiences with open source and legacy systems |
|
|
|
|
Experiences with place lab: an open source toolkit for location-aware computing |
| |
Timothy Sohn,
William G. Griswold,
James Scott,
Anthony LaMarca,
Yatin Chawathe,
Ian Smith,
Mike Chen
|
|
Pages: 462-471 |
|
doi>10.1145/1134285.1134351 |
|
Full text: PDF
|
|
Location-based computing (LBC) is becoming increasingly important in both industry and academia. A key challenge is the pervasive deployment of LBC technologies; to be effective they must run on a wide variety of client platforms, including laptops, PDAs, and mobile phones, so that location data can be acquired anywhere and accessed by any application. Moreover, as a nascent area, LBC is experiencing rapid innovation in sensing technologies, the positioning algorithms themselves, and the applications they support. Lastly, as a newcomer, LBC must integrate with existing communications and application technologies, including web browsers and location data interchange standards. This paper describes our experience in developing the Place Lab architecture, a widely used first-generation open source toolkit for client-side location sensing. Using a layered, pattern-based architecture, it supports modular development in any dimension of LBC, enabling the field to move forward more rapidly as these innovations are shared with the community as pluggable components. Our experience shows the benefits of domain-specific abstractions, and how we overcame high-level language constraints to support a wide array of platforms in this emerging space. We also describe our experience in re-engineering parts of the architecture based on the needs of the user community, including insights on software licensing issues.
|
|
|
A case study of a corporate open source development model |
| |
Vijay K. Gurbani,
Anita Garvert,
James D. Herbsleb
|
|
Pages: 472-481 |
|
doi>10.1145/1134285.1134352 |
|
Full text: PDF
|
|
Open source practices and tools have proven to be highly effective for overcoming the many problems of geographically distributed software development. We know relatively little, however, about the range of settings in which they work. In particular, can corporations use the open source development model effectively for software projects inside the corporate domain? Or are these tools and practices incompatible with development environments, management practices, and market-driven schedule and feature decisions typical of a commercial software house? We present a case study of open source software development methodology adopted by a significant commercial software project in the telecommunications domain. We extract a number of lessons learned from the experience, and identify open research questions.
|
|
|
Redesigning legacy applications for the web with UWAT+: a case study |
| |
Damiano Distante,
Gerardo Canfora,
Scott Tilley,
Shihong Huang
|
|
Pages: 482-491 |
|
doi>10.1145/1134285.1134353 |
|
Full text: PDF
|
|
This paper reports on a case study of redesigning a legacy application for the Web using the Ubiquitous Web Applications Design Framework with an extended version of its Transaction Design Model (UWAT+). Web application design methodologies hold the promise of engineering high-quality and long-lived Web systems and rich Internet applications. However, many such techniques focus solely on green-field development, and do not properly address the situation of leveraging the value locked in legacy systems. The redesign process supported by UWAT+ holistically blends design recovery technologies for capturing the know-how embedded in the legacy application with forward design methods particularly well suited for Web-based systems. The case study highlights some of the benefits of using UWAT+ in this context, as well as identifying possible areas for improvement in the redesign process and opportunities for tool automation to support it.
|
|
|
SESSION: Experience papers: software development practices |
|
|
|
|
Maintaining mental models: a study of developer work habits |
| |
Thomas D. LaToza,
Gina Venolia,
Robert DeLine
|
|
Pages: 492-501 |
|
doi>10.1145/1134285.1134355 |
|
Full text: PDF
|
|
To understand developers' typical tools, activities, and practices and their satisfaction with each, we conducted two surveys and eleven interviews. We found that many problems arose because developers were forced to invest great effort recovering implicit knowledge by exploring code and interrupting teammates and this knowledge was only saved in their memory. Contrary to expectations that email and IM prevent expensive task switches caused by face-to-face interruptions, we found that face-to-face communication enjoys many advantages. Contrary to expectations that documentation makes understanding design rationale easy, we found that current design documents are inadequate. Contrary to expectations that code duplication involves the copy and paste of code snippets, developers reported several types of duplication. We use data to characterize these and other problems and draw implications for the design of tools for their solution.
|
|
|
Applying the Value/Petri process to ERP software development in China |
| |
LiGuo Huang,
Barry Boehm,
Hao Hu,
Jidong Ge,
Jian Lü,
Cheng Qian
|
|
Pages: 502-511 |
|
doi>10.1145/1134285.1134356 |
|
Full text: PDF
|
|
Commercial organizations increasingly need software processes that are sensitive to business value, quick to apply, and capable of early analysis for subprocess consistency and compatibility. This paper presents experience in applying a lightweight synthesis of a Value-Based Software Quality Achievement (VBSQA) process and an Object-Petri-Net-based process model (called VBSQA-OPN) to achieve a manager-satisfactory process for software quality achievement in an ongoing ERP software project in China. The results confirmed that 1) the application of value-based approaches was inherently better than the value-neutral approaches adopted by most ERP software projects; 2) the VBSQA-OPN model provided project managers with a synchronization and stabilization framework for process activities, success-critical stakeholders and their value propositions; 3) process visualization and simulation tools significantly increased management visibility and controllability for the success of the software project.
|
|
|
Applying regression test selection for COTS-based applications |
| |
Jiang Zheng,
Brian Robinson,
Laurie Williams,
Karen Smiley
|
|
Pages: 512-522 |
|
doi>10.1145/1134285.1134357 |
|
Full text: PDF
|
|
ABB incorporates a variety of commercial-off-the-shelf (COTS) components in its products. When new releases of these components are made available for integration and testing, source code is often not provided. Various regression test selection processes have been developed and have been shown to be cost effective. However, the majority of these test selection techniques rely on access to source code for change identification. In this paper we present the application of the lightweight Integrated Black-box Approach for Component Change Identification (I-BACCI) Version 3 process, which selects regression tests for applications that use COTS components. Two case studies, examining a total of nine new component releases, were conducted at ABB on products written in C/C++ to determine the effectiveness of I-BACCI. The results of the case studies indicate this process can reduce the required number of regression tests by at least 70% without sacrificing regression fault exposure.
|
|
|
SESSION: Far east experience papers: development technique |
|
|
|
|
Session details: Far east experience papers: development technique |
| |
K. Kishida
|
|
doi>10.1145/3245450 |
|
Full text: PDF
|
|
|
|
|
Reengineering standalone C++ legacy systems into the J2EE partition distributed environment |
| |
Xinyu Wang,
Jianling Sun,
Xiaohu Yang,
Chao Huang,
Zhijun He,
Srinivasa R. Maddineni
|
|
Pages: 525-533 |
|
doi>10.1145/1134285.1134359 |
|
Full text: PDF
|
|
Many enterprise systems are developed in C++ and most of them are standalone. Because standalone software cannot keep up with the new market environment, reengineering standalone legacy systems into a distributed environment has become a critical problem. Some methods have been proposed on related topics such as design recovery, identification of components, modeling the interfaces of components, and component allocation. Up to now, there has been no reengineering process targeting the J2EE partition distributed environment, which offers distinct advantages in horizontal scalability and performance over ordinary distributed solutions. This paper presents a new process to reengineer C++ legacy systems into the J2EE partition distributed environment. The process consists of four steps: translation of C++ into Java code; extraction of components using clustering technology; modeling of component interfaces; and partitioning of the components in the J2EE partition distributed environment. It has been applied to a large equity-trading legacy system and has proved to be successful.
|
|
|
UML-based service robot software development: a case study |
| |
Minseong Kim,
Suntae Kim,
Sooyong Park,
Mun-Taek Choi,
Munsang Kim,
Hassan Gomaa
|
|
Pages: 534-543 |
|
doi>10.1145/1134285.1134360 |
|
Full text: PDF
|
|
The research field of Intelligent Service Robots, which has become more and more popular over the last years, covers a wide range of applications from climbing machines for cleaning large storefronts to robotic assistance for disabled or elderly people. When developing service robot software, it is a challenging problem to design the robot architecture by carefully considering user needs and requirements, implement robot application components based on the architecture, and integrate these components in a systematic and comprehensive way for maintainability and reusability. Furthermore, it becomes more difficult to communicate among development teams and with others when many engineers from different teams participate in developing the service robot. To solve these problems, we applied the COMET design method, which uses the industry-standard UML notation, to developing the software of an intelligent service robot for the elderly, called T-Rot, under development at Center for Intelligent Robotics (CIR). In this paper, we discuss our experiences with the project in which we successfully addressed these problems and developed the autonomous navigation system of the robot with the COMET/UML method.
|
|
|
Analysis of the interaction between practices for introducing XP effectively |
| |
Osamu Kobayashi,
Mitsuyoshi Kawabata,
Makoto Sakai,
Eddy Parkinson
|
|
Pages: 544-550 |
|
doi>10.1145/1134285.1134361 |
|
Full text: PDF
|
|
In this paper, we discuss interactions between XP (eXtreme Programming) practices. We discuss 2 case studies of introducing XP practices selectively from the 13 practices which are defined in XP, and we analyze how to select practices. Our analysis is based on interviews with developers. While it is difficult to introduce all the XP practices at once, our knowledge makes it easier to determine more effective combinations of practices.
|
|
|
SESSION: Far east experience papers: evaluation |
|
|
|
|
Experiments on quality evaluation of embedded software in Japan robot software design contest |
| |
Hironori Washizaki,
Yasuhide Kobayashi,
Hiroyuki Watanabe,
Eiji Nakajima,
Yuji Hagiwara,
Kenji Hiranabe,
Kazuya Fukuda
|
|
Pages: 551-560 |
|
doi>10.1145/1134285.1134363 |
|
Full text: PDF
|
|
As a practical opportunity for educating young Japanese developers in the field of embedded software development, a software design contest was held involving the design of software to automatically control a line-trace robot and the conduct of running performance tests. In this paper, we give the results of the contest from the viewpoint of software quality evaluation. We created a framework for evaluating software quality that integrates design model quality and final system performance, and conducted an analysis using the framework. The analysis found that quantitative measurement of the structural complexity of the design models bears a strong relationship to the qualitative evaluation of the designs conducted by the judges. It also found no strong correlation between the design model quality evaluated by the judges and the final system performance. For embedded software development, it is particularly important to estimate and verify reliability and performance in the early stages, using the model. Based on the analysis results, we consider possible remedies with respect to the models submitted, the evaluation methods used, and the contest specifications. In order to adequately measure several non-functional quality characteristics, including performance, on the model, it is necessary to improve the way robot software is developed (for example, by applying model-driven development) and to reexamine the evaluation methods.
|
|
|
Procurement of enterprise resource planning systems: experiences with some Hong Kong companies |
| |
Pak-Lok Poon,
Yuen Tak Yu
|
|
Pages: 561-568 |
|
doi>10.1145/1134285.1134364 |
|
Full text: PDF
|
|
Many cases of adoption of Enterprise Resource Planning (ERP) systems have been reported in the literature. Some of the adopted ERP systems fail to satisfy the customer's requirements, despite the high spending and substantial efforts that have been put into the adoption exercise. This is undoubtedly unsatisfactory. A way to avoid this problem is to adopt a well planned, managed, and controlled ERP procurement process. This paper describes our studies of three Chinese companies in Hong Kong which have adopted ERP systems. We report the experience of these companies, and discuss how the Chinese culture might have shaped the procurement practices in their ERP adoption exercises.
|
|
|
Detecting low usability web pages using quantitative data of users' behavior |
| |
Noboru Nakamichi,
Kazuyuki Shima,
Makoto Sakai,
Ken-ichi Matsumoto
|
|
Pages: 569-576 |
|
doi>10.1145/1134285.1134365 |
|
Full text: PDF
|
|
The purpose of this research is to detect low usability web pages from the behavior of users, such as browsing time, mouse movement and eye movement. We conducted an experiment to investigate the relation between quantitative data on users' viewing behavior and web usability evaluations by subjects. We analyzed the data to detect low usability web pages using discriminant analysis. 94.4% of the low usability web pages (17 of 18 pages) were detectable from the moving speed of gazing points and the amount of mouse wheel rolling. Moreover, this detection reduced the number of web pages that needed to be evaluated by about half (89 of 192 pages, 46%).
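A small sketch of this kind of discriminant analysis, assuming linear discriminant analysis over two behavioral measures; the feature values and labels are fabricated and not the experimental data:

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical sketch: linear discriminant analysis over two behavioral
# measures per page (gazing-point speed, mouse wheel rolling). Data invented.
features = [[120, 35], [135, 40], [90, 10], [85, 8], [140, 50], [95, 12]]
low_usability = [1, 1, 0, 0, 1, 0]   # 1 = rated low usability by subjects

lda = LinearDiscriminantAnalysis().fit(features, low_usability)
print(lda.predict([[130, 33], [88, 9]]))   # pages flagged for manual evaluation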
|
|
|
SESSION: Far east experience papers: software process |
|
|
|
|
Experiences of applying SPC techniques to software development processes |
| |
Mutsumi Komuro
|
|
Pages: 577-584 |
|
doi>10.1145/1134285.1134367 |
|
Full text: PDF
|
|
Experiences of applying SPC techniques to software development processes are described. Several real examples of applying SPC at Hitachi Software are given, including the measures, control charts, and analysis judgments used. Characteristics of software development processes, their influence on SPC, and lessons learned when applying SPC to software processes are described. In particular, the importance of self-directed and proactive improvement is discussed.
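For readers unfamiliar with the SPC machinery mentioned, a minimal individuals (XmR) control-chart computation over a process measure might look like the sketch below; the measure (defects per review hour), data values, and chart choice are illustrative and not taken from the paper.

# Hypothetical sketch: control limits for an individuals (XmR) chart over a
# process measure such as defects found per review hour. Values are invented.
values = [0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 2.4, 0.9, 1.2, 1.0]

moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
center = sum(values) / len(values)
mr_bar = sum(moving_ranges) / len(moving_ranges)

ucl = center + 2.66 * mr_bar          # 2.66 is the standard XmR chart constant
lcl = max(0.0, center - 2.66 * mr_bar)

for i, v in enumerate(values, start=1):
    note = "  <- investigate" if not lcl <= v <= ucl else ""
    print(f"point {i}: {v:.2f}{note}")
print(f"center = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")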
|
|
|
BSR: a statistic-based approach for establishing and refining software process performance baseline |
| |
Qing Wang,
Nan Jiang,
Lang Gou,
Xia Liu,
Mingshu Li,
Yongji Wang
|
|
Pages: 585-594 |
|
doi>10.1145/1134285.1134368 |
|
Full text: PDF
|
|
High-level process management is quantitative management. The Process Performance Baseline (PPB) of a process or subprocess under statistical management is its most important concept and is the basis of process control and improvement. Existing methods for establishing a process baseline are too coarse-grained or have limitations, which leads to inaccurate or ineffective quantitative management. In this paper, we propose an approach called BSR (Baseline-Statistic-Refinement) for establishing and refining the software process performance baseline, and present experience results that validate its effectiveness for quantitative process management.
|
|
|
Practical approach to development of SPI activities in a large organization: Toshiba's SPI history since 2000 |
| |
Hideto Ogasawara,
Takashi Ishikawa,
Tetsuro Moriya
|
|
Pages: 595-599 |
|
doi>10.1145/1134285.1134369 |
|
Full text: PDF
|
|
For the effective promotion of software process improvement (SPI) activities in a large-scale organization, it is necessary to establish an organizational structure and a deployment method for promotion and to develop training courses, support tools, and other materials. Even if an organizational promotion system is established, the SPI activities of each development department cannot be promoted effectively without an SPI community. To promote SPI activities throughout the TOSHIBA group, we organized a Corporate Software Engineering Process Group in April 2000. We have also focused on establishing an SPI community while promoting SPI activities in each development department. Our fundamental SPI operating policy is "bottom-up". This paper discusses the problems encountered in the promotion of SPI activities and presents solutions to those problems. The actual results obtained show that the framework and solutions we developed can be used to effectively promote SPI activities.
|
|
|
POSTER SESSION: Far east experience papers: posters |
|
|
|
|
Estimation of project success using Bayesian classifier |
| |
Seiya Abe,
Osamu Mizuno,
Tohru Kikuno,
Nahomi Kikuchi,
Masayuki Hirayama
|
|
Pages: 600-603 |
|
doi>10.1145/1134285.1134371 |
|
Full text: PDF
|
|
Software projects are considered successful if the cost and duration are within the estimates and the quality is satisfactory. To attain project success, project management that estimates the final status of the project must be incorporated. In this paper, we consider estimating the final status (that is, successful or unsuccessful) of a project by applying a Bayesian classifier to metrics data collected from the project. In order to attain a high estimation accuracy rate, only an appropriate set of metrics must be selected. Here we consider two selection methods: selection by experts and selection by a statistical test. We then conducted an experiment using data from 28 projects and 29 metrics in an organization of a certain company. The results showed that the test-based method gave higher accuracy rates than the expert-based method, and that a Bayesian classifier with the test-based method is effective for estimating project success.
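A toy sketch of such a classifier, assuming a Gaussian naive Bayes variant over a handful of selected metrics; the metric names, values, and labels are invented:

from sklearn.naive_bayes import GaussianNB

# Hypothetical sketch: Gaussian naive Bayes over a few selected project
# metrics to estimate final project status. Metric names and data are invented.
# Columns: requirements volatility (%), staff turnover (%), review coverage (%).
X = [[ 5,  3, 90], [30, 15, 40], [10,  5, 80], [40, 20, 30],
     [ 8,  2, 85], [35, 18, 35], [12,  6, 75], [45, 25, 20]]
y = [1, 0, 1, 0, 1, 0, 1, 0]   # 1 = project finished successfully, 0 = not

classifier = GaussianNB().fit(X, y)
ongoing = [[20, 10, 60]]       # metrics collected so far for an ongoing project
print(classifier.predict(ongoing), classifier.predict_proba(ongoing))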
|
|
|
Efficiency analysis of model-based review in actual software design |
| |
Hitoshi Furusawa,
Eun-Hye Choi,
Hiroshi Watanabe
|
|
Pages: 604-607 |
|
doi>10.1145/1134285.1134372 |
|
Full text: PDF
|
|
In this paper, we quantitatively analyze the efficiency of the Model-Based Review (MBR) method in an actual software design from two points of view: cost and reviewability. The MBR method is a modeling procedure for reviewing preliminary design specifications of web-based applications. We collected process data while applying both the MBR method and an ordinary review to the preliminary design of a web-based library system under development. Analyzing the collected process data, we quantitatively compare the efficiency of the MBR method with that of the ordinary review. As a result of this comparative analysis, we show that the MBR method is superior to the ordinary review throughout the experimental design process in terms of not only reviewability but also cost.
|
|
|
Research journey towards industrial application of reuse technique |
| |
Stan Jarzabek,
Ulf Pettersson
|
|
Pages: 608-611 |
|
doi>10.1145/1134285.1134373 |
|
Full text: PDF
|
|
Component-based reuse in the mission-critical Command and Control system domain was the starting point for a long-lasting research collaboration between the National University of Singapore (NUS) and ST Electronics Pte. Ltd. (STEE). STEE industrial projects, as well as NUS lab studies, revealed limitations of conventional architecture-centric, component-based reuse in the area of generic design to unify similarity patterns (e.g., similar classes, components or architectural patterns) commonly found in software. Further research showed that meta-level extensions to conventional techniques could strengthen their generic design capabilities, considerably improving the effectiveness of reuse solutions and increasing productivity gains due to reuse. These experiences led to the development of a "mixed strategy" approach based on the synergistic application of the meta-level generative programming technique XVCL together with conventional programming techniques. In this paper, we describe a university-industry collaboration that proved beneficial for both parties: STEE advanced its reuse practice via the application of XVCL in several software product line projects, while early inputs from STEE helped the NUS team validate and refine XVCL reuse methods and expand into new research directions. We describe a sequence of projects that led to the successful application of XVCL in industrial projects, the experiences from those projects, and their significance both for industrial practice and for understanding the principles of flexible software, i.e., software that can be easily changed and adapted to various reuse contexts.
|
|
|
A series of development methodologies for a variety of systems in Korea |
| |
Jihyun Lee,
Jin-Sam Kim,
Jin-Hee Cho
|
|
Pages: 612-615 |
|
doi>10.1145/1134285.1134374 |
|
Full text: PDF
|
|
To meet domestic development conditions, a series of development methodologies, collectively abbreviated MaRMI (Magic and Robust Methodology Integrated), has been created in South Korea. MaRMI comprises four different methodologies for developing information, object-oriented, component-based, and embedded systems. In this paper, the authors describe the features and structure of each methodology and report on our efforts toward methodology transfer.
|
|
|
Effects of software industry structure on a research framework for empirical software engineering |
| |
Yoshiki Mitani,
Nahomi Kikuchi,
Tomoko Matsumura,
Satoshi Iwamura,
Yoshiki Higo,
Katsuro Inoue,
Mike Barker,
Ken-ichi Matsumoto
|
|
Pages: 616-619 |
|
doi>10.1145/1134285.1134375 |
|
Full text: PDF
|
|
The authors describe a new research framework for applying empirical software engineering methods in industrial practice and the accomplishments in using it. The selected target for applying the framework is a governmentally funded software development project involving multiple vendors. This project involved in-process project data measurement in real time, data sharing with industry and academia, data analysis, and feedback to the project members. Today the project is in the system integration process. This paper shows the value of this research framework and describes issues of empirical data sharing between industry and academia that have emerged while using it. The experiment raised two major issues. One is the necessity of a new research framework for project measurement called the "Macro Measurement Tool". The other is the effect of the software industry structure on this framework.
|
|
|
Experience from applying RIM to educational ERP development |
| |
Autcha Mutchalintungkul,
Juthamas Oonhawat,
Kittiphong Pholpipatanaphong,
Daricha Sutivong,
Nakornthip Prompoon
|
|
Pages: 620-623 |
|
doi>10.1145/1134285.1134376 |
|
Full text: PDF
|
|
Developing a complex system requires partitioning the target system into several subsystems. It is generally difficult to define each subsystem's scope and functional requirements as well as data dependencies among subsystems. Requirements Integration Model (RIM), which consists of a workflow model and a work procedure, can provide specific guidelines and techniques to increase quality of requirements obtained under a time constraint. We have applied the technique to the implementation of the educational system for the Department of Computer Engineering at Chulalongkorn University, Thailand. Experiences from this case study have shown that the model can assist developers to clearly specify functional requirements for each subsystem and to identify data dependencies among subsystems.
|
|
|
Critical factors in establishing and maintaining trust in software outsourcing relationships |
| |
Phong Thanh Nguyen,
Muhammad Ali Babar,
June M. Verner
|
|
Pages: 624-627 |
|
doi>10.1145/1134285.1134377 |
|
Full text: PDF
|
|
Trust is considered one of the most important factors for successfully managing software outsourcing relationships. However, there is a lack of research into understanding the factors that are considered important in establishing and maintaining trust between clients and vendors. The goal of this research is to gain an understanding of software outsourcing vendors' perceptions of the importance of factors that are critical to the establishment and maintenance of trust in software outsourcing projects in Vietnam. We used a multiple case study design to guide our research and in-depth interviews to collect qualitative data from 12 Vietnamese software development practitioners drawn from 8 companies that have been developing software for Far Eastern, European, and American clients. Vendor companies identified that cultural understanding, credibility, capabilities, and personal visits are important factors in gaining the initial trust of a client, while cultural understanding, communication strategies, contract conformance, and timely delivery are vital factors in maintaining that trust.
|
|
|
Software practices in five ASEAN countries: an exploratory study |
| |
Raymund Sison,
Stanislaw Jarzabek,
Ow Siew Hock,
Wanchai Rivepiboon,
Nguyen Nam Hai
|
|
Pages: 628-631 |
|
doi>10.1145/1134285.1134378 |
|
Full text: PDF
|
|
There is a lack of published studies on software development in Southeast Asia, which is fast becoming an IT outsourcing haven. This paper presents exploratory survey and case study results on software practices of some software firms in five ASEAN countries (Malaysia, Philippines, Singapore, Thailand and Vietnam), and provides directions for further research on software practices in the ASEAN/Southeast Asian region.
|
|
|
Overseas development for a major U.S. eCommerce website |
| |
Jiang Wu,
Sheldon Wang,
Christine Chau,
Lei Zeng,
Jinsong Lin
|
|
Pages: 632-635 |
|
doi>10.1145/1134285.1134379 |
|
Full text: PDF
|
|
In this paper, we describe our experience in establishing a software development center in China to support a major U.S. eCommerce website. We have established a set of development processes that fit our business need to develop a large number of relatively small projects and release them at very short intervals. Our processes allow us to accurately monitor project status, schedule resources, predict delivery and assess productivity. We believe that only with such systems in place can businesses ensure successful overseas software operations.
|
|
|
An experimental comparison of four test suite reduction techniques |
| |
Hao Zhong,
Lu Zhang,
Hong Mei
|
|
Pages: 636-640 |
|
doi>10.1145/1134285.1134380 |
|
Full text: PDF
|
|
As a test suite usually contains redundancy, a subset of the test suite (a representative set) may still satisfy all the test objectives. Because this redundancy increases the cost of executing the test suite, many test suite reduction techniques have been proposed, despite the NP-completeness of the general problem of finding the optimal representative set of a test suite. Some experimental studies of test suite reduction techniques have already been reported in the literature, but these studies still have shortcomings. This paper presents an experimental comparison of four typical test suite reduction techniques: the heuristic H, the heuristic GRE, a genetic algorithm-based approach and an ILP-based approach. The aim of the study is to provide a guideline for choosing the appropriate test suite reduction technique.
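None of the four compared techniques is reproduced here, but the general flavor of heuristic test-suite reduction, repeatedly picking the test case that satisfies the most still-uncovered objectives, can be sketched as follows with invented coverage data:

# Hypothetical sketch of greedy test-suite reduction (in the spirit of, but not
# identical to, the compared heuristics): repeatedly keep the test case that
# satisfies the most still-uncovered objectives. Coverage data is invented.
def reduce_suite(coverage):
    """coverage maps a test case name to the set of objectives it satisfies."""
    uncovered = set().union(*coverage.values())
    representative = []
    while uncovered:
        best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
        representative.append(best)
        uncovered -= coverage[best]
    return representative

suite = {
    "t1": {"r1", "r2"},
    "t2": {"r2", "r3", "r4"},
    "t3": {"r4"},
    "t4": {"r1", "r5"},
}
print(reduce_suite(suite))   # e.g. ['t2', 't4'] still covers r1 .. r5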
|
|
|
SESSION: Education papers: advanced topics in software engineering education |
|
|
|
|
Session details: Education papers: advanced topics in software engineering education |
| |
L. Williams
|
|
doi>10.1145/3245451 |
|
Full text: PDF
|
|
|
|
|
Engineering the software requirements of nonprofits: a service-learning approach |
| |
Shankar Venkatagiri
|
|
Pages: 643-648 |
|
doi>10.1145/1134285.1134382 |
|
Full text: PDF
|
|
This paper is a cross-study of service-learning projects executed by student groups in a 10-week course on software engineering. The principal benefits of service-learning are demonstrated by the groups in this setting. The course is structured to support the project activities; timely brainstorming and negotiation roleplay exercises help the teams arrive at pragmatic baselines with their clients. The study highlights overlaps in the software requirements of nonprofits. The paper apprises the reader of some common mistakes committed by the various stakeholders, some of which can eventually undermine the project's mission.
|
|
|
Using return on investment to compare agile and plan-driven practices in undergraduate group projects |
| |
P. J. Rundle,
R. G. Dewar
|
|
Pages: 649-654 |
|
doi>10.1145/1134285.1134383 |
|
Full text: PDF
|
|
In this paper we describe our experiences of introducing agile practices into undergraduate group work by comparing the results with those of more traditional plan-driven groups. When considering whether to adopt an agile or plan-driven project management strategy in a commercial context, Return On Investment (ROI) is an important factor. We have adapted the ROI model to our analysis to assess what effect the chosen development approach has on the outcome of the groups' projects. In our investigation we observed seven software teams as they implemented a business information system. Two groups adopted agile practices, including fortnightly iterative delivery; the other groups were controls. We found that being labelled agile did not necessarily imply that a group's practices were more agile. Also, it was unclear whether the so-called agile groups delivered a better ROI than their plan-driven counterparts.
|
|
|
So you want Brooks in your classroom? |
| |
Daniel Port,
David Klappholz
|
|
Pages: 655-660 |
|
doi>10.1145/1134285.1134384 |
|
Full text: PDF
|
|
Fred Brooks' seminal book, "The Mythical Man-Month" (MMM) is a firmly established classic in software engineering. Many of us feel compelled to use this work to help our students appreciate and put into practice the fundamental software engineering concepts contained between its covers. This often amounts to using "passive" lesson plans such as required readings followed by lectures and exams; these rarely fully satisfy our learning objectives. Rather, students often have mixed reactions to MMM with the result that it has little impact on their attitudes and practices, both in and out of the classroom. This paper outlines a more active approach to incorporating MMM into the classroom, one that we have refined over 6 years, at multiple universities and in both graduate and undergraduate courses. It includes learning objectives, a lesson plan, sample materials, an implementation discussion, and an evaluation of the approach's impact.
|
|
|
SESSION: Education papers: software engineering education fundamentals |
|
|
|
|
Software engineering for undergraduates |
| |
Nenad Stankovic
|
|
Pages: 661-666 |
|
doi>10.1145/1134285.1134386 |
|
Full text: PDF
|
|
Software engineering has evolved, over a short period of time, into a dominant and omnipresent industry. In education we have recognized the importance of both managerial and technical aspects, but often failed to organize them in a coherent course with a relevant, if not realistic, laboratory project. The problem is far-reaching, and should be dealt with accordingly. This paper presents our before and after findings, and elaborates on CPE 207, the new Software Engineering course that, in our opinion, helps to bridge the gap between university and industry.
|
|
|
Dimensions of software engineering course design |
| |
Mario Bernhart,
Thomas Grechenig,
Jennifer Hetzl,
Wolfgang Zuser
|
|
Pages: 667-672 |
|
doi>10.1145/1134285.1134387 |
|
Full text: PDF
|
|
A vast variety of topics relate to the field of Software Engineering. Some universities implement curricula covering all aspects of Software Engineering. A number of other courses cover detailed aspects, e.g. programming, usability and security issues, analysis, architecture, design, and quality. Other universities offer general curricula that consider Software Engineering in only a few courses or a single course. In each case, a course set has to be defined which directly relates to a specific student outcome. This work provides a method for categorizing and analyzing a course set within abstract dimensions for course design. We subsequently show the results of applying the dimensions to the course degree scheme in use. The course design dimensions can also be related to the student outcomes defined in SE2004 CC Section 3.2 [10].
|
|
|
Inculcating invariants in introductory courses |
| |
David Evans,
Michael Peck
|
|
Pages: 673-678 |
|
doi>10.1145/1134285.1134388 |
|
Full text: PDF
|
|
One goal of introductory software engineering courses is to motivate and instill good software engineering habits. Unfortunately, practical constraints on typical courses often lead to student experiences that are antithetical to that goal: instead of working in large teams, dealing with changing requirements, and maintaining programs over many years, courses generally involve students working alone or in small teams on short projects that end the first time the program works correctly on some selected input. Small projects tend to reinforce poor software engineering practices. Since the programs are small enough to manage cognitively in ad hoc ways, effort spent more precisely documenting assumptions seems wasteful. It is infeasible to carry out full industrial software development within the context of a typical university course. However, it is possible to simulate some aspects of safety-critical software engineering in an introductory software engineering course. This paper describes an approach that focuses on thinking about and precisely documenting invariants, and checking invariants using lightweight analysis tools. We describe how assignments were designed to emphasize the importance of invariants and to incorporate program analysis tools with typical software engineering material, and report on results from an experiment measuring students' understanding of program invariants.
|
|
|
SESSION: Education papers: distributed development |
|
|
|
|
Distributed development: an education perspective on the global studio project |
| |
Ita Richardson,
Allen E. Milewski,
Neel Mullick,
Patrick Keil
|
|
Pages: 679-684 |
|
doi>10.1145/1134285.1134390 |
|
Full text: PDF
|
|
The Global Studio Project integrated the work of Software Engineering students spread across four countries into a single project and represented, for most of the students, their first major "real-world" development experience. Interviews indicated that the major areas of learning were informal skills that included learning to establish and work effectively within a team, learning how to react quickly to frequent changes in requirements, architecture and organization, and learning to manage and optimize communications. Since all these skills require rapid reaction to unpredictable factors, we view them as improvisation and discuss the role of experiential education in facilitating improvisation.
|
|
|
Instructional design and assessment strategies for teaching global software development: a framework |
| |
Daniela Damian,
Allyson Hadwin,
Ban Al-Ani
|
|
Pages: 685-690 |
|
doi>10.1145/1134285.1134391 |
|
Full text: PDF
|
|
In the context of increasing pressure to adopt global approaches to software development, the importance of teaching skills for geographically distributed software development (GSD) becomes essential. This paper reports the experience of teaching a course to prepare graduates for software engineering (SE) in global customer-developer teams, taught as a three-university collaboration (Canada, Australia and Italy). The course emphasized the learning of requirements management activities in frequent synchronous computer-mediated client-developer relationships and created a GSD environment with significant time zone and language differences. We describe our instructional approach and assessment strategies within a GSD instructional design framework which integrates (a) required GSD skills and strategies for aligning classroom projects with contemporary and authentic GSD conditions, (b) strategies for assessment of learning of GSD skills and (c) examples from our GSD course.
|
|
|
POSTER SESSION: Education papers: posters |
|
|
|
|
Assessing undergraduate experience of continuous integration and test-driven development |
| |
Jon Bowyer,
Janet Hughes
|
|
Pages: 691-694 |
|
doi>10.1145/1134285.1134393 |
|
Full text: PDF
|
|
A number of agile practices are included in software engineering curricula, including test-driven development. Continuous integration is often not included, despite it becoming increasingly common in industry to code, test, and integrate at the same time. This paper describes a study whereby software engineering undergraduates were given a short intensive experience of test-driven development with continuous integration using an environment that imitated a typical industrial setting. Assessment was made of students' agile experience rather than of project deliverables, using a novel set of process measures that examined students' participation and performance in agile testing. Results showed good participation by student pairs, and clear understanding of agile processes and configuration management. Future work will investigate automation of the assessment of continuous integration and configuration management server data.
|
|
|
A comparison of communication technologies to support novice team programming |
| |
Davor Čubranić,
Margaret-Anne D. Storey,
Jody Ryall
|
|
Pages: 695-698 |
|
doi>10.1145/1134285.1134394 |
|
Full text: PDF
|
|
This paper describes an initial investigation of how different conditions for conducting a team programming exercise impact learning. We conducted a series of in-depth case studies on the use of various communication technologies and compared them with face-to-face case studies of team programming. We explored how these communication technologies can help improve students' learning. We summarize the findings from these studies and give guidance to instructors and to tool designers on how future tools can be improved to support collaborative learning in team programming.
|
|
|
Experience in teaching a software reengineering course |
| |
Mohammad El-Ramly
|
|
Pages: 699-702 |
|
doi>10.1145/1134285.1134395 |
|
Full text: PDF
|
|
Software engineering curricula emphasize developing new software systems. Little attention is given to how to change and modernize existing systems, i.e., the theory and practice of software maintenance and reengineering. This paper presents the author's experience in teaching software reengineering in a masters-level course at the University of Leicester, UK. It presents the course objectives, outline and the lessons learned. The main lessons are: first, there is a significant shortage of educational materials for teaching software reengineering. Second, selecting suitable materials (that balance theory and practice) and the right tool(s) for the level of students and depth of coverage required is a difficult task. Third, teaching reengineering using toy exercises and assignments does not convey the practical aspects of the subject, while teaching with real exercises and assignments, even small ones, is almost infeasible. Getting the balance right requires careful consideration and experimentation. Finally, students understand and appreciate this topic much more if they have previous industrial experience and when they are presented with real industrial case studies.
|
|
|
Teaching framework for software development methods |
| |
Orit Hazzan,
Yael Dubinsky
|
|
Pages: 703-706 |
|
doi>10.1145/1134285.1134396 |
|
Full text: PDF
|
|
In this paper we suggest a framework for teaching software development methods (SDMs). Specifically, based on our accumulated research and in-practice experience of teaching SDMs, we have formulated a set of principles that guides our teaching of SDMs in different settings and teaching experiences. The teaching framework consists of 14 principles whose actual implementation is varied and adjusted in different teaching environments. This paper outlines the principles and addresses their contribution to learners' understanding of the software development method being taught.
|
|
|
A software process for time-constrained course projects |
| |
Wilson P. Paula Filho
|
|
Pages: 707-710 |
|
doi>10.1145/1134285.1134397 |
|
Full text: PDF
|
|
Defined software engineering processes help to perform and guide software engineering course projects. However, several difficult issues are involved in designing a software process for this purpose. This design is even harder when it must suit time-constrained course projects. Here, we discuss several issues concerning such processes, focusing on an educational setting.
|
|
|
SESSION: Software engineering: achievements & challenges: ubiquitous and distributed systems |
|
|
|
|
Session details: Software engineering: achievements & challenges: ubiquitous and distributed systems |
| |
J. Kramer
|
|
doi>10.1145/3245452 |
|
Full text: PDF
|
|
|
|
|
Challenges in the age of ubiquitous computing: a case study of T-Engine, an open development platform for embedded systems |
| |
Ken Sakamura
|
|
Pages: 713-720 |
|
doi>10.1145/1134285.1134399 |
|
Full text: PDF
|
|
Ubiquitous Computing poses new challenges for the software engineering community. The T-Engine platform, consisting of a standard real-time kernel, T-Kernel, running on standard hardware with networking facilities, creates broad application opportunities based on the collaboration of cutting-edge microelectronics, software and embedded system technologies. However, to realize the true potential of such a system in a ubiquitous computing environment, we need to overcome software engineering issues, among many other hurdles we encounter. We describe such issues and the future challenges inherent in ubiquitous computing, based on our experience of using the Ubiquitous Communicator terminal, which is based on T-Engine.
|
|
|
A software architecture-based framework for highly distributed and data intensive scientific applications |
| |
Chris A. Mattmann,
Daniel J. Crichton,
Nenad Medvidovic,
Steve Hughes
|
|
Pages: 721-730 |
|
doi>10.1145/1134285.1134400 |
|
Full text: PDF
|
|
Modern scientific research is increasingly conducted by virtual communities of scientists distributed around the world. The data volumes created by these communities are extremely large, and growing rapidly. The management of the resulting highly distributed, virtual data systems is a complex task, characterized by a number of formidable technical challenges, many of which are of a software engineering nature. In this paper we describe our experience over the past seven years in constructing and deploying OODT, a software framework that supports large, distributed, virtual scientific communities. We outline the key software engineering challenges that we faced, and addressed, along the way. We argue that a major contributor to the success of OODT was its explicit focus on software architecture. We describe several large-scale, real-world deployments of OODT, and the manner in which OODT helped us to address the domain-specific challenges induced by each deployment.
|
|
|
SESSION: Software engineering: achievements & challenges: domain-specific challenges |
|
|
|
|
A research agenda for distributed software development |
| |
Bikram Sengupta,
Satish Chandra,
Vibha Sinha
|
|
Pages: 731-740 |
|
doi>10.1145/1134285.1134402 |
|
Full text: PDF
|
|
In recent years, a number of business reasons have caused software development to become increasingly distributed. Remote development of software offers several advantages, but it is also fraught with challenges. In this paper, we report on our study of distributed software development that helped shape a research agenda for this field. Our study has identified four areas where important research questions need to be addressed to make distributed development more effective. These areas are: collaborative software tools, knowledge acquisition and management, testing in a distributed set-up, and process and metrics issues. We present a brief summary of related research in each of these areas, and also outline open research issues.
|
|
|
Managing exceptions in the medical workflow systems |
| |
Minmin Han,
Thomas Thiery,
Xiping Song
|
|
Pages: 741-750 |
|
doi>10.1145/1134285.1134403 |
|
Full text: PDF
|
|
Over the years, medical informatics researchers have studied how to use software technologies to provide decision support for using evidence-based medical procedures. Software professionals have investigated how to support hospital administration, therapy and laboratory workflows. For many of these efforts, managing the exceptions in the workflows is a key issue since the medical workflows must cope with a wide variety of patient medical situations as well as those of the healthcare environments. This paper presents an analysis of past research in managing medical workflow exceptions, and proposes future research that would benefit the medical applications. The paper is focused on three topics: representing, handling and analyzing exceptions. Based upon our analysis, we believe that techniques for verifying exception management models and for handling dynamic exceptions should be useful and possibly essential for developing large scale, practical medical workflow systems.
|
|
|
Multi-platform user interface construction: a challenge for software engineering-in-the-small |
| |
Judith Bishop
|
|
Pages: 751-760 |
|
doi>10.1145/1134285.1134404 |
|
Full text: PDF
|
|
The popular view of software engineering focuses on managing teams of people to produce large systems. This paper addresses a different angle of software engineering, that of development for re-use and portability. We consider how an essential part of most software products - the user interface - can be successfully engineered so that it can be portable across multiple platforms and on multiple devices. Our research has identified the structure of the problem domain, and we have filled in some of the answers. We investigate promising solutions from the model-driven frameworks of the 1990s, to modern XML-based specification notations (Views, XUL, XIML, XAML), multi-platform toolkits (Qt and Gtk), and our new work, Mirrors, which pioneers reflective libraries. The methodology on which Views and Mirrors is based enables existing GUI libraries to be transported to new operating systems. The paper also identifies cross-cutting challenges related to education, standardization and the impact of mobile and tangible devices on the future design of UIs. This paper seeks to position user interface construction as an important challenge in software engineering, worthy of ongoing research.
|
|
|
SESSION: Software engineering: achievements & challenges: formal methods |
|
|
|
|
Formal methods in industry: achievements, problems, future |
| |
Jean-Raymond Abrial
|
|
Pages: 761-768 |
|
doi>10.1145/1134285.1134406 |
|
Full text: PDF
|
|
Two real projects using the B formal method are briefly presented. They show how some important parts of complex systems can be developed in such a way that the outcome is "correct by construction". A number of factors are then analyzed relating the pros, the cons, and the difficulties of applying this approach in industry.
|
|
|
DEMONSTRATION SESSION: Research demonstrations: verification and testing |
|
|
|
|
Session details: Research demonstrations: verification and testing |
| |
M. Dwyer,
K. Futatsugi
|
|
doi>10.1145/3245453 |
|
Full text: PDF
|
|
|
|
|
LTSA-WS: a tool for model-based verification of web service compositions and choreography |
| |
Howard Foster,
Sebastian Uchitel,
Jeff Magee,
Jeff Kramer
|
|
Pages: 771-774 |
|
doi>10.1145/1134285.1134408 |
|
Full text: PDF
|
|
In this paper we describe a tool for a model-based approach to verifying compositions of web service implementations. The tool supports verification of properties created from design specifications and implementation models to confirm expected results from the viewpoints of both the designer and implementer. Scenarios are modeled in UML, in the form of Message Sequence Charts (MSCs), and then compiled into the Finite State Process (FSP) process algebra to concisely model the required behavior. BPEL4WS implementations are mechanically translated to FSP to allow an equivalence trace verification process to be performed. By providing early design verification and validation, the implementation, testing and deployment of web service compositions can be eased through the understanding of the behavior exhibited by the composition. The approach is implemented as a plug-in for the Eclipse development environment providing cooperating tools for specification, formal modeling, verification and validation of the composition process.
|
|
|
HighSpec: a tool for building and checking OZTA models |
| |
J. S. Dong,
P. Hao,
X. Zhang,
S. C. Qin
|
|
Pages: 775-778 |
|
doi>10.1145/1134285.1134409 |
|
Full text: PDF
|
|
HighSpec is an interactive system for composing and checking OZTA specifications. The integrated high level specification language, OZTA, is a combination of Object-Z (OZ) and Timed Automata (TA). Building on the strength of Object-Z in specifying data structures and of Timed Automata in modelling dynamic and real-time behaviors, OZTA is well suited for presenting complete and coherent requirement models for complex real-time systems. HighSpec supports editing, type-checking as well as projecting OZTA models into TA models and Alloy models so that TA model checkers (UPPAAL) and the Alloy Analyzer can be utilized for verification. Most importantly, HighSpec supports a novel yet effective mechanism advocated by OZTA for structural TA design, i.e., using a set of composable timed patterns to capture high level timing requirements and process behaviors and generate the TA part of the model in a top-down way. HighSpec can also generate LaTeX documents as an alternative medium for the dissemination and reading of established OZTA models.
|
|
|
GridUnit: software testing on the grid |
| |
Alexandre Duarte,
Walfredo Cirne,
Francisco Brasileiro,
Patricia Machado
|
|
Pages: 779-782 |
|
doi>10.1145/1134285.1134410 |
|
Full text: PDF
|
|
Software testing is a fundamental part of system development. As software grows, its test suite becomes larger and its execution time may become a problem for software developers. This is especially the case for agile methodologies, which preach a short develop/test cycle. Moreover, due to the increasing complexity of systems, there is the need to test software in a variety of environments. In this paper, we introduce GridUnit, an extension of the widely adopted JUnit testing framework, able to automatically distribute the execution of software tests on a computational grid with minimum user intervention. Experiments conducted with this solution have shown a speed-up of almost 70x, reducing the duration of the test phase of a synthetic application from 24 hours to less than 30 minutes. The solution does not require any source-code modification, hides the grid complexity from the user and provides a cost-effectiveness improvement to the software testing experience.
|
|
|
DEMONSTRATION SESSION: Research demonstrations: development and transformation |
|
|
|
|
ASADAL: a tool system for co-development of software and test environment based on product line engineering |
| |
Kyungseok Kim,
Hyejung Kim,
Miyoung Ahn,
Minseok Seo,
Yeop Chang,
Kyo C. Kang
|
|
Pages: 783-786 |
|
doi>10.1145/1134285.1134412 |
|
Full text: PDF
|
|
Recently, product line software engineering (PLSE) has been gaining popularity. To employ PLSE methods, many organizations are looking for a tool system that supports PLSE methods so that core assets and target software can be developed and tested in an effective and systematic way. ASADAL (A System Analysis and Design Aid tooL) supports the entire lifecycle of the software development process based on a PLSE method called FORM (Feature-Oriented Reuse Method) [6]. It supports domain analysis, architecture and component design, code generation, and simulation-based verification and validation (V&V). Using the tool, users may co-develop target software and its test environment and verify software in a continuous and incremental way.
|
|
|
Developing and executing Java AWT applications on limited devices with TCPTE |
| |
Gerardo Canfora,
Giuseppe Di Santo,
Eugenio Zimeo
|
|
Pages: 787-790 |
|
doi>10.1145/1134285.1134413 |
|
Full text: PDF
|
|
The paper describes TCPTE, a framework that supports the development of thin-client applications for mobile devices. By using this framework, Java AWT applications can be executed on a server and their graphical interfaces can be displayed on a remote client. TCPTE combines in a single framework the advantages of thin-client computing with the richness of client-server graphical interfaces and the simplicity of development that characterizes desktop applications.
|
|
|
Interactive transformation of Java programs in Eclipse |
| |
Marat Boshernitsan,
Susan L. Graham
|
|
Pages: 791-794 |
|
doi>10.1145/1134285.1134414 |
|
Full text: PDF
|
|
Implementing large and sweeping changes to software source code can be tedious and error-prone. A conceptually simple change may require a significant code editing effort. Integrating scriptable source-to-source program transformations into development environments can assist developers with this task. We present a developer-oriented interactive source code transformation tool for Java that addresses this need.
|
|
|
DEMONSTRATION SESSION: Research demonstrations: data base and business process |
|
|
|
|
Preventing SQL injection attacks using AMNESIA |
| |
William G. J. Halfond,
Alessandro Orso
|
|
Pages: 795-798 |
|
doi>10.1145/1134285.1134416 |
|
Full text: PDF
|
|
AMNESIA is a tool that detects and prevents SQL injection attacks by combining static analysis and runtime monitoring. Empirical evaluation has shown that AMNESIA is both effective and efficient against SQL injection.
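To make the attack class and the model-based monitoring idea concrete, the sketch below reduces a query to its SQL keyword/operator skeleton and rejects runtime queries whose skeleton deviates from the statically expected one. This is a drastically simplified illustration under assumed names (QueryMonitor, skeleton, isLegitimate), not the AMNESIA implementation or API.

    import java.util.*;
    import java.util.regex.*;

    /** Illustrative sketch: a legitimate query's keyword/operator skeleton serves as
     *  a model; at runtime a dynamically built query must produce the same skeleton,
     *  so injected clauses are rejected before reaching the database. */
    public class QueryMonitor {
        // Skeleton derived (here, by hand) from the hard-coded query template.
        private static final List<String> EXPECTED_SKELETON =
            List.of("SELECT", "*", "FROM", "users", "WHERE", "name", "=", "?");

        static List<String> skeleton(String sql) {
            List<String> tokens = new ArrayList<>();
            Matcher m = Pattern.compile("'[^']*'|[A-Za-z_*]+|[=<>,()]").matcher(sql);
            while (m.find()) {
                String t = m.group();
                tokens.add(t.startsWith("'") ? "?" : t);   // collapse string literals to a placeholder
            }
            return tokens;
        }

        static boolean isLegitimate(String sql) {
            return skeleton(sql).equals(EXPECTED_SKELETON);
        }

        public static void main(String[] args) {
            String benign   = "SELECT * FROM users WHERE name = 'alice'";
            String injected = "SELECT * FROM users WHERE name = '' OR '1' = '1'";
            System.out.println(isLegitimate(benign));    // true
            System.out.println(isLegitimate(injected));  // false: extra OR/= tokens in the skeleton
        }
    }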
|
|
|
A framework for automatic generation of evolvable e-commerce workplaces using business processes |
| |
Ying Zou,
Qi Zhang
|
|
Pages: 799-802 |
|
doi>10.1145/1134285.1134417 |
|
Full text: PDF
|
|
Business processes encapsulate the knowledge of operations and services provided by organizations. Due to the changing nature of business processes, the design and implementation of e-commerce applications, such as workplace applications, often cannot be evolved consistently to support changing business requirements. E-commerce workplaces suffer from design and usability problems and may not conform to updated and constantly changing business processes. In this research demonstration, we present a framework that automatically generates business workplaces using workflow specifications. The generated workplaces can easily adapt to changing business needs and better reflect the interaction within complex business processes in organizations.
|
|
|
LISFS: a logical information system as a file system |
| |
Yoann Padioleau,
Benjamin Sigonneau,
Olivier Ridoux
|
|
Pages: 803-806 |
|
doi>10.1145/1134285.1134418 |
|
Full text: PDF
|
|
We present Logical Information Systems (LIS). A LIS can be viewed as a schema-less database whose objects are described by logical formulas. Objects are automatically organized according to their logical description, and logical formulas can be used for representing both queries and navigation links. The key feature of a LIS is that it answers a query with a set of navigation links expressed in the same logic as the query. As navigation links are dynamically computed from any query, and can be used as query increments, it follows that querying and navigation steps can be combined in any order. We then present LISFS, a file-system implementation of a LIS, where objects are files or parts of files. This has the benefit of making LIS features available right now to existing applications. This implementation can easily be extended and specialized through a plug-in mechanism. Finally, we present some applications in the field of personal databases (e.g., music, images, emails) and in the field of software engineering.
|
|
|
DEMONSTRATION SESSION: Informal tool demonstrations |
|
|
|
|
Relational programming with CrocoPat |
| |
Dirk Beyer
|
|
Pages: 807-810 |
|
doi>10.1145/1134285.1134420 |
|
Full text: PDF
|
|
Many structural analyses of software systems are naturally formalized as relational queries, for example, the detection of design patterns, patterns of problematic design, code clones, dead code, and differences between the as-built and the as-designed architecture. This paper describes CrocoPat, an application-independent tool for relational programming. Through its efficiency and its expressive language, CrocoPat enables practically important analyses of real-world software systems that are not possible with other graph analysis tools, in particular analyses that involve transitive closures and the detection of patterns in graphs. The language is easy to use, because it is based on the well-known first-order predicate logic. The tool is easy to integrate into other software systems, because it is a small command-line tool that uses a simple text format for input and output of relations.
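As an illustration of the kind of relational analysis mentioned above (transitive closure plus pattern detection), the sketch below computes the closure of a hypothetical depends-on relation and reports cyclic dependencies. It is written in plain Java as a stand-in, not in CrocoPat's own query language.

    import java.util.*;

    /** Illustrative sketch: compute the transitive closure of a "depends-on"
     *  relation and report cyclic dependencies between modules. */
    public class DependencyClosure {
        public static void main(String[] args) {
            // Hypothetical as-built dependency relation extracted from source code.
            Map<String, Set<String>> dependsOn = new HashMap<>();
            dependsOn.put("ui",      new HashSet<>(List.of("core")));
            dependsOn.put("core",    new HashSet<>(List.of("storage")));
            dependsOn.put("storage", new HashSet<>(List.of("core")));   // introduces a cycle

            // Warshall-style closure: propagate reachability until a fixed point.
            boolean changed = true;
            while (changed) {
                changed = false;
                for (Set<String> targets : dependsOn.values()) {
                    for (String t : new HashSet<>(targets)) {
                        for (String u : dependsOn.getOrDefault(t, Set.of())) {
                            if (targets.add(u)) changed = true;
                        }
                    }
                }
            }
            // "Pattern detection" as a relational query over the closure: x depends on x.
            for (Map.Entry<String, Set<String>> e : dependsOn.entrySet()) {
                if (e.getValue().contains(e.getKey())) {
                    System.out.println("cyclic dependency involving: " + e.getKey());
                }
            }
        }
    }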
|
|
|
Addressing crosscutting deployment and configuration concerns of distributed real-time and embedded systems via aspect-oriented & model-driven software development |
| |
Gan Deng,
Douglas C. Schmidt,
Aniruddha Gokhale
|
|
Pages: 811-814 |
|
doi>10.1145/1134285.1134421 |
|
Full text: PDF
|
|
Model-driven development (MDD) is gaining importance as an approach to resolving lifecycle challenges of large-scale distributed real-time and embedded (DRE) systems (e.g., avionics mission computing). DRE systems are characterized by their stringent requirements for quality of service (QoS), such as predictable end-to-end latencies, timeliness and scalability. Delivering the QoS needs of DRE systems entails the need to correctly configure, fine tune and provision the infrastructure used to host the DRE systems, which crosscuts different layers of middleware, operating systems and networks. Addressing these tangled deployment and configuration concerns of DRE systems requires integrating the principles of Aspect-Oriented Software Development (AOSD) with MDD. This demo showcases a set of software tools that resolve both the inherent and accidental complexities arising from the configuration and deployment crosscutting concerns of component middleware-based DRE systems.
|
|
|
FormulaBuilder: a tool for graph-based modelling and generation of formulae |
| |
Sven Jörges,
Tiziana Margaria,
Bernhard Steffen
|
|
Pages: 815-818 |
|
doi>10.1145/1134285.1134422 |
|
Full text: PDF
|
|
In this paper we present the FormulaBuilder, a flexible tool for graph-based modelling and generation of formulae. The FormulaBuilder allows easy and intuitive creation of formulae by using basic components called Formula Building Blocks (FBBs) and arranging them as graphs according to the syntactic structure of a formula. Such a graph can then be validated and used to generate the corresponding formula on the basis of a specific syntax which is chosen from a list of syntaxes supported by the FormulaBuilder. An important application of the FormulaBuilder is the formal specification of properties that describe the requirements of a system. Such property specifications are usually needed by verification tools like model checkers, that help software engineers to detect errors in a specified system. The FormulaBuilder allows users to model property specifications as formula graphs by using commonly-occurring specification patterns.
|
|
|
Tools for model-based security engineering |
| |
Jan Jürjens,
Jorge Fox
|
|
Pages: 819-822 |
|
doi>10.1145/1134285.1134423 |
|
Full text: PDF
|
|
We present tool-support for checking UML models and C code against security requirements. A framework supports implementing verification routines, based on XMI output of the diagrams from UML CASE tools, and on control flow generated from the C code. The tool also supports weaving security aspects into the code generated from the models. Advanced users can use this open-source framework to implement verification routines for the constraints of self-defined security requirements. We focus on a verification routine that automatically verifies crypto-based software for security requirements by using automated theorem provers.
|
|
|
LtRules: an automated software library usage rule extraction tool |
| |
Chang Liu,
En Ye,
Debra J. Richardson
|
|
Pages: 823-826 |
|
doi>10.1145/1134285.1134424 |
|
Full text: PDF
|
|
The need to manually specify temporal properties of software systems is a major barrier to wider adoption of software model checking, because the specification of software temporal properties is a difficult, time-consuming, and error-prone process. To address this problem, we propose to automatically extract software library usage rules, which are one type of temporal specifications. Our approach uses a model checker to check a set of software library usage rule candidates against known good programs using that library, and identifies valid rules based on model checking results. These valid rules can help programmers learn about common software library usage. They can also be used to check new programs using the same library. We have implemented our approach in an Eclipse plug-in named LtRules, which can extract software library usage rules from C programs using BLAST as the underlying model checker.
|
|
|
MuJava: a mutation system for Java |
| |
Yu-Seung Ma,
Jeff Offutt,
Yong-Rae Kwon
|
|
Pages: 827-830 |
|
doi>10.1145/1134285.1134425 |
|
Full text: PDF
|
|
Mutation testing is a valuable experimental research technique that has been used in many studies. It has been experimentally compared with other test criteria, and also used to support experimental comparisons of other test criteria, by using mutants as a method to create faults. In effect, mutation is often used as a "gold standard" for experimental evaluations of test methods. Although mutation testing is powerful, it is a complicated and computationally expensive testing method. Therefore, automated tool support is indispensable for conducting mutation testing. This demo presents a publicly available mutation system for Java that supports both method-level mutants and class-level mutants. MuJava can be freely downloaded and installed with relative ease under both Unix and Windows. MuJava is offered as a free service to the community and we hope that it will promote the use of mutation analysis for experimental research in software testing.
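For readers unfamiliar with mutation testing, the sketch below shows what a method-level mutant of a small, hypothetical Java class looks like; the operator labels in the comments are only indicative of common arithmetic/relational replacement operators and are not taken from MuJava's output.

    /** A hypothetical class under test and two method-level mutants of it (shown as
     *  commented-out variants). Mutation tools generate many such single-change
     *  versions; a test suite "kills" a mutant if some test distinguishes it from
     *  the original. */
    class Account {
        private int balance;

        int deposit(int amount) {
            balance = balance + amount;          // original statement
            // balance = balance - amount;       // AOR-style mutant: '+' replaced by '-'
            return balance;
        }

        boolean canWithdraw(int amount) {
            return amount <= balance;            // original statement
            // return amount <  balance;         // ROR-style mutant: '<=' replaced by '<'
        }
    }

    class MutantDemo {
        public static void main(String[] args) {
            Account a = new Account();
            // A test asserting deposit(10) == 10 kills the first mutant (it would return -10).
            System.out.println(a.deposit(10) == 10);
            // A test withdrawing the exact balance kills the second mutant
            // (original: true, mutant: false).
            System.out.println(a.canWithdraw(10));
        }
    }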
|
|
|
A tool for analyzing and detecting malicious mobile code |
| |
Akira Mori,
Tomonori Izumida,
Toshimi Sawada,
Tadashi Inoue
|
|
Pages: 831-834 |
|
doi>10.1145/1134285.1134426 |
|
Full text: PDF
|
|
We present a tool for analysis and detection of malicious mobile code such as computer viruses and internet worms based on the combined use of code simulation, static code analysis, and OS execution emulation. Unlike traditional anti-virus methods, the tool directly inspects the code and identifies commonly found malicious behaviors such as mass mailing, self duplication, and registry overwrite without relying on "pattern files" that contain "signatures" of previously captured samples. The prohibited behaviors are defined separately as security policies at the level of API library function calls in a state-transition like language. The tool also features data flow analysis based on static single assignment forms, which are useful in tracing various values stored in registers and memory locations. The current tool targets Win32 binary programs on Intel IA32 architectures and can detect most email viruses/worms that had spread in the wild in recent years.
|
|
|
Automatic extraction of abstract-object-state machines from unit-test executions |
| |
Tao Xie,
Evan Martin,
Hai Yuan
|
|
Pages: 835-838 |
|
doi>10.1145/1134285.1134427 |
|
Full text: PDF
|
|
An automatic test-generation tool can produce a large number of test inputs to exercise the class under test. However, without specifications, developers cannot practically inspect the execution of each automatically generated test input. To address the problem, we have developed an automatic test abstraction tool, called Abstra, to extract high level object-state-transition information from unit-test executions, without requiring a priori specifications. Given a class and a set of its generated test inputs, our tool extracts object state machines (OSM): a state in an OSM represents an object state of the class and a transition in an OSM represents method calls of the class. When an object state in an OSM is concrete (being represented by the values of all fields reachable from the object), the size of the OSM could be too large to be useful for inspection. To address this issue, we have developed techniques in the tool to abstract object states based on returns of observer methods, branch coverage of methods, and individual object fields, respectively. The tool provides useful object-state-transition information for programmers to inspect unit-test executions effectively. In particular, the tool helps facilitate correctness inspection, program understanding, fault isolation, and test characterization.
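The sketch below illustrates the observer-based abstraction described above on a hypothetical stack class: object states are collapsed to the return value of an observer method and method calls become transitions. It illustrates the idea only and is not the Abstra tool.

    import java.util.*;

    /** Illustrative sketch: abstract an object's state by the return value of an
     *  observer method and record state transitions while test-like code runs,
     *  yielding a small object state machine instead of one state per concrete content. */
    class ObservedStack {
        private final Deque<Integer> data = new ArrayDeque<>();
        void push(int x)  { data.push(x); }
        int pop()         { return data.pop(); }
        boolean isEmpty() { return data.isEmpty(); }   // observer method used for abstraction
    }

    class OsmSketch {
        static String abstractState(ObservedStack s) {
            return s.isEmpty() ? "EMPTY" : "NONEMPTY";
        }

        public static void main(String[] args) {
            ObservedStack s = new ObservedStack();
            Set<String> transitions = new LinkedHashSet<>();

            String before = abstractState(s);
            s.push(1);
            transitions.add(before + " --push--> " + abstractState(s));

            before = abstractState(s);
            s.pop();
            transitions.add(before + " --pop--> " + abstractState(s));

            // Prints [EMPTY --push--> NONEMPTY, NONEMPTY --pop--> EMPTY]
            System.out.println(transitions);
        }
    }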
|
|
|
3D visualization for concept location in source code |
| |
Xinrong Xie,
Denys Poshyvanyk,
Andrian Marcus
|
|
Pages: 839-842 |
|
doi>10.1145/1134285.1134428 |
|
Full text: PDF
|
|
The paper presents a set of tools that work in conjunction to support concept location in software. One of the tools, IRiSS (Information Retrieval based Software Search), is a search engine, designed and implemented to allow searching the source code of a software system. The other tool, sv3D (source viewer 3D), is a visualization front end, designed to represent software data with 3D renderings. The two tools are integrated with MS Visual Studio, with IRiSS providing the infrastructure for indexing the source code and querying, while sv3D helps the user in visually navigating the results of the queries and keeps track of the navigation path.
|
|
|
SESSION: Emerging results: architecture |
|
|
|
|
Session details: Emerging results: architecture |
| |
B. H. C. Cheng,
B. Shen
|
|
doi>10.1145/3245454 |
|
Full text: PDF
|
|
|
|
|
Towards a distributed software architecture evaluation process: a preliminary assessment |
| |
Muhammed Ali Babar,
Barbara Kitchenham,
Ian Gorton
|
|
Pages: 845-848 |
|
doi>10.1145/1134285.1134430 |
|
Full text: PDF
|
|
Scenario-based methods for evaluating software architecture require a large number of stakeholders to be collocated for evaluation sessions. Collocating stakeholders is often an expensive exercise. We have proposed a framework for a distributed evaluation process. We present the proposed framework and initial results of a controlled experiment that we ran to assess the effectiveness of the proposed idea.
|
|
|
Identifying "good" architectural design alternatives with multi-objective optimization strategies |
| |
Lars Grunske
|
|
Pages: 849-852 |
|
doi>10.1145/1134285.1134431 |
|
Full text: PDF
|
|
Architecture trade-off analysis methods are appropriate techniques to evaluate design decisions and design alternatives with respect to conflicting quality requirements. However, the identification of good design alternatives is a time consuming task, which is currently performed manually. To automate this task, this paper proposes to use evolutionary algorithms and multi-objective optimization strategies based on architecture refactorings to identify a sufficient set of design alternatives. This approach will reduce development costs and improve the quality of the final system, because an automated and systematic search will identify more and better design alternatives.
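A core ingredient of such multi-objective searches is the Pareto-dominance test used to keep only non-dominated design alternatives. The sketch below shows a minimal version over two hypothetical quality objectives (both minimized); it is an illustration of the concept, not the paper's algorithm.

    import java.util.*;

    /** Minimal sketch: keep only Pareto-optimal design alternatives, i.e. those not
     *  dominated on every quality objective by some other alternative. */
    class ParetoFilter {
        record Alternative(String name, double cost, double responseTime) {}

        static boolean dominates(Alternative a, Alternative b) {
            return a.cost() <= b.cost() && a.responseTime() <= b.responseTime()
                && (a.cost() < b.cost() || a.responseTime() < b.responseTime());
        }

        public static void main(String[] args) {
            List<Alternative> candidates = List.of(
                new Alternative("cache-heavy", 10.0, 30.0),
                new Alternative("replicated",  20.0, 15.0),
                new Alternative("baseline",    25.0, 35.0));   // dominated by both others

            for (Alternative a : candidates) {
                boolean dominated = candidates.stream().anyMatch(b -> b != a && dominates(b, a));
                if (!dominated) System.out.println("Pareto-optimal: " + a.name());
            }
        }
    }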
|
|
|
Estimating software component reliability by leveraging architectural models |
| |
Roshanak Roshandel,
Somo Banerjee,
Leslie Cheung,
Nenad Medvidovic,
Leana Golubchik
|
|
Pages: 853-856 |
|
doi>10.1145/1134285.1134432 |
|
Full text: PDF
|
|
Software reliability techniques are aimed at reducing or eliminating failures in software systems. Reliability in software systems is typically measured during or after system implementation. However, software engineering methodology lays stress on doing the "correct things" early on in the software development lifecycle in order to curb development and maintenance costs. In this paper, we propose a framework for reliability estimation of software components at the level of software architecture.
|
|
|
An architectural style for high-performance asymmetrical parallel computations |
| |
David Woollard,
Nenad Medvidovic
|
|
Pages: 857-860 |
|
doi>10.1145/1134285.1134433 |
|
Full text: PDF
|
|
Researchers with deep knowledge of scientific domains are becoming more interested in developing highly-adaptive and irregular (asymmetrical) parallel computations, leading to development challenges for both delivery of data for computation and mapping of processes to physical resources. Using software engineering principles, we have developed a new communications protocol and architectural style for asymmetrical parallel computations called ADaPT. Utilizing the support of architecturally-aware middleware, we show that ADaPT provides a more efficient solution in terms of message passing and load balancing than asymmetrical parallel computations using collective calls in the Message-Passing Interface (MPI) or more advanced frameworks implementing explicit load-balancing policies. Additionally, developers using ADaPT gain significant windfall from good practices in software engineering, including implementation-level support of architectural artifacts and separation of computational loci from communication protocols.
|
|
|
SESSION: Emerging results: formal methods and analysis |
|
|
|
|
Dynamically discovering likely interface invariants |
| |
Christoph Csallner,
Yannis Smaragdakis
|
|
Pages: 861-864 |
|
doi>10.1145/1134285.1134435 |
|
Full text: PDF
|
|
Dynamic invariant detection is an approach that has received considerable attention in the recent research literature. A natural question arises in languages that separate the interface of a code module from its implementation: does an inferred invariant describe the interface or the implementation? Furthermore, if an implementation is allowed to refine another, as, for instance, in object-oriented method overriding, what is the relation between the inferred invariants of the overriding and the overridden method? The problem is of great practical interest. Invariants derived by real tools, like Daikon, often suffer from internal inconsistencies when overriding is taken into account, becoming unsuitable for some automated uses. We discuss the interactions between overriding and inferred invariants, and describe the implementation of an invariant inference tool that produces consistent invariants for interfaces and overridden methods.
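The following hypothetical Java fragment illustrates the interface-versus-implementation distinction discussed above; the comments describe what a dynamic detector might plausibly infer and are not output of Daikon or of the authors' tool.

    /** Illustrative example of the overriding problem: an invariant inferred from one
     *  implementation can contradict the weaker contract that all overriding methods
     *  of the interface are expected to satisfy. */
    abstract class Shape {
        /** Interface-level contract: the result is non-negative. */
        abstract double area();
    }

    class Square extends Shape {
        final double side;
        Square(double side) { this.side = side; }

        @Override
        double area() { return side * side; }
        // Observing only Square instances with side >= 1, a dynamic detector might infer
        // the stronger implementation-level invariant "area() == side * side && area() >= 1",
        // which does not hold for Shape in general, nor for other subclasses.
    }

    class Circle extends Shape {
        final double radius;
        Circle(double radius) { this.radius = radius; }

        @Override
        double area() { return Math.PI * radius * radius; }
        // The only invariant that can consistently be attached to Shape.area() is the
        // weaker interface-level one (result >= 0), which both overriding methods preserve.
    }

    class InvariantDemo {
        public static void main(String[] args) {
            Shape[] shapes = { new Square(2), new Circle(1) };
            for (Shape s : shapes) System.out.println(s.area() >= 0);  // interface-level invariant holds for both
        }
    }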
|
|
|
Easy language extension with Meta-AspectJ |
| |
Shan Shan Huang,
Yannis Smaragdakis
|
|
Pages: 865-868 |
|
doi>10.1145/1134285.1134436 |
|
Full text: PDF
|
|
Domain-specific languages hold the potential of automating the software development process. Nevertheless, the adoption of a domain-specific language is hindered by the difficulty of transitioning to different language syntax and employing a separate translator in the software build process. We present a methodology that simplifies the development and deployment of small language extensions, in the context of Java. The main language design principle is that of language extension through unobtrusive annotations. The main language implementation idea is to express the language as a generator of customized AspectJ aspects, using our Meta-AspectJ tool. The advantages of the approach are twofold. First, the tool integrates into an existing software application much as a regular API or library, instead of as a language extension. This means that the programmer can remove the language extension at any point and choose to implement the required functionality by hand without needing to rewrite the client code. Second, a mature language implementation is easy to achieve with little effort since AspectJ takes care of the low-level issues of interfacing with the base Java language.
|
|
|
Evaluation of mutation testing for object-oriented programs |
| |
Yu-Seung Ma,
Mary Jean Harrold,
Yong-Rae Kwon
|
|
Pages: 869-872 |
|
doi>10.1145/1134285.1134437 |
|
Full text: PDF
|
|
The effectiveness of mutation testing depends heavily on the types of faults that the mutation operators are designed to represent. Thus, the quality of the mutation operators is key to mutation testing. Although mutation operators for object-oriented languages have previously been presented, little research has been done to show the usefulness of the class mutation operators. To assess the usefulness of class mutation operators, we conducted two empirical studies. In the first study, we examine the number and kinds of mutants that are generated for object-oriented programs. In the second study, we investigate the way in which class mutation operators model faults that are not detected by traditional mutation testing. We conducted our studies using a well-known object-oriented system, BCEL.
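To contrast class-level with traditional method-level mutants, the hypothetical sketch below deletes an overriding method, in the spirit of an overriding-method-deletion class operator, so that calls silently fall back to the inherited behaviour; it is illustrative only and not taken from the studies described above.

    /** Hypothetical classes illustrating a class-level mutant: deleting an overriding
     *  method makes calls resolve to the inherited version, a fault that statement-level
     *  mutants do not model. */
    class Document {
        String render() { return "plain text"; }
    }

    class HtmlDocument extends Document {
        @Override
        String render() { return "<p>html</p>"; }   // class-level mutant: delete this method
    }

    class ClassMutantDemo {
        public static void main(String[] args) {
            Document d = new HtmlDocument();
            // Original program prints "<p>html</p>"; the mutant prints "plain text".
            // A test suite that never checks polymorphic dispatch leaves this mutant alive.
            System.out.println(d.render());
        }
    }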
|
|
|
Integrating static analysis and general-purpose theorem proving for termination analysis |
| |
Panagiotis Manolios,
Daron Vroon
|
|
Pages: 873-876 |
|
doi>10.1145/1134285.1134438 |
|
Full text: PDF
|
|
We present emerging results from our work on termination analysis of software systems. We have designed a static analysis algorithm which attains increased precision and flexibility by issuing queries to a theorem prover. We have implemented our algorithm and initial results show that we obtain a significant improvement over the current state-of-the-art in termination analyses. We also outline how our approach, by integrating theorem proving queries into static analyses, can significantly impact the design of general-purpose static analyses.
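The hypothetical loop below illustrates why such prover queries help: no single variable decreases on every path, but the measure (y - x) is non-negative and strictly decreases each iteration, an arithmetic fact that is natural to discharge with a theorem prover. The example is not taken from the paper.

    /** Illustrative termination example: a purely structural analysis sees x grow on
     *  one branch and y shrink on the other; a prover query can establish that the
     *  ranking measure (y - x) stays non-negative and strictly decreases. */
    class TerminationExample {
        static int converge(int x, int y) {
            // Assumed precondition: x <= y.
            while (x != y) {
                if ((x + y) % 2 == 0) {
                    x = x + 1;     // x moves up ...
                } else {
                    y = y - 1;     // ... or y moves down; either way (y - x) decreases by 1
                }
            }
            return x;
        }

        public static void main(String[] args) {
            System.out.println(converge(3, 10));   // terminates; prints the meeting point (6)
        }
    }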
|
|
|
The problem of knowledge decoupling in software development projects |
| |
Yutaka Yamauchi,
Jack Whalen,
Nozomi Ikeya,
Erik Vinkhuyzen
|
|
Pages: 877-880 |
|
doi>10.1145/1134285.1134439 |
|
Full text: PDF
|
|
In our ethnographic investigation of software integration projects a recurrent pattern emerges. The detailed understanding leaders have of the design and development decreases over time as they become busier and busier attending meetings, creating documents, and resolving issues and thus cannot spend much time on design or development work. As a result, their leadership becomes increasingly decoupled from the work of the project. We discuss various dimensions of this problem.
|
|
|
SESSION: Emerging results: metrics |
|
|
|
|
Using the balanced scorecard process to compute the value of software applications |
| |
Steven B. Dolins
|
|
Pages: 881-884 |
|
doi>10.1145/1134285.1134441 |
|
Full text: PDF
|
|
This paper describes a method that will provide practical help for IT managers and other enterprise executives who want to determine the value of their software projects. A set of measures is described that can help the IT manager determine the value of IT projects, either at the time a project is initiated or at the time the project is operational. This paper identifies relevant performance measures used in the balanced scorecard process and then maps them to IT benefits, which are categorized along strategic, informational, and transactional dimensions. The appropriate measure can be identified and its value quantified based upon the expected IT benefit.
|
|
|
Designing an economic-driven evaluation framework for process-oriented software technologies |
| |
Bela Mutschler,
Johannes Bumiller,
Manfred Reichert
|
|
Pages: 885-888 |
|
doi>10.1145/1134285.1134442 |
|
Full text: PDF
|
|
During the last decade there has been a dramatic increase in the number of paradigms, standards and tools that can be used to realize process-oriented information systems. A major problem neglected in software engineering research so far has been the systematic determination of costs, benefits, and risks that are related to the use of these process-oriented software engineering methods and technologies. This task is quite difficult as the added value is influenced by many drivers. This paper sketches an economic-driven evaluation methodology to analyze costs, benefits, and risks of process-oriented software technologies and corresponding projects. We introduce an evaluation meta model and sketch a formalism to describe economic-driven evaluation scenarios.
|
|
|
Portfolio management of software development projects using COCOMO II |
| |
Wiboon Jiamthubthugsin,
Daricha Sutivong
|
|
Pages: 889-892 |
|
doi>10.1145/1134285.1134443 |
|
Full text: PDF
|
|
Software development projects are subject to external and internal risks that cause delays, budget overrun and poor quality. Portfolio management can be used to alleviate this problem, as it pools resources together and allows for resource sharing among projects. Consequently, projects are more likely to succeed. However, portfolio management that uses only deadlines and the number of employees to improve the probability of success remains limited. This paper proposes integrating portfolio management with COCOMO II, which offers more management flexibility. Managers can adjust other resources, such as tools, staff capability, communication support, etc. to improve the project's success. The proposed method can also be applied when historical data and expert judgment are limited. In addition, this paper introduces time constraints into portfolio management without assuming an unrealistic linearity between effort and time.
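As background, the sketch below shows the general shape of a COCOMO II effort and schedule estimate that such a portfolio approach could build on. The calibration constants are the commonly cited COCOMO II.2000 nominal values, and the project ratings are hypothetical; this is not the paper's portfolio model, only the underlying cost equations.

```python
# Commonly cited COCOMO II.2000 nominal calibration constants (assumed here).
A, B = 2.94, 0.91      # effort equation
C, D = 3.67, 0.28      # schedule equation

def cocomo2_effort(size_ksloc, scale_factors, effort_multipliers):
    """Post-architecture effort in person-months:
    PM = A * Size^(B + 0.01 * sum(SF)) * product(EM)."""
    exponent = B + 0.01 * sum(scale_factors)
    em_product = 1.0
    for em in effort_multipliers:
        em_product *= em
    return A * size_ksloc ** exponent * em_product

def cocomo2_schedule(effort_pm, scale_factors):
    """Nominal development time in months: TDEV = C * PM^(D + 0.2 * 0.01 * sum(SF))."""
    exponent = D + 0.2 * 0.01 * sum(scale_factors)
    return C * effort_pm ** exponent

# Hypothetical project: 50 KSLOC with made-up scale-factor and multiplier ratings.
sf = [3.7, 3.0, 4.2, 3.3, 4.7]   # five scale factors (hypothetical ratings)
em = [1.0, 1.10, 0.88]           # a few effort multipliers (hypothetical)
effort = cocomo2_effort(50, sf, em)
print(round(effort, 1), "person-months,", round(cocomo2_schedule(effort, sf), 1), "months")
```

A portfolio model could evaluate many such parameter settings per project and trade resources across projects.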
|
|
|
SESSION: Emerging results: program analysis |
|
|
|
|
Effective identification of source code authors using byte-level information |
| |
Georgia Frantzeskou,
Efstathios Stamatatos,
Stefanos Gritzalis,
Sokratis Katsikas
|
|
Pages: 893-896 |
|
doi>10.1145/1134285.1134445 |
|
Full text: PDF
|
|
Source code author identification deals with the task of identifying the most likely author of a computer program, given a set of predefined author candidates. This is usually based on the analysis of other program samples of undisputed authorship by the same programmer. There are several cases where the application of such a method could be of a major benefit, such as authorship disputes, proof of authorship in court, tracing the source of code left in the system after a cyber attack, etc. We present a new approach, called the SCAP (Source Code Author Profiles) approach, based on byte-level n-gram profiles in order to represent a source code author's style. Experiments on data sets of different programming languages (Java or C++) and varying difficulty (6 to 30 candidate authors) demonstrate the effectiveness of the proposed approach. A comparison with a previous source code authorship identification study based on more complicated information shows that the SCAP approach is language independent and that n-gram author profiles are better able to capture the idiosyncrasies of the source code authors. Moreover, the SCAP approach is able to deal surprisingly well with cases where only a limited number of very short programs per programmer is available for training. It is also demonstrated that the effectiveness of the proposed model is not affected by the absence of comments in the source code, a condition usually met in cyber-crime cases.
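To make the idea of byte-level n-gram profiles concrete, here is a small Python sketch. The profile size, the choice of n, and the overlap-based similarity are simplifying assumptions for illustration, not necessarily the exact parameters or similarity measure of the SCAP approach; the file names in the usage comment are hypothetical.

```python
from collections import Counter

def byte_ngram_profile(source: bytes, n: int = 3, top: int = 1500) -> set:
    """A simplified author profile: the `top` most frequent byte-level n-grams."""
    grams = Counter(source[i:i + n] for i in range(len(source) - n + 1))
    return {gram for gram, _ in grams.most_common(top)}

def profile_overlap(profile_a: set, profile_b: set) -> float:
    """Crude similarity: fraction of shared n-grams (a stand-in, not the SCAP measure)."""
    if not profile_a or not profile_b:
        return 0.0
    return len(profile_a & profile_b) / min(len(profile_a), len(profile_b))

def attribute(unknown_profile: set, known_profiles: dict) -> str:
    """Assign the disputed code to the candidate author with the most similar profile."""
    return max(known_profiles,
               key=lambda author: profile_overlap(known_profiles[author], unknown_profile))

# Hypothetical usage with made-up file names:
# known = {"alice": byte_ngram_profile(open("alice_samples.java", "rb").read()),
#          "bob":   byte_ngram_profile(open("bob_samples.java", "rb").read())}
# print(attribute(byte_ngram_profile(open("disputed.java", "rb").read()), known))
```

Because the profiles are built from raw bytes, the technique needs no parser and works across programming languages.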
|
|
|
An empirical study on decision making in off-the-shelf component-based development |
| |
Jingyue Li,
Reidar Conradi,
Odd Petter N. Slyngstad,
Christian Bunse,
Marco Torchiano,
Maurizio Morisio
|
|
Pages: 897-900 |
|
doi>10.1145/1134285.1134446 |
|
Full text: PDF
|
|
Component-based software development (CBSD) is becoming more and more important since it promotes reuse to higher levels of abstraction. As a consequence, many components are available, either as open-source software (OSS) or as commercial off-the-shelf (COTS) components. However, it is still unclear how the decision for acquiring OSS or COTS components is made in practice. This paper describes an empirical study on why project decision-makers selected COTS instead of OSS components, or vice versa. The study was performed as an international survey in Norway, Italy and Germany. It focused on decision making on using off-the-shelf (OTS) components. We have gathered answers from 83 projects using only COTS components and 44 projects using only OSS components. Results of this study show significant differences and commonalities between integrating OSS and COTS components. Moreover, the study illustrates several research questions that warrant future research.
|
|
|
Understanding software application interfaces via string analysis |
| |
Evan Martin,
Tao Xie
|
|
Pages: 901-904 |
|
doi>10.1145/1134285.1134447 |
|
Full text: PDF
|
|
In software systems, different software applications often interact with each other through specific interfaces by exchanging data in string format. For example, web services interact with each other through XML strings. Database applications interact with a database through strings of SQL statements. Sometimes these interfaces between different software applications are complex and distributed. For example, a table in a database can be accessed by multiple methods in a database application and a single method can access multiple tables. In this paper, we propose an approach to understanding software application interfaces through string analysis. The approach first performs a static analysis of source code to identify interaction points (in the form of interface-method-call sites). We then leverage existing string analysis tools to collect all possible string data that can be sent through these different interaction points. We then group similar string data together; for example, we group together all collected SQL statements that access the same table. Finally, we associate various parts of the aggregated data with interaction points in order to show the connections between entities from interacting applications. Our preliminary results show that the approach can help us understand the characteristics of interactions between database applications and databases. We also identify some challenges in this approach for our future work.
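The grouping step can be pictured with a small Python sketch: given SQL strings that a string analysis might associate with interface-method-call sites, group the call sites by the table they access. The call-site labels, SQL strings, and the regular expression below are illustrative assumptions, not output of the authors' tools.

```python
import re
from collections import defaultdict

# SQL strings that a string analysis might report for each interaction point
# (call-site labels and statements below are hypothetical examples).
collected_sql = {
    "OrderDao.load:42":  "SELECT id, total FROM orders WHERE id = ?",
    "OrderDao.save:77":  "INSERT INTO orders (id, total) VALUES (?, ?)",
    "ReportJob.run:120": "SELECT COUNT(*) FROM customers",
}

TABLE_PATTERN = re.compile(r"\b(?:FROM|INTO|UPDATE|JOIN)\s+(\w+)", re.IGNORECASE)

def group_by_table(sql_by_site):
    """Group interaction points (call sites) by the database tables their SQL touches."""
    groups = defaultdict(set)
    for site, sql in sql_by_site.items():
        for table in TABLE_PATTERN.findall(sql):
            groups[table.lower()].add(site)
    return dict(groups)

print(group_by_table(collected_sql))
# {'orders': {'OrderDao.load:42', 'OrderDao.save:77'}, 'customers': {'ReportJob.run:120'}}
```

The resulting groups expose which parts of an application depend on which tables, which is the kind of interface view the paper aims at.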
|
|
|
Using an information retrieval system to retrieve source code samples |
| |
Renuka Sindhgatta
|
|
Pages: 905-908 |
|
doi>10.1145/1134285.1134448 |
|
Full text: PDF
|
|
Software developers often face steep learning curves in using a new framework, library, or new versions of frameworks for developing their piece of software. In large organizations, developers learn and explore the use of frameworks, rarely realizing that several peers may already have explored the same. A tool that helps locate code samples demonstrating the use of frameworks or libraries would provide the benefits of reuse, improved code quality and faster development. This paper describes an approach for locating common samples of source code from a repository by providing extensions to an information retrieval system. The approach improves the existing approaches in two ways. First, it provides the scalability of an information retrieval system, supporting search over thousands of source code files of an organization. Second, it provides more specific search on source code by preprocessing source code files and understanding elements of the code as opposed to considering code as plain text.
|
|
|
Ensemble of missing data techniques to improve software prediction accuracy |
| |
Bhekisipho Twala,
Michelle Cartwright,
Martin Shepperd
|
|
Pages: 909-912 |
|
doi>10.1145/1134285.1134449 |
|
Full text: PDF
|
|
Software engineers are commonly faced with the problem of incomplete data. Incomplete data can reduce system performance in terms of predictive accuracy. Unfortunately, little research has been conducted to systematically explore the impact of missing values, especially from the missing data handling point of view. This has limited the significance of the various missing data techniques (MDTs) that are available. This paper describes a systematic comparison of seven MDTs using eight industrial datasets. Our findings from an empirical evaluation suggest listwise deletion as the least effective technique for handling incomplete data, while multiple imputation achieves the highest accuracy rates. We further propose and show how a combination of MDTs obtained by randomizing a decision tree building algorithm leads to a significant improvement in prediction performance for missing-value levels of up to 50%.
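For illustration, the Python sketch below shows two of the simpler MDTs mentioned in this line of work, listwise deletion and single mean imputation, on a tiny hypothetical dataset; the paper's ensemble of MDTs built on a randomized decision-tree learner is more involved and is not reproduced here.

```python
# A tiny, hypothetical software-engineering dataset with missing values (None).
rows = [
    {"size_kloc": 12.0, "team": 4,    "effort_pm": 30.0},
    {"size_kloc": None, "team": 6,    "effort_pm": 55.0},
    {"size_kloc": 8.5,  "team": None, "effort_pm": 21.0},
]

def listwise_deletion(data):
    """Drop every row containing at least one missing value."""
    return [r for r in data if all(v is not None for v in r.values())]

def mean_imputation(data):
    """Replace each missing value with the mean of the observed values in its column."""
    columns = data[0].keys()
    means = {}
    for c in columns:
        observed = [r[c] for r in data if r[c] is not None]
        means[c] = sum(observed) / len(observed)
    return [{c: (r[c] if r[c] is not None else means[c]) for c in columns} for r in data]

print(len(listwise_deletion(rows)))   # only 1 complete row survives deletion
print(mean_imputation(rows)[1])       # missing size_kloc replaced by the column mean (10.25)
```

The contrast already hints at the paper's finding: deletion throws information away, while imputation keeps every row available to the predictor.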
|
|
|
A methodology and tool for performance analysis of distributed server systems |
| |
Rukma Prabhu Verlekar,
Varsha Apte
|
|
Pages: 913-916 |
|
doi>10.1145/1134285.1134450 |
|
Full text: PDF
|
|
We present a methodology and tool for performance analysis of distributed server systems, which allows high-level specification of the system, and generates and solves the underlying queueing network model. Our approach differs from existing ones in that the specification captures the natural manner in which application servers are deployed on machines and machines are deployed on networks. The model does not impose any strict tiers on the server system. Multiple use case scenarios can be specified, and the tool computes measures such as end-to-end response times for each scenario while taking into account queueing delays at hardware devices, software threads, and the network. The development of the tool is ongoing and will, in the future, include detailed network protocol models as well as more flexible distributed system behavior.
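As a rough intuition for the kind of model such a tool solves, the following Python sketch treats each hardware or software resource as an independent M/M/1 queue and sums per-visit response times along one use-case scenario. This is a deliberately simplified open-network approximation with hypothetical service demands and visit counts, not the tool's actual model.

```python
def mm1_response_time(arrival_rate, service_time):
    """Mean response time of an M/M/1 queue; requires utilisation < 1."""
    utilisation = arrival_rate * service_time
    if utilisation >= 1.0:
        raise ValueError("station is saturated")
    return service_time / (1.0 - utilisation)

# Hypothetical per-request service demands (seconds) at each resource.
stations = {"web_cpu": 0.010, "app_cpu": 0.025, "db_disk": 0.040}

# Hypothetical visit counts for one use-case scenario and its arrival rate (requests/s).
scenario_visits = {"web_cpu": 1, "app_cpu": 2, "db_disk": 1}
arrival_rate = 10.0

end_to_end = sum(visits * mm1_response_time(arrival_rate * visits, stations[name])
                 for name, visits in scenario_visits.items())
print(round(end_to_end, 4), "seconds end-to-end for the scenario")
```

A full tool would derive the stations, visit counts and thread contention automatically from the high-level deployment specification instead of hard-coding them.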
|
|
|
SESSION: Emerging results: requirements engineering |
|
|
|
|
The role of asynchronous discussions in increasing the effectiveness of remote synchronous requirements negotiations |
| |
Daniela Damian,
Filippo Lanubile,
Teresa Mallardo
|
|
Pages: 917-920 |
|
doi>10.1145/1134285.1134452 |
|
Full text: PDF
|
|
An important and yet very difficult process in software development, requirements engineering is plagued with additional challenges in the emergent dynamics of geographically distributed software teams. Our hypothesis is that a mix of lean and rich communication media is needed to increase the effectiveness of meetings in reaching mutual agreement when stakeholders are geographically dispersed. We studied tool-supported remote inspections in six educational global project teams in a multicultural software development environment. In this paper we present the preliminary results from comparing the effectiveness of requirements negotiations preceded by asynchronous discussions to those negotiations with no prior asynchronous discussions.
|
|
|
"How do I know what I have to do?": the role of the inquiry culture in requirements communication for distributed software development projects |
| |
Vesna Mikulovic,
Michael Heiss
|
|
Pages: 921-925 |
|
doi>10.1145/1134285.1134453 |
|
Full text: PDF
|
|
As software specifications for complex systems are practically never 100% complete and consistent, the recipient of the specification needs domain knowledge in order to decide which parts of the system are specified clearly and which parts are specified ambiguously and thus need inquiry to achieve a more detailed specification. In this paper we classify 16 different situations (states) of requirements communication and analyze, based on a state diagram, how a mature inquiry culture can help to initiate transitions from undesirable states into more desirable states. In a case study the inquiry practices of a very large software development organization are shown. Knowledge networks within the organization play an important role in building up a mature inquiry culture.
|
|
|
Analysis of multi-agent systems based on KAOS modeling |
| |
Hiroyuki Nakagawa,
Takuya Karube,
Shinichi Honiden
|
|
Pages: 926-929 |
|
doi>10.1145/1134285.1134454 |
|
Full text: PDF
|
|
The purpose of this study is to reduce the gap between the requirements analysis phase and the analysis phase of developing multi-agent systems. We utilize KAOS, one of the goal-oriented analysis methodologies, as a requirements analysis method, and propose a model translation into an analysis model for simple and effective development of multi-agent systems.
|
|
|
Understanding requirements for computer-aided healthcare workflows: experiences and challenges |
| |
Xiping Song,
Beatrice Hwong,
Gilberto Matos,
Arnold Rudorfer,
Christopher Nelson,
Minmin Han,
Andrei Girenkov
|
|
Pages: 930-934 |
|
doi>10.1145/1134285.1134455 |
|
Full text: PDF
|
|
Medical informatics and software engineering researchers have studied how to use software technologies to define, analyze, automate, and provide decision support for healthcare workflows. We, as the requirements engineering and prototyping group of the Siemens R&D center, have been involved in the research and development of healthcare workflows. During interactions with the workflow users and developers, we found significant confusion about the terminologies and the purposes of supporting different healthcare workflows. Thus, we are motivated to classify computer-aided healthcare workflows, including their approaches, goals, and major characteristics. This paper also discusses workflow application issues and software challenges based upon our experiences and research.
|
|
|
SESSION: Doctoral symposium: presentations |
|
|
|
|
Session details: Doctoral symposium: presentations |
| |
A. Finkelstein,
B. Nuseibeh
|
|
doi>10.1145/3245455 |
|
Full text: PDF
|
|
|
|
|
Automating bug report assignment |
| |
John Anvik
|
|
Pages: 937-940 |
|
doi>10.1145/1134285.1134457 |
|
Full text: PDF
|
|
Open-source development projects typically support an open bug repository to which both developers and users can report bugs. A report that appears in this repository must be triaged to determine whether it requires attention and, if it does, which developer will be assigned the responsibility of resolving it. Large open-source developments are burdened by the rate at which new bug reports appear in the bug repository. The thesis of this work is that the task of triage can be eased by using a semi-automated approach to assign bug reports to developers. The approach consists of constructing a recommender for bug assignments; examined are both a range of algorithms that can be used and the various kinds of information provided to the algorithms. The proposed work seeks to determine through human experimentation a sufficient level of precision for the recommendations, and to analytically determine the trade-offs of the various algorithmic and information choices.
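A toy Python sketch of the recommender idea is shown below: learn, from previously resolved reports, which words each developer tends to handle, and rank developers for a new report by word overlap. The training data is hypothetical and the scoring is a simple stand-in for the machine-learning algorithms examined in the thesis.

```python
from collections import Counter, defaultdict

# Hypothetical history: (report text, developer who resolved it).
history = [
    ("crash when rendering svg icons", "erin"),
    ("svg export produces corrupt file", "erin"),
    ("null pointer in bookmark sync", "dmitri"),
    ("bookmark toolbar forgets ordering", "dmitri"),
]

def train(reports):
    """Per-developer word counts; a stand-in for the learned assignment model."""
    model = defaultdict(Counter)
    for text, developer in reports:
        model[developer].update(text.lower().split())
    return model

def recommend(model, new_report, k=2):
    """Rank developers by how often they have resolved reports containing these words."""
    words = new_report.lower().split()
    scores = {dev: sum(counts[w] for w in words) for dev, counts in model.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend(train(history), "corrupt svg icons in toolbar"))   # ['erin', 'dmitri']
```

Presenting a short ranked list, rather than a single forced assignment, keeps the triager in the loop, which matches the semi-automated framing of the work.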
|
|
|
P2P file sharing analysis for a better performance |
| |
Martha-Rocio Ceballos,
Juan-Luis Gorricho
|
|
Pages: 941-944 |
|
doi>10.1145/1134285.1134458 |
|
Full text: PDF
|
|
The so-called second-generation P2P file-sharing applications undoubtedly perform better than the first implementations. The most remarkable difference is the division of files into smaller pieces, where a peer that receives any piece automatically becomes a new source for other peers. But a new question arises: how should all the pieces provided by a seed peer be distributed to minimize the global, and presumably the individual, download times? In this paper we summarize part of the work we have carried out so far to answer this general question. In particular, we analyze how close the present second-generation P2P file-sharing applications come to an ideal solution with the theoretically best performance, that is, one in which all peers are interconnected with each other and all peers behave altruistically, always uploading their contents at any chance. Successive modifications of the ideal solution will lead us to more realistic scenarios. We estimate the performance in each case and finally present the current studies we are carrying out to improve the overall capacity.
|
|
|
Resolving component deployment & configuration challenges for enterprise DRE systems via frameworks & generative techniques |
| |
Gan Deng
|
|
Pages: 945-948 |
|
doi>10.1145/1134285.1134459 |
|
Full text: PDF
|
|
Component-based software engineering (CBSE) is increasingly being adopted for large-scale software systems, particularly for enterprise distributed real-time and embedded (DRE) systems. One of the most challenging, and often most neglected, problems in CBSE for enterprise DRE systems is the system (re)deployment and (re)configuration (D&C) process, where the increasing heterogeneity and versatility of application domains requires support for an unprecedented level of configurability and adaptability. Existing D&C technologies suffer from two major problems: (1) insufficient module-level reusability and ability to evolve in the face of functionality evolution and diversification, due to the interaction of too many orthogonal concerns imposed by a wide range of application requirements, and (2) significant inherent and accidental complexities stemming from inadequate design tools. To address these problems, my research focuses on improving both computing performance and human productivity associated with the D&C of component-based enterprise DRE systems. To improve computing performance, my research has systematically identified bottlenecks with conventional D&C approaches and provides an aspect-oriented approach to decouple "extrinsic" orthogonal D&C concerns from the "intrinsic" core D&C infrastructure, thereby enabling different crosscutting D&C concerns to be woven independently to create a light-weight, highly optimized and extensible D&C infrastructure. To improve human productivity, my research provides model-driven tools and analysis techniques to alleviate key inherent and accidental complexities in the D&C process.
|
|
|
A framework for modelling and analysis of software systems scalability |
| |
Leticia Duboc,
David S. Rosenblum,
Tony Wicks
|
|
Pages: 949-952 |
|
doi>10.1145/1134285.1134460 |
|
Full text: PDF
|
|
Scalability is a widely-used term in scientific papers, technical magazines and software descriptions. Its use in the most varied contexts contributes to a general confusion about what the term really means. This lack of consensus is a potential source of problems, as assumptions are made in the face of a scalability claim. A clearer and widely-accepted understanding of scalability is required to restore the usefulness of the term. This research investigates commonly found definitions of scalability and attempts to capture its essence in a systematic framework. Its expected contribution is in assisting software developers to reason about, characterize, communicate and adjust the scalability of software systems.
|
|
|
Refactoring-aware version control |
| |
Tammo Freese
|
|
Pages: 953-956 |
|
doi>10.1145/1134285.1134461 |
|
Full text: PDF
|
|
Today, refactorings are supported in some integrated development environments (IDEs). The refactoring operations can only work correctly if all source code that needs to be changed is available to the IDE. However, this precondition holds neither for application programming interface (API) evolution nor for team development. The research presented in this paper aims to support refactoring in API evolution and team development by extending the IDE and version control to allow refactoring-aware merging and migration.
|
|
|
Testing-based interactive fault localization |
| |
Dan Hao
|
|
Pages: 957-960 |
|
doi>10.1145/1134285.1134462 |
|
Full text: PDF
|
|
|
|
|
Unanticipated reuse of large-scale software features |
| |
Reid Holmes
|
|
Pages: 961-964 |
|
doi>10.1145/1134285.1134463 |
|
Full text: PDF
|
|
Software reuse has been endorsed as a way to reduce development times and costs while increasing software quality and reliability. Techniques designed to encourage software reuse have concentrated on creating reusable software in the form of frameworks, reuse repositories, and component libraries. These approaches do not help a developer who wants to leverage, from an existing system, a complex feature that was not designed to be reusable. We propose an approach that allows developers to investigate the reuse potential of a feature within an existing system, to create a plan for reusing the feature, and to support the transformation of the feature to the developer's project. We believe that by providing explicit support for the reuse of large-scale source code features, the reuse process, and its benefits, can be made accessible to developers.
|
|
|
Improving the customer configuration update process by explicitly managing software knowledge |
| |
Slinger Jansen
|
|
Pages: 965-968 |
|
doi>10.1145/1134285.1134464 |
|
Full text: PDF
|
|
The implementation and continuous support of a software product at a customer with evolving requirements is a complex task for a product software vendor. There are many customers for the vendor to serve, all of whom might require their own version or variant of the application. Furthermore, the software application itself will consist of many (software) components that depend on each other to function correctly. On top of that, these components will evolve over time to meet the changing needs of customers. To address this problem, we propose to reduce the software release and deployment effort and the risks associated with it. This will be achieved by explicitly managing typical knowledge about the software product, such as configuration and dependency information, thereby allowing software vendors to improve the customer configuration updating process. The proposed solution of knowledge management at both the customer and the vendor site is validated through industrial case studies.
|
|
|
Visual languages for event integration specification |
| |
Na Liu
|
|
Pages: 969-972 |
|
doi>10.1145/1134285.1134465 |
|
Full text: PDF
|
|
We are exploring existing approaches and developing new techniques for visual event-based system integration. We are using domain-specific visual languages with different high-level visual metaphors (including Tool Abstraction, Event-Query-Filter-Action and Spreadsheet) to specify event-handling support and provide backend processing tool support for event integration specification and visualisation of event propagation. We aim to generalise from three exemplar visual event-driven system metaphors and develop a new, generic visual event handling metaphor. From this we will build a visual environment for specifying event-based system integration. The visual metaphor we are developing should adapt the event-based communication model to a wide range of application domains, and also should support complex and intelligent system design and implementation.
|
|
|
XML conceptual modeling with XUML |
| |
HongXing Liu,
YanSheng Lu,
Qing Yang
|
|
Pages: 973-976 |
|
doi>10.1145/1134285.1134466 |
|
Full text: PDF
|
|
As XML has become the standard format for representing structured and semi-structured data on the Web, the methods for designing XML schemas are becoming more and more important. XML schemas represent the logical models of the documents. In order to design or integrate XML schemas, it is necessary to first design the conceptual structures with a proper conceptual model. We thus specify an XML conceptual model, XUML, which has the following characteristics compared with the existing XML conceptual models: 1) expressing the containment semantics more explicitly; 2) supporting the concept of Business Components; 3) specifying the data dependencies in multiple contexts. XUML is defined based on the UML2 standard, so it will be more friendly and practical for those who have knowledge of and experience with UML. Based on XUML, we further present the framework of a methodology dedicated to the design of XML documents and XML databases. In this paper, the focus is on XUML.
|
|
|
Experimental program analysis: a new paradigm for program analysis |
| |
Joseph R. Ruthruff
|
|
Pages: 977-980 |
|
doi>10.1145/1134285.1134467 |
|
Full text: PDF
|
|
Program analysis techniques are used by software engineers to deduce and infer targeted characteristics of software systems for tasks such as testing, debugging, maintenance, and program comprehension. Recently, some program analysis techniques have been designed to leverage characteristics of traditional experimentation in order to analyze software systems. We believe that the use of experimentation for program analysis constitutes a new program analysis paradigm: experimental program analysis. This research seeks to accomplish four goals: to precisely define experimental program analysis, to provide a means for classifying experimental program analysis techniques, to identify existing experimental program analysis techniques in the research literature, and to enhance the use of experimental program analysis by improving existing, and by creating new, experimental program analysis techniques.
|
|
|
The echo approach to formal verification |
| |
Xiang Yin
|
|
Pages: 981-984 |
|
doi>10.1145/1134285.1134468 |
|
Full text: PDF
|
|
In this research abstract, we propose Echo: a general formal verification approach that combines theorem proving, model checking, and code-level tools to show an implementation's compliance with its formal specification. We believe that this approach is novel since the major proof step is carried out between two abstract specification models, thus avoiding or mitigating the difficulty of the direct compliance proof of a concrete implementation against an abstract formal specification in traditional Floyd-Hoare verification. We present our prototype design and implementation of the major components of the approach and we instantiate the approach to verify SPARK Ada implementations against PVS specifications. We conducted an initial experiment to determine the feasibility of the approach using a hypothetical avionics system.
|
|
|
A new approach for software testability analysis |
| |
Liang Zhao
|
|
Pages: 985-988 |
|
doi>10.1145/1134285.1134469 |
|
Full text: PDF
|
|
Software testability analysis has been an important research direction since the 1990s and has become more pervasive in the 21st century. In this paper, we summarize problems in existing research work. We propose to use the beta distribution to indicate software testability. When incorporating testing effectiveness information, we theoretically prove that the distribution can express testing effort and test value at the same time. We conduct experiments and validate our results on the Siemens programs. Future work concentrates on deducing a prior estimate of the distribution for a given software and testing criterion pair from program slicing and semantic analysis.
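One plausible reading of "using a Beta distribution to indicate testability" is sketched below in Python: treat the distribution's parameters as pseudo-counts of fault-revealing versus non-revealing test executions and update them as testing effort accumulates. This is an illustrative assumption about the model, not the paper's exact formulation.

```python
# Treat alpha/beta as pseudo-counts of fault-revealing vs. non-revealing executions;
# an assumed reading of the abstract, used here only for illustration.

def update_testability(alpha, beta, revealing, non_revealing):
    """Bayesian-style update of a Beta(alpha, beta) testability estimate."""
    return alpha + revealing, beta + non_revealing

def mean_testability(alpha, beta):
    """Expected probability that one more test execution reveals the (assumed) fault."""
    return alpha / (alpha + beta)

a, b = 1.0, 1.0                                   # uninformative prior
a, b = update_testability(a, b, revealing=3, non_revealing=97)
print(round(mean_testability(a, b), 3))           # ~0.039 after 100 executions
```

A distribution, unlike a single number, can carry both how much testing has been done (the magnitude of the counts) and what it suggested (their ratio), which is the kind of dual information the abstract alludes to.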
|
|
|
POSTER SESSION: Doctoral symposium: posters |
|
|
|
|
Debugging by asking questions about program output |
| |
Andrew Ko
|
|
Pages: 989-992 |
|
doi>10.1145/1134285.1134471 |
|
Full text: PDF
|
|
One reason debugging is the most time-consuming part of software development is that developers struggle to map their questions about a program's behavior onto debugging tools' limited support for analyzing code. Interrogative debugging is a new debugging paradigm that allows developers to ask questions directly about their programs' output, helping them to more efficiently and accurately determine what parts of the system to understand. An interrogative debugging prototype called the Whyline is described, which has been shown to reduce debugging time by a factor of eight. Several extensions and generalizations to it are proposed, including plans for evaluating their effectiveness.
|
|
|
Improving the quality of UML models in practice |
| |
Christian F. J. Lange
|
|
Pages: 993-996 |
|
doi>10.1145/1134285.1134472 |
|
Full text: PDF
|
|
The importance of UML models in software engineering is increasing. Inherent to the UML are its lack of a formal semantics, its risk of inconsistency and completeness defects, and the absence of modeling norms. These properties are sources of poor model quality and defects. To find out to what extent defects occur and what types of defects occur in practice, we empirically investigate the state of the practice of quality in UML models using a practitioners survey and a series of industrial case studies. Additionally we analyze the effects of defects in UML models experimentally. Based on this experiment we present an objective classification of UML defects which allows defects to be prioritized and resources to be allocated for defect removal. We aim at building a rule set, metrics and visualization techniques to improve the quality of UML models during development. We propose a quality model that is specific to UML models. Finally, we propose modeling conventions, similar to coding conventions, to prevent defects and to assure uniformity of modeling within an organization. We aim at empirically validating our techniques to provide pragmatic technology that can be transferred to industrial practice.
|
|
|
Developing cost-effective model-based techniques for GUI testing |
| |
Qing Xie
|
|
Pages: 997-1000 |
|
doi>10.1145/1134285.1134473 |
|
Full text: PDF
|
|
Most of today's software users interact with the software through a graphical user interface (GUI). While GUIs have become ubiquitous, testing of GUIs has remained, until recently, a neglected research area. Existing GUI testing techniques are extremely resource intensive, primarily because GUIs have very large input spaces. This research proposes to advance the state of the art in GUI testing by empirically studying GUI faults, interactions between GUI events, and why certain event interactions lead to faults, and to use the results of these studies to develop cost-effective model-based GUI testing techniques. The novel feature of this research will be a reduced model of the GUI's event-interaction space. The model will be derived automatically from the GUI; it will be used to automatically generate specialized GUI test cases that are effective at detecting GUI faults. The model will be extended to develop new test oracles, new coverage criteria for GUIs, and new regression testing techniques. Moreover, this research will empirically evaluate the developed techniques.
|
|
|
Taking lessons from history |
| |
Thomas Zimmermann
|
|
Pages: 1001-1005 |
|
doi>10.1145/1134285.1134474 |
|
Full text: PDF
|
|
Mining of software repositories has become an active research area. However, most past research considered any change to software as beneficial. This thesis will show how we can benefit from a classification into good and bad changes. The knowledge of bad changes will improve defect prediction and localization. Furthermore, we will describe how to learn project-specific error patterns that will help reduce future errors.
|
|
|
WORKSHOP SESSION: Workshops |
|
|
|
|
Session details: Workshops |
| |
F. Paulisch
|
|
doi>10.1145/3245456 |
|
Full text: PDF
|
|
|
|
|
Software engineering for secure systems |
| |
Danilo Bruschi,
Bart De Win,
Mattia Monga
|
|
Pages: 1007-1008 |
|
doi>10.1145/1134285.1134476 |
|
Full text: PDF
|
|
|
|
|
Second international workshop on interdisciplinary software engineering research (WISER) |
| |
Nikolay Mehandjiev,
Pearl Brereton,
John Hosking
|
|
Pages: 1009-1010 |
|
doi>10.1145/1134285.1134477 |
|
Full text: PDF
|
|
WISER is a series of international workshops that focus on identifying and transferring techniques from other disciplines that might usefully be applied to software engineering research and practice. The workshops address this topic through presentations and discussions of both actual case studies and of ways in which potentially useful approaches can be identified, adapted and adopted within software engineering.
|
|
|
Third international summit on software engineering education (SSEE III): bridging the university/industry gap |
| |
J. Barrie Thompson,
Helen M. Edwards
|
|
Pages: 1011-1012 |
|
doi>10.1145/1134285.1134478 |
|
Full text: PDF
|
|
Innovative University/Industry interactions are examined in this open event with the aim of providing inputs to an international project that is being funded through the United Kingdom's Teaching Fellowship Scheme. These inputs will support the first stage of the project which is concerned with gaining knowledge of industrial Software Engineering practices and the development of a framework that can be used in the classification and evaluation of such practices.
|
|
|
Early aspects at ICSE: workshop in aspect-oriented requirements engineering and architecture design |
| |
Paul C. Clements
|
|
Pages: 1013-1014 |
|
doi>10.1145/1134285.1134479 |
|
Full text: PDF
|
|
This paper summarizes the Workshop in Aspect-Oriented Requirements Engineering and Architecture Design.
|
|
|
Software engineering for adaptive and self-managing systems |
| |
Betty H. C. Cheng,
David Garlan,
Rogério de Lemos,
Jeff Magee,
Richard Taylor,
Stephen Fickas,
Hausi Müller
|
|
Pages: 1015-1016 |
|
doi>10.1145/1134285.1134480 |
|
Full text: PDF
|
|
The objective of this workshop is to consolidate the interest in the software engineering community on autonomic, self-managing, self-healing, self-optimizing, self-configuring, and self-adaptive systems. The workshop will provide a forum for researchers to share new results, raise awareness of new adaptive concerns, and promote collaboration among the community. This workshop will be the first of several to assess progress and identify challenges in this important area.
|
|
|
The role of abstraction in software engineering |
| |
Jeff Kramer,
Orit Hazzan
|
|
Pages: 1017-1018 |
|
doi>10.1145/1134285.1134481 |
|
Full text: PDF
|
|
This workshop explores the concept of abstraction in software engineering at the individual, team and organization level. The aim is to explore the role of abstraction in dealing with complexity in the software engineering process, to discuss how the use of different levels of abstraction may facilitate performance of different activities, and to examine whether or not abstraction skills can be taught.
|
|
|
Workshop description of 4th workshop on software quality (WOSQ) |
| |
Sunita Chulani,
Barry Boehm,
June Verner,
Bernard Wong
|
|
Pages: 1019-1020 |
|
doi>10.1145/1134285.1134482 |
|
Full text: PDF
|
|
Cost, schedule and quality are highly correlated factors in software development. They basically form three sides of the same triangle. Beyond a certain point (the "Quality is Free" point), it is difficult to increase the quality without increasing either cost or schedule or both for the software under development. As products and applications mature, users expect higher quality products. They want IT organizations to be responsible and accountable for the quality claims made by the product marketing teams. In the last couple of decades, much software engineering research has focused on standards, methodologies and techniques for improving software quality, measuring software quality and software quality assurance. Most of this research is focused on the internal/development view of quality. More recent studies done in conjunction with the marketing groups have made attempts to understand the customer view of quality. All of these different ongoing activities to understand quality from the various perspectives have made the field even more enriching and exciting. The Fourth Workshop on Software Quality aims to bring together academic, industrial and commercial communities interested in software quality topics to discuss the different technologies being defined and used in the software quality area.
|
|
|
MSR 2006: the 3rd international workshop on mining software repositories |
| |
Stephan Diehl,
Harald Gall,
Martin Pinzger,
Ahmed E. Hassan
|
|
Pages: 1021-1021 |
|
doi>10.1145/1134285.1134483 |
|
Full text: PDF
|
|
|
|
|
Fifth workshop on software engineering for large-scale multi-agent systems (SELMAS) |
| |
Ricardo Choren,
Ho-fung Leung,
Alessandro Garcia,
Carlos Lucena,
Holger Giese,
Alexander Romanovsky
|
|
Pages: 1022-1023 |
|
doi>10.1145/1134285.1134484 |
|
Full text: PDF
|
|
Software is becoming present in every aspect of our lives, pushing us inevitably towards a world of ambient computing systems. Multi-agent systems (MAS) are a prominent technology which facilitates modeling and development of large-scale distributed systems. In recent years, software engineering research has focused on methodologies and techniques for improving MAS design and implementation. However, making large MAS dependable is still an open issue. The Fifth Workshop on Software Engineering for Large-Scale Multi-Agent Systems (SELMAS 2006) aims to bring together academic, industrial and commercial communities interested in agent-oriented software engineering topics to discuss the different technologies being defined and used in the development of dependable MAS.
|
|
|
Workshop on technology transfer in software engineering |
| |
Warren Harrison,
R. J. Wieringa
|
|
Pages: 1024-1025 |
|
doi>10.1145/1134285.1134485 |
|
Full text: PDF
|
|
In many industries, the adoption of technology developed at universities and independent research labs is the prevalent paradigm. However, in the software space, this is a relatively rare occurrence. In many cases, academic software engineering tends to lag rather than lead commercial developments. The goal of this Workshop is to open a dialogue between researchers and practitioners to address this problem.
|
|
|
First international workshop on global integrated model management |
| |
Jean Bézivin,
Jean-Marie Favre,
Bernhard Rumpe
|
|
Pages: 1026-1027 |
|
doi>10.1145/1134285.1134486 |
|
Full text: PDF
|
|
|
|
|
The first international workshop on automation of software test |
| |
Hong Zhu,
Joseph R. Horgan,
S. C. Cheung,
J. Jenny Li
|
|
Pages: 1028-1029 |
|
doi>10.1145/1134285.1134487 |
|
Full text: PDF
|
|
|
|
|
2nd international workshop on advances and applications of problem frames |
| |
Jon G. Hall,
Lucia Rapanotti,
Karl Cox,
Zhi Jin
|
|
Pages: 1030-1031 |
|
doi>10.1145/1134285.1134488 |
|
Full text: PDF
|
|
Software problems originate from real world problems. A software solution must address its real world problem in a satisfactory way. A software engineer must therefore understand the real world problem that their software intends to address. To be able to do this, the software engineer must understand the problem context and how it is to be affected by the proposed software, expressed as the requirements. Without this knowledge the engineer can only hope to chance upon the right solution for the problem. Application of the Problem Frames approach may well be a way of meeting this need.
|
|
|
Global software development for the practitioner |
| |
Philippe Kruchten,
Yvonne Hsieh,
Eve MacGregor,
Deependra Moitra,
Wolfgang Strigel,
Christof Ebert
|
|
Pages: 1032-1033 |
|
doi>10.1145/1134285.1134489 |
|
Full text: PDF
|
|
This International Workshop on Global Software Development for the Practitioner (GSD2006) was held in conjunction with the 28th International Conference on Software Engineering (ICSE 2006) on May 23rd, 2006 in Shanghai, China. The workshop was motivated by the industry trend towards developing software in globally distributed settings: geographically distributed teams, or outsourcing parts of the software development to other organizations in other parts of the world. Topics presented and discussed in the workshop focused on grounded, practical strategies and techniques that address the geographic, temporal, organizational, and cultural boundaries inherent in global software projects.
|
|
|
3rd international workshop on software engineering for automotive systems - SEAS 2006 |
| |
Martin Rappl,
Alexander Pretschner,
Christian Salzmann,
Thomas Stauner
|
|
Pages: 1034-1034 |
|
doi>10.1145/1134285.1134490 |
|
Full text: PDF
|
|
This workshop summary presents an overview of the one-day International Workshop on Software Engineering for Automotive Systems (SEAS 2006), held in conjunction with the 28th International Conference on Software Engineering (ICSE'06). Details about SEAS 2006 may be found at: http://www.inf.ethz.ch/personal/pretscha/events/seas06/.
|
|
|
Fourth international workshop on dynamic analysis (WODA 2006) |
| |
Neelam Gupta,
Andy Podgurski
|
|
Pages: 1035-1035 |
|
doi>10.1145/1134285.1134491 |
|
Full text: PDF
|
|
Dynamic analysis techniques reason over program executions and deal with data produced at program execution time. Dynamic analysis and static analysis techniques complement each other. Hence, a key focus of the workshop is dynamic analysis of software systems with an emphasis on research that integrates static and dynamic analyses.
|
|
|
International workshop on service oriented software engineering (IW-SOSE'06) |
| |
Elisabetta Di Nitto,
Robert J. Hall,
Jun Han,
Yanbo Han,
Andrea Polini,
Kurt Sandkuhl,
Andrea Zisman
|
|
Pages: 1036-1037 |
|
doi>10.1145/1134285.1134492 |
|
Full text: PDF
|
|
|
|
|
The 8th international workshop on economics-driven software engineering research |
| |
Rick Kazman,
Kevin Sullivan
|
|
Pages: 1038-1038 |
|
doi>10.1145/1134285.1134493 |
|
Full text: PDF
|
|
This paper presents the 8th International Workshop on Economics-Driven Software Engineering Research (EDSER-8).
|
|
|
Workshop description of 5th intl. workshop on scenarios and state machines: models-algorithms-and tools (SCESM) |
| |
Jon Whittle,
Leif Geiger,
Michael Meisinger
|
|
Pages: 1039-1040 |
|
doi>10.1145/1134285.1134494 |
|
Full text: PDF
|
|
SCESM '06 is the 5th International Workshop on Scenarios and State Machines: Models, Algorithms and Tools. It is a one day ICSE '06 workshop. Details about SCESM '06 may be found at http://ise.gmu.edu/scesm06/.
|
|
|
TUTORIAL SESSION: Tutorials: full day tutorials |
|
|
|
|
Session details: Tutorials: full day tutorials |
| |
S. C. Cheung,
S. Easterbrook
|
|
doi>10.1145/3245457 |
|
Full text: PDF
|
|
|
|
|
Software engineering themes for the future |
| |
Gerhard Fischer
|
|
Pages: 1043-1044 |
|
doi>10.1145/1134285.1134496 |
|
Full text: PDF
|
|
The objective of this tutorial is to provide the participants with opportunities to think differently about future challenges facing software engineering research and practice. Collaborative design, social creativity, and meta-design are identified as themes that will be of great importance in the years to come. The concept of design is used very broadly, affecting all aspects of the process of creating, using, and evolving software-intensive systems. Stakeholders coming from different disciplines and engaging in collaborative design can contribute to social creativity by exploring new approaches, new problems, and new visions. Meta-design is a methodology empowering users to act not only as passive consumers but as active contributors and designers, thereby facilitating and supporting social creativity. The themes of the tutorial will be illustrated with specific theoretical frameworks and innovative systems. The relevance of these themes has been demonstrated by their desirability and importance for research, education, and design practices in companies, educational institutions, and research organizations.
|
|
|
Case studies for software engineers |
| |
Dewayne E. Perry,
Susan Elliott Sim,
Steve Easterbrook
|
|
Pages: 1045-1046 |
|
doi>10.1145/1134285.1134497 |
|
Full text: PDF
|
|
The topic of this full-day tutorial was the correct use and interpretation of case studies as an empirical research method. Using an equal blend of lecture and discussion, it gave attendees a foundation for conducting, reviewing, and reading case studies. There were lessons for software engineers as researchers who conduct and report case studies, reviewers who evaluate papers, and practitioners who are attempting to apply results from papers. The main resource for the course was the book Case Study Research: Design and Methods by Robert K. Yin. This text was supplemented with positive and negative examples from the literature.
|
|
|
Engineering safety-related requirements for software-intensive systems |
| |
Donald Firesmith
|
|
Pages: 1047-1048 |
|
doi>10.1145/1134285.1134498 |
|
Full text: PDF
|
|
Many software-intensive systems have significant safety ramifications and need to have their associated safety-related requirements properly engineered. It has been observed by multiple consultants, researchers, and authors that inadequate requirements are a major cause of accidents involving software-intensive systems. Yet in practice, there is very little interaction between the requirements and safety disciplines and little collaboration between their respective communities. Most requirements engineers know little about safety engineering, and most safety engineers know little about requirements engineering. Also, safety engineering typically concentrates on architectures and designs rather than requirements, because hazard analysis typically depends on the identification of hardware and software components whose failure can cause accidents. This leads to safety-related requirements that are often ambiguous, incomplete, and even missing. The tutorial begins with a single common realistic example of a safety-critical system that will be used throughout to provide good examples of safety-related requirements. The tutorial then provides an introduction to requirements engineering for safety engineers and an introduction to safety engineering for requirements engineers. It then provides clear definitions and descriptions of the different kinds of safety-related requirements and finishes with a practical process for producing them.
|
|
|
Variability management in software product line engineering |
| |
Klaus Pohl,
Andreas Metzger
|
|
Pages: 1049-1050 |
|
doi>10.1145/1134285.1134499 |
|
Full text: PDF
|
|
By explicitly modeling and managing variability, software product line engineering provides a systematic approach for creating a diversity of similar products at low cost, in short time, and with high quality. This tutorial focuses on the two principal differences of software product line engineering when compared to single-system development: the differentiation of two key development processes (domain engineering and application engineering) and the explicit representation and management of variability. We characterize the two processes and their main activities and introduce the orthogonal variability modeling approach (OVM). We further illustrate the OVM approach in the product line requirements engineering and product line testing activities.
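As a rough illustration of the variability idea described in this abstract (not the OVM notation or tooling covered in the tutorial), the following Java sketch models variation points with allowed variants and checks a hypothetical product configuration against them; all names and the structure are invented for this example.

```java
import java.util.*;

// Toy model of variability: a variation point with a set of allowed variants.
// Names and structure are invented for illustration; they do not reproduce OVM.
final class VariationPoint {
    final String name;
    final Set<String> variants;
    final boolean mandatory;           // must a variant be chosen for every product?

    VariationPoint(String name, boolean mandatory, String... variants) {
        this.name = name;
        this.mandatory = mandatory;
        this.variants = new LinkedHashSet<>(Arrays.asList(variants));
    }
}

public class ProductDerivation {
    public static void main(String[] args) {
        List<VariationPoint> model = List.of(
            new VariationPoint("payment", true, "creditCard", "invoice"),
            new VariationPoint("language", true, "en", "de", "fr"),
            new VariationPoint("newsletter", false, "weekly", "monthly"));

        // One concrete product: a binding of variation points to chosen variants.
        Map<String, String> product = Map.of("payment", "invoice", "language", "en");

        // Check the configuration against the variability model.
        for (VariationPoint vp : model) {
            String chosen = product.get(vp.name);
            if (chosen == null && vp.mandatory) {
                System.out.println("Missing mandatory choice for: " + vp.name);
            } else if (chosen != null && !vp.variants.contains(chosen)) {
                System.out.println("Illegal variant for " + vp.name + ": " + chosen);
            }
        }
        System.out.println("Derived product: " + product);
    }
}
```

A real variability model would also capture dependencies between variation points (requires/excludes constraints), which this toy omits.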
|
|
|
Performing systematic literature reviews in software engineering |
| |
David Budgen,
Pearl Brereton
|
|
Pages: 1051-1052 |
|
doi>10.1145/1134285.1134500 |
|
Full text: PDF
|
|
Context: Making best use of the growing number of empirical studies in Software Engineering, for making decisions and formulating research questions, requires the ability to construct an objective summary of available research evidence. Adopting a systematic approach to assessing and aggregating the outcomes from a set of empirical studies is also particularly important in Software Engineering, given that such studies may employ very different experimental forms and be undertaken in very different experimental contexts. Objectives: To provide an introduction to the role, form and processes involved in performing Systematic Literature Reviews. After the tutorial, participants should be able to read and use such reviews, and have gained the knowledge needed to conduct systematic reviews of their own. Method: We will use a blend of information presentation (including some experiences of the problems that can arise in the Software Engineering domain), and also of interactive working, using review material prepared in advance.
|
|
|
Cost-effective engineering of web applications pragmatic reuse: building web application product lines |
| |
Stan Jarzabek,
Ulf Pettersson
|
|
Pages: 1053-1054 |
|
doi>10.1145/1134285.1134501 |
|
Full text: PDF
|
|
Web Applications (WA) are developed and maintained under tight schedules. Much similarity across WAs creates opportunities for cutting development cost and easing evolution via reuse. This tutorial shows a practical way to exploit similarity patterns - at architecture and code levels - to simplify the design of WAs, helping to meet the unique challenges of Web engineering.
|
|
|
Software evolution: analysis and visualization |
| |
Harald C. Gall,
Michele Lanza
|
|
Pages: 1055-1056 |
|
doi>10.1145/1134285.1134502 |
|
Full text: PDF
|
|
Gaining higher-level evolutionary information about large software systems is a key challenge in dealing with increasing complexity and decreasing software quality. Software repositories containing modifications, changes, or release information are rich sources for distinct kinds of analyses: they reflect the reasons and effects of particular changes made to the software system over a certain period of time. If we can analyze these repositories in an effective way, we get a clearer picture of the status of the software. Software repositories can be analyzed to provide information about the problems concerning a particular feature or a set of features. Hidden dependencies of structurally unrelated but over time logically coupled files exhibit a high potential to illustrate software evolution and possible architectural deterioration. In this tutorial, we describe the investigation of software evolution by taking a step towards reflecting the analysis results against software quality attributes. Different kinds of analyses (from architecture to code) and their interpretation will be presented and discussed in relation to quality attributes. This will show our vision of where such evolution investigations can lead and how they can support development. For that, the tutorial will touch on issues such as meta-models for evolution data, data analysis and history mining, software quality attributes, as well as visualization of analysis results.
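To make one of the analyses mentioned in this abstract concrete, here is a minimal Java sketch of logical-coupling mining: it counts how often pairs of files change together across commits. The commit data, threshold, and class name are invented, and the code does not reproduce the tutorial authors' tooling.

```java
import java.util.*;

// Naive logical coupling: count how often two files appear in the same commit.
// Commit contents here are made up; a real analysis would read a version-control log.
public class LogicalCoupling {
    public static void main(String[] args) {
        List<Set<String>> commits = List.of(
            Set.of("Parser.java", "Lexer.java"),
            Set.of("Parser.java", "Lexer.java", "Ast.java"),
            Set.of("Parser.java", "Ast.java"),
            Set.of("Ui.java"));

        Map<String, Integer> coChanges = new TreeMap<>();
        for (Set<String> commit : commits) {
            List<String> files = new ArrayList<>(commit);
            Collections.sort(files);
            for (int i = 0; i < files.size(); i++) {
                for (int j = i + 1; j < files.size(); j++) {
                    String pair = files.get(i) + " <-> " + files.get(j);
                    coChanges.merge(pair, 1, Integer::sum);
                }
            }
        }

        // Report pairs that changed together at least twice: candidates for hidden dependencies.
        coChanges.forEach((pair, count) -> {
            if (count >= 2) System.out.println(pair + " co-changed " + count + " times");
        });
    }
}
```

Pairs with high co-change counts but no structural dependency are the "hidden dependencies" the abstract refers to.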
|
|
|
Agile methods: moving towards the mainstream of the software industry |
| |
Frank Maurer,
Grigori Melnik
|
|
Pages: 1057-1058 |
|
doi>10.1145/1134285.1134503 |
|
Full text: PDF
|
|
A fleet of emerging agile methods of software development (with eXtreme Programming and Scrum being the most broadly used) is both gaining popularity and generating lots of controversy. This high-level tutorial gives an overview of agile methods and provides background to understand how agile teams are trying to address modern software development challenges. Analysis of initial empirical evidence is used to discuss strengths and limitations of agile methods in various contexts. The participants are introduced to the innovation diffusion models and environments, and discuss what is needed for agile methods to cross the chasm and move into the mainstream of software development.
|
|
|
Designing concurrent, distributed, and real-time applications with UML |
| |
Hassan Gomaa
|
|
Pages: 1059-1060 |
|
doi>10.1145/1134285.1134504 |
|
Full text: PDF
|
|
Object-oriented concepts are crucial in software design because they address fundamental issues of adaptation and evolution. With the proliferation of object-oriented notations and methods, the Unified Modeling Language (UML) has emerged to provide a standardized notation for describing object-oriented models. However, for the UML notation to be effectively applied, it needs to be used with an object-oriented analysis and design method. This tutorial describes the COMET method for designing real-time and distributed applications, which integrates object-oriented and concurrency concepts and uses UML.
|
|
|
TUTORIAL SESSION: Tutorials: half-day tutorials |
|
|
|
|
Aspect-oriented software development beyond programming |
| |
Awais Rashid,
Alessandro Garcia,
Ana Moreira
|
|
Pages: 1061-1062 |
|
doi>10.1145/1134285.1134506 |
|
Full text: PDF
|
|
This tutorial focuses on applying aspect-oriented software development (AOSD) concepts beyond the programming stage of the software development life cycle. Using concrete methods, tools, techniques and notations we discuss how to use AOSD techniques to systematically treat crosscutting concerns during requirements engineering (RE), architecture design and detailed design as well as the mapping between aspects at these stages. With a clear focus on composition, modelling, trade-off analysis and assessment methods, the tutorial imparts an engineering ethos for translation into day-to-day processes and practices.
|
|
|
From semantic web to expressive software specifications: a modeling languages spectrum |
| |
Jin Song Dong
|
|
Pages: 1063-1064 |
|
doi>10.1145/1134285.1134507 |
|
Full text: PDF
|
|
Many researchers at W3C currently focus on developing the next generation of the Web --- the Semantic Web. The development of the Web ontology languages, RDF, OWL and SWRL, is reminiscent of the early development of system specification languages in software engineering communities. Indeed, from the expressiveness point of view, Web ontology languages are subsets of Alloy, UML/OCL, VDM, Z and Object-Z. One can further predict that the modeling languages for capturing the behaviours of the Semantic Web Services and Agents can be drawn from the rich collections of software dynamic modeling techniques, e.g., state machines, process algebra and integrated design methods. This tutorial will present a concise Modeling Languages Spectrum that includes a few key representative modeling languages ranging from simple static Web Ontology modeling techniques to expressive dynamic integrated modeling techniques. Comparisons and transformations between those languages will be discussed. Furthermore, based on transformation approaches, the latest research results on applying software modeling techniques and tools to the Semantic Web domain will also be demonstrated.
|
|
|
Software architectures for dependable systems: a software engineering perspective |
| |
Rogério de Lemos
|
|
Pages: 1065-1066 |
|
doi>10.1145/1134285.1134508 |
|
Full text: PDF
|
|
Although there is a large body of research in dependability, architectural level reasoning about dependability is only just emerging as an important theme in software development. This is due to the fact that dependability concerns are often left until too late in the process of development. In addition, the complexity of emerging applications and the trend of building trustworthy systems from existing untrustworthy components are urging dependability concerns to be considered at the architectural level. This tutorial will present the current challenges and promising solutions for structuring dependable systems at the architectural level. In addition to providing basic concepts related to dependability and software architectures, the rest of the tutorial is presented in the context of dependability technologies. Throughout the tutorial, case studies will be used to exemplify the key concepts.
|
|
|
Tutorial: towards dynamic web services |
| |
Luciano Baresi,
Sam Guinea
|
|
Pages: 1067-1068 |
|
doi>10.1145/1134285.1134509 |
|
Full text: PDF
|
|
This tutorial introduces dynamic web services as a solution to cope with the dynamism and flexibility required by many modern software systems. Current technologies (WSDL, WS-BPEL, etc.) have proven insufficient in addressing these issues; however, they remain a good starting point for the analysis of the current situation and for building for the future. The core part of the tutorial analyzes, by looking at available technologies and prominent research proposals, the deployment and execution of these applications within three separate phases: a composition phase, to discover available services and implement the desired behavior; a monitoring phase, to understand whether a given service is behaving correctly (with respect to both functional and non-functional requirements); and a recovery phase, to react to anomalies by means of suitable replanning or recovery strategies. In conclusion, the tutorial summarizes the main topics, presents a list of still-to-be-solved problems, and highlights possible directions for future research.
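As a toy sketch of the monitoring and recovery phases described in this abstract (not BPEL and not the tutorial's own technology), the following Java fragment invokes a primary service, checks a simple functional post-condition on the reply, and falls back to a substitute service on failure; the service names and the post-condition are invented.

```java
import java.util.function.Supplier;

// Toy illustration of the monitoring and recovery phases: call a service,
// check a post-condition on its reply, and switch to a backup if it fails.
// The "services" are plain suppliers; real code would call remote endpoints.
public class MonitoredInvocation {
    static String invoke(Supplier<String> primary, Supplier<String> backup) {
        try {
            String reply = primary.get();
            if (reply != null && !reply.isBlank()) {   // monitoring: functional post-condition
                return reply;
            }
        } catch (RuntimeException e) {
            System.out.println("Primary failed: " + e.getMessage());
        }
        return backup.get();                            // recovery: substitute service
    }

    public static void main(String[] args) {
        Supplier<String> flakyQuoteService = () -> { throw new RuntimeException("timeout"); };
        Supplier<String> fallbackQuoteService = () -> "EURUSD=1.08";
        System.out.println(invoke(flakyQuoteService, fallbackQuoteService));
    }
}
```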
|
|
|
Tutorial: an overview of UML 2 |
| |
Bran Selic
|
|
Pages: 1069-1070 |
|
doi>10.1145/1134285.1134510 |
|
Full text: PDF
|
|
This half-day tutorial covers the salient features of the first major revision of the Unified Modeling Language - UML 2. This short note summarizes the major topics covered by the tutorial.
|
|
|
Web service orchestration with BPEL |
| |
Liang Chen,
Bruno Wassermann,
Wolfgang Emmerich,
Howard Foster
|
|
Pages: 1071-1072 |
|
doi>10.1145/1134285.1134511 |
|
Full text: PDF
|
|
|
|
|
Creative requirements: invention and its role in requirements engineering |
| |
Neil Maiden,
Suzanne Robertson,
James Robertson
|
|
Pages: 1073-1074 |
|
doi>10.1145/1134285.1134512 |
|
Full text: PDF
|
|
Requirements engineering is too often seen as a "stenographer's task", one where the requirements engineer passively listens and records while the stakeholders state their needs. However, this approach relies on stakeholders knowing what they need and what they want. Experience tells us that except for rare visionaries, people do not know what they want until they see it. Many of the useful products that we take for granted today did not come about from the stakeholders' imagination, but from an invention. In this tutorial we explain and illustrate how to use creative techniques to invent requirements that result in more useful, usable and competitive products. We provide a guide for invention, and show participants how to use this guide to invent innovative requirements for a familiar system.
|
|
|
Testing concurrent Java components |
| |
Paul Strooper,
Luke Wildman
|
|
Pages: 1075-1076 |
|
doi>10.1145/1134285.1134513 |
|
Full text: PDF
|
|
Testing concurrent software is notoriously difficult due to problems with non-determinism and synchronisation. While tools and techniques for the testing of sequential components are well understood and widely used, similar tools and techniques for concurrent components are not commonly available. This tutorial will look at the problems associated with testing concurrent components and propose techniques for dealing with these problems. The ConAn (Concurrency Analyser) testing tool supports these techniques for the testing of concurrent Java components and will be discussed and demonstrated in the tutorial. The limitations of the techniques and of ConAn, as well as additional V&V tools and techniques to address these limitations, will be presented.
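The non-determinism problem can be made concrete with a small Java example (a motivating toy, not the ConAn tool mentioned above): many threads increment an unsynchronized counter, and the observed total usually differs from the expected one and varies between runs, which is precisely what defeats naive test oracles for concurrent components.

```java
import java.util.concurrent.CountDownLatch;

// Demonstrates the non-determinism that makes concurrent components hard to test:
// an unsynchronized counter updated by many threads rarely reaches the expected total.
// This is only a motivating toy, not the ConAn tool discussed in the tutorial.
public class RacyCounterDemo {
    static int counter = 0;   // deliberately unsynchronized

    public static void main(String[] args) throws InterruptedException {
        final int threads = 8, incrementsPerThread = 100_000;
        CountDownLatch start = new CountDownLatch(1);
        Thread[] workers = new Thread[threads];

        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                try { start.await(); } catch (InterruptedException ignored) { }
                for (int i = 0; i < incrementsPerThread; i++) counter++;   // lost updates happen here
            });
            workers[t].start();
        }
        start.countDown();                       // release all threads at once to raise interleaving
        for (Thread w : workers) w.join();

        System.out.println("Expected " + (threads * incrementsPerThread) + ", observed " + counter);
    }
}
```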
|
|
|
Modeling of component based systems |
| |
Weizhong Shao,
Gang Huang,
Haiyan Zhao
|
|
Pages: 1077-1078 |
|
doi>10.1145/1134285.1134514 |
|
Full text: PDF
|
|
Component-based software development (CBSD) has become a popular paradigm for Internet-based systems. Compared to other popular paradigms, CBSD supports development from reusable components rather than development from scratch. Consequently, modeling becomes more important than programming, and the modeling techniques of traditional paradigms have to be adapted to some extent. In particular, improper selection and misuse of modeling techniques can prevent the target system from benefiting from CBSD and even make the project fail. To help researchers and practitioners become proficient in CBSD, this tutorial will provide the basic knowledge and skills needed to model component-based systems systematically. First, we will introduce the technical and non-technical motivations of CBSD, with emphasis on software reuse, which has a significant impact on modeling. Second, we will present a systematic approach to modeling component-based systems with a set of existing, well-proven modeling techniques, including feature modeling for requirements specification, architecture modeling for abstract design, and object-oriented modeling for detailed design. These modeling techniques and a real-life project will be discussed in detail in the rest of the tutorial.
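For readers new to the paradigm, a minimal Java sketch of the core CBSD idea, components defined by provided and required interfaces and wired together at composition time, is given below; the interfaces and classes are invented for illustration and do not correspond to the modeling notations used in the tutorial.

```java
// Toy illustration of component-based composition: components expose provided
// interfaces and declare required ones, and are wired together externally.
// All interfaces and classes are invented for the example.
interface Logger { void log(String msg); }
interface OrderService { void placeOrder(String item); }

// A component providing OrderService and requiring a Logger.
final class OrderComponent implements OrderService {
    private final Logger logger;                 // required interface, injected at composition time
    OrderComponent(Logger logger) { this.logger = logger; }
    public void placeOrder(String item) { logger.log("order placed: " + item); }
}

// A component providing Logger.
final class ConsoleLogger implements Logger {
    public void log(String msg) { System.out.println("[log] " + msg); }
}

public class Composition {
    public static void main(String[] args) {
        // Composition: satisfy OrderComponent's required interface with ConsoleLogger.
        OrderService orders = new OrderComponent(new ConsoleLogger());
        orders.placeOrder("ICSE 2006 proceedings");
    }
}
```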
|
|
|
How to integrate usability into the software development process |
| |
Natalia Juristo,
Xavier Ferre
|
|
Pages: 1079-1080 |
|
doi>10.1145/1134285.1134515 |
|
Full text: PDF
|
|
Usability is increasingly recognized as a quality attribute that one has to deal with explicitly during development. Nevertheless, usability techniques, when applied, are decoupled from the software development process. The host of techniques offered by the HCI (Human-Computer Interaction) field makes selecting the most appropriate ones for a given project and organization difficult. Project managers and developers aiming to integrate usability practices into their software process face important challenges, as the techniques are not described within the frame of a software process as it is understood in SE (Software Engineering). Even when HCI experts (either in-house or from an external organization) are involved in the integration process, it is a tough endeavour due to the strong differences in terminology and overall approach to software development between HCI and SE. In this tutorial we will present, from an SE viewpoint, which usability techniques can be most valuable to development teams with little or no previous usability experience, how a particular set of techniques can be selected according to the specific characteristics of the organization and project, and how usability techniques match with the activity groups in the development process.
|
|
|
Software component models |
| |
Kung-Kiu Lau
|
|
Pages: 1081-1082 |
|
doi>10.1145/1134285.1134516 |
|
Full text: PDF
|
|
Component-based Development (CBD) is an important emerging topic in Software Engineering, promising long-sought benefits such as increased reuse and reduced time-to-market (and hence software production cost). However, there are at present many obstacles to overcome before CBD can succeed. For one thing, CBD success is predicated on a standardised marketplace for software components, which does not yet exist. In fact, CBD currently even lacks a universally accepted terminology: existing component models adopt different component definitions and composition operators. Therefore much research remains to be done. We believe that the starting point for this endeavour should be a thorough study of current component models, identifying their key characteristics and comparing their strengths and weaknesses. A desirable side-effect would be clarifying and unifying the CBD terminology. In this tutorial, we present a clear and concise exposition of all the current major software component models, including a taxonomy. The purpose is to distill and present knowledge of current software component models, as well as to present an analysis of their properties with respect to commonly accepted criteria for CBD. The taxonomy also provides a starting point for a unified terminology.
|