Theses (Computing)
http://hdl.handle.net/10059/37
2016-07-24T22:34:00Z
Combining search strategies for distributed constraint satisfaction.
http://hdl.handle.net/10059/1374
Combining search strategies for distributed constraint satisfaction.
Magaji, Amina Sambo-Muhammad
Many real-life problems such as distributed meeting scheduling, mobile frequency allocation
and resource allocation can be solved using multi-agent paradigms. Distributed constraint satisfaction
problems (DisCSPs) are a framework for describing such problems in terms of related
subproblems, called complex local problems (CLPs), which are dispersed over a number of locations,
each with its own constraints on the values its variables can take. An agent knows the
variables in its CLP, plus the variables (and their current values) which are directly related to one
of its own variables and the constraints relating them; it knows little about the rest of the problem.
Thus, each CLP is solved by an agent which cooperates with other agents to solve the overall
problem.
Algorithms for solving DisCSPs can be classified as either systematic or local search, with
the former being complete and the latter incomplete. The algorithms generally assume that each
agent has only one variable, as they can solve DisCSPs with CLPs using “virtual” agents. However,
in large DisCSPs where it is appropriate to trade completeness off against timeliness, systematic
search algorithms can be expensive compared to local search algorithms, which generally
converge to a solution more quickly (if a solution is found). A major drawback of local search
algorithms is getting stuck at local optima. Significant research has focused on heuristics
which can be used in an attempt to either escape or avoid local optima.
This thesis makes significant contributions to local search algorithms for DisCSPs. Firstly, we
present a novel combination of heuristics in DynAPP (Dynamic Agent Prioritisation with Penalties),
which is a distributed synchronous local search algorithm for solving DisCSPs having one
variable per agent. DynAPP combines penalties on values and dynamic agent prioritisation heuristics
to escape local optima. Secondly, we develop a divide and conquer approach that handles
DisCSPs with CLPs by exploiting the structure of the problem. This approach prioritises
finding variable instantiations which satisfy the constraints between agents, which are often
more expensive to satisfy than the constraints within an agent. The approach
also exploits concurrency and combines the following search strategies: (i) both systematic and
local searches; (ii) both centralised and distributed searches; and (iii) a modified compilation strategy.
We also present an algorithm that implements the divide and conquer approach in Multi-DCA
(Divide and Conquer Algorithm for Agents with CLPs).
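The penalty heuristic at the heart of DynAPP can be illustrated with a small, centralised sketch (the function names and breakout-style bookkeeping below are ours, not the thesis's; DynAPP additionally distributes the search across agents and re-prioritises them dynamically):

```python
import random

def penalty_local_search(variables, domains, constraints, max_steps=10000):
    """Min-conflicts local search with breakout-style penalties: when no
    single-variable move improves the (penalised) cost, the values in the
    current assignment are penalised, reshaping the landscape so the
    search can escape the local optimum."""
    assign = {v: random.choice(domains[v]) for v in variables}
    penalty = {(v, d): 0 for v in variables for d in domains[v]}

    def cost(a):
        violations = sum(1 for c in constraints if not c(a))
        return violations + sum(penalty[(v, a[v])] for v in variables)

    for _ in range(max_steps):
        if all(c(assign) for c in constraints):
            return assign  # every constraint satisfied
        improved = False
        for v in variables:
            best = min(domains[v], key=lambda d: cost({**assign, v: d}))
            if cost({**assign, v: best}) < cost(assign):
                assign[v] = best
                improved = True
                break
        if not improved:  # local optimum: penalise the current values
            for v in variables:
                penalty[(v, assign[v])] += 1
    return None

# Example: 2-colour a three-node path x - y - z
random.seed(0)
variables = ["x", "y", "z"]
domains = {v: [0, 1] for v in variables}
constraints = [lambda a, u=u, w=w: a[u] != a[w] for u, w in [("x", "y"), ("y", "z")]]
solution = penalty_local_search(variables, domains, constraints)
```

The penalties accumulate only at local optima, so the search behaves like plain min-conflicts until it gets stuck.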
DynAPP and Multi-DCA were evaluated on several benchmark problems and compared to the
leading algorithms for DisCSPs and DisCSPs with CLPs respectively. The results show that, in the
region of difficult problems, combining search heuristics and exploiting problem structure in
distributed constraint satisfaction achieve significant benefits (generally lower computational
time and communication costs) over existing competing methods.
2015-08-01T00:00:00Z
Autonomic behavioural framework for structural parallelism over heterogeneous multi-core systems.
http://hdl.handle.net/10059/1373
Autonomic behavioural framework for structural parallelism over heterogeneous multi-core systems.
Goli, Mehdi
With the continuous advancement in hardware technologies, significant research has been devoted
to designing and developing high-level parallel programming models that allow programmers
to exploit the latest developments in heterogeneous multi-core/many-core architectures.
Structural programming paradigms propose a viable solution for efficiently programming
modern heterogeneous multi-core architectures equipped with one or more programmable Graphics
Processing Units (GPUs). Applying structured programming paradigms, it is possible to
subdivide a system into building blocks (modules, skids or components) that can be independently
created and then used in different systems to derive multiple functionalities.
Exploiting such systematic divisions, it is possible to address extra-functional features such
as application performance, portability and resource utilisation from the component level in
heterogeneous multi-core architectures. While the computing function of a building block can
vary for different applications, the behaviour (semantics) of the block remains intact. Therefore,
by understanding the behaviour of building blocks and their structural compositions in
parallel patterns, the process of constructing and coordinating a structured application can be
automated.
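The separation described above, a fixed coordination behaviour around a varying computing function, is what a task-farm skeleton captures. The sketch below is a generic Python illustration using threads, not PEI's actual interface:

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, inputs, n_workers=4):
    """Task-farm skeleton: the coordination (distribute inputs, collect
    outputs in order) is fixed, while the worker function varies per
    application. Because the block's behaviour is known independently of
    its computing function, a framework can reason about and coordinate
    it automatically."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(worker, inputs))

results = farm(lambda x: x * x, [1, 2, 3, 4])
```

Swapping the worker changes the application without touching the coordination, which is the property a behavioural framework can exploit.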
In this thesis we have proposed Structural Composition and Interaction Protocol (SKIP)
as a systematic methodology to exploit the structural programming paradigm (the building-block
approach in this case) for constructing a structured application and extracting/injecting information
from/to the structured application. Using SKIP methodology, we have designed and
developed Performance Enhancement Infrastructure (PEI) as a SKIP compliant autonomic behavioural
framework to automatically coordinate structured parallel applications based on the
extracted extra-functional properties related to the parallel computation patterns.
We have used 15 different PEI-based applications (from large-scale applications with heavy
input workloads that take hours to execute to small-scale applications which take seconds to execute)
to evaluate PEI in terms of overhead and performance improvements. The experiments
have been carried out on 3 different heterogeneous (CPU/GPU) multi-core architectures (including
one cluster machine with 4 symmetric nodes with one GPU per node and 2 single
machines with one GPU per machine). Our results demonstrate that with less than 3% overhead,
we can achieve up to one order of magnitude speed-up when using PEI for enhancing
application performance.
2015-05-01T00:00:00Z
An application of genetic algorithms to chemotherapy treatment.
http://hdl.handle.net/10059/1259
An application of genetic algorithms to chemotherapy treatment.
Petrovski, Andrei
The present work investigates methods for optimising cancer chemotherapy within the bounds
of clinical acceptability and making this optimisation easily accessible to oncologists. Clinical
oncologists wish to be able to improve existing treatment regimens in a systematic, effective
and reliable way. In order to satisfy these requirements a novel approach to chemotherapy
optimisation has been developed, which utilises Genetic Algorithms in an intelligent search
process for good chemotherapy treatments.
The subsequent chapters address various issues related to this approach. Chapter 1
gives some biomedical background to the problem of cancer and its treatment. The complexity
of the cancer phenomenon, as well as the multi-variable and multi-constrained nature of
chemotherapy treatment, strongly support the use of mathematical modelling for predicting and
controlling the development of cancer. Some existing mathematical models, which describe
the proliferation process of cancerous cells and the effect of anti-cancer drugs on this process,
are presented in Chapter 2. Having mentioned the control of cancer development, the
relevance of optimisation and optimal control theory becomes evident for achieving the optimal
treatment outcome subject to the constraints of cancer chemotherapy. A survey of traditional
optimisation methods applicable to the problem under investigation is given in Chapter 3 with
the conclusion that the constraints imposed on cancer chemotherapy and general non-linearity
of the optimisation functionals associated with the objectives of cancer treatment often make
these methods of optimisation ineffective. Contrariwise, Genetic Algorithms (GAs), featuring the
methods of evolutionary search and optimisation, have recently demonstrated in many practical
situations an ability to quickly discover useful solutions to highly-constrained, irregular and
discontinuous problems that have been difficult to solve by traditional optimisation methods.
Chapter 4 presents the essence of Genetic Algorithms, as well as their salient features and
properties, and prepares the ground for the utilisation of Genetic Algorithms for optimising
cancer chemotherapy treatment.
The particulars of chemotherapy optimisation using Genetic Algorithms are given in Chapter 5
and Chapter 6, which present the original work of this thesis. In Chapter 5 the optimisation
problem of single-drug chemotherapy is formulated as a search task and solved by several
numerical methods. The results obtained from different optimisation methods are used to
assess the quality of the GA solution and the effectiveness of Genetic Algorithms as a whole.
Also, in Chapter 5 a new approach to tuning GA factors is developed, whereby the optimisation
performance of Genetic Algorithms can be significantly improved. This approach is based on
statistical inference about the significance of GA factors and on regression analysis of the GA
performance. Being less computationally intensive compared to the existing methods of GA
factor adjusting, the newly developed approach often gives better tuning results.
Chapter 6 deals with the optimisation of multi-drug chemotherapy, which is a more practical and
challenging problem. Its practicality can be explained by oncologists' preferences to administer
anti-cancer drugs in various combinations in order to better cope with the occurrence of drug
resistant cells. However, the imposition of strict toxicity constraints on combining various anti-cancer
drugs makes the optimisation problem of multi-drug chemotherapy very difficult
to solve, especially when complex treatment objectives are considered. Nevertheless, the
experimental results of Chapter 6 demonstrate that this problem is tractable to Genetic
Algorithms, which are capable of finding good chemotherapeutic regimens in different treatment
situations. On the basis of these results a decision has been made to encapsulate Genetic
Algorithms into an independent optimisation module and to embed this module into a more
general and user-oriented environment - the Oncology Workbench. The particulars of this
encapsulation and embedding are also given in Chapter 6.
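The GA search described in Chapters 4 to 6 can be sketched minimally as follows; the bit-string encoding and the toxicity-capped fitness are purely illustrative stand-ins for the thesis's treatment schedules and objectives:

```python
import random

def run_ga(fitness, n_genes, pop_size=40, gens=60, p_mut=0.05, seed=0):
    """Generic GA: tournament selection, one-point crossover, bit-flip
    mutation, with the two best chromosomes kept unchanged (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = sorted(pop, key=fitness, reverse=True)[:2]  # elitism
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n_genes)            # one-point crossover
            child = [g ^ (rng.random() < p_mut) for g in p1[:cut] + p2[cut:]]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Illustrative fitness: reward total dose (a crude tumour-kill proxy)
# but penalise schedules whose cumulative dose exceeds a toxicity cap.
def fitness(chromosome, cap=6):
    dose = sum(chromosome)
    return dose - 10 * max(0, dose - cap)

best = run_ga(fitness, n_genes=10)
```

The penalty term is one simple way of keeping the search inside the feasible region defined by toxicity constraints; the thesis's actual constraint handling and multi-drug encoding are richer than this.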
Finally, Chapter 7 concludes the present work by summarising the contributions made to the
knowledge of the subject treated and by outlining the directions for further investigations. The
main contributions are: (1) a novel application of the Genetic Algorithm technique in the field of
cancer chemotherapy optimisation, (2) the development of a statistical method for tuning the
values of GA factors, and (3) the development of a robust and versatile optimisation utility for a
clinically usable decision support system. The latter contribution of this thesis creates an
opportunity to widen the application domain of Genetic Algorithms within the field of drug
treatments and to allow more clinicians to benefit from utilising the GA optimisation.
1998-12-01T00:00:00Z
Introspective knowledge acquisition for case retrieval networks in textual case base reasoning.
http://hdl.handle.net/10059/1247
Introspective knowledge acquisition for case retrieval networks in textual case base reasoning.
Chakraborti, Sutanu
Textual Case Based Reasoning (TCBR) aims at effective reuse of information
contained in unstructured documents. The key advantage of TCBR over traditional
Information Retrieval systems is its ability to incorporate domain-specific knowledge
to facilitate case comparison beyond simple keyword matching. However, substantial
human intervention is needed to acquire and transform this knowledge into a form
suitable for a TCBR system. In this research, we present automated approaches that
exploit statistical properties of document collections to alleviate this knowledge
acquisition bottleneck. We focus on two important knowledge containers: relevance
knowledge, which shows relatedness of features to cases, and similarity knowledge,
which captures the relatedness of features to each other. The terminology is derived
from the Case Retrieval Network (CRN) retrieval architecture in TCBR, which is used
as the underlying formalism in this thesis, applied to text classification.
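In matrix form, CRN retrieval with the two knowledge containers can be sketched as follows (an illustrative rendering, with similarity and relevance knowledge given as dense matrices):

```python
import numpy as np

def crn_retrieve(query, sim, rel):
    """Case Retrieval Network in matrix form: activation starts at the
    query's features, spreads along feature-feature similarity arcs
    (similarity knowledge), then along feature-case relevance arcs
    (relevance knowledge); cases are ranked by accumulated activation."""
    feature_activation = sim @ query          # similarity propagation
    case_scores = rel.T @ feature_activation  # relevance propagation
    return np.argsort(-case_scores), case_scores

# Two features, two cases: the query activates feature 0 only.
sim = np.array([[1.0, 0.5],
                [0.5, 1.0]])  # feature-feature similarity
rel = np.array([[1.0, 0.0],
                [0.0, 1.0]])  # feature-case relevance
ranking, scores = crn_retrieve(np.array([1.0, 0.0]), sim, rel)
```

The two matrices are exactly the knowledge containers named above, which is why acquiring them automatically is the focus of the thesis.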
Concepts generated by Latent Semantic Indexing (LSI) are a useful resource for
relevance knowledge acquisition for CRNs. This thesis introduces a supervised LSI
technique called "sprinkling" that exploits class knowledge to bias LSI's concept
generation. An extension of this idea, called Adaptive Sprinkling (AS), has been proposed to
handle inter-class relationships in complex domains like hierarchical (e.g. Yahoo
directory) and ordinal (e.g. product ranking) classification tasks. Experimental
evaluation results show the superiority of CRNs created with sprinkling and AS, not
only over LSI on its own, but also over state-of-the-art classifiers like Support Vector
Machines (SVM).
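The core of sprinkling can be sketched as augmenting the term-document matrix with artificial class terms before the SVD step (a simplified rendering; the weighting and post-processing details are assumptions, not the thesis's exact procedure):

```python
import numpy as np

def sprinkle(td_matrix, labels, n_classes, weight=1.0):
    """Append one artificial 'class term' row per class to the
    term-document matrix, activated (with the given weight) only in the
    documents of that class. A subsequent SVD then draws same-class
    documents toward shared latent concepts; the sprinkled rows are
    discarded again after the decomposition."""
    n_terms, n_docs = td_matrix.shape
    augmented = np.zeros((n_terms + n_classes, n_docs))
    augmented[:n_terms] = td_matrix
    for doc, cls in enumerate(labels):
        augmented[n_terms + cls, doc] = weight
    return augmented

def lsi_rank_k(matrix, k):
    """Rank-k approximation via truncated SVD (the LSI step)."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u[:, :k] @ np.diag(s[:k]) @ vt[:k]

td = np.array([[1.0, 0.0],
               [0.0, 1.0]])          # 2 terms x 2 documents
aug = sprinkle(td, labels=[0, 1], n_classes=2)
approx = lsi_rank_k(aug, k=1)
```

Because the class rows co-occur perfectly with their documents, they bias which directions the SVD retains, which is the supervision mechanism sprinkling relies on.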
Current statistical approaches based on feature co-occurrences can be utilised to
mine similarity knowledge for CRNs. However, related words often do not co-occur in
the same document, though they co-occur with similar words. We introduce an
algorithm to efficiently mine such indirect associations, called higher order
associations. Empirical results show that CRNs created with the acquired similarity
knowledge outperform both LSI and SVM.
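One simple way to realise higher-order associations, sketched here as an assumption rather than the thesis's exact mining algorithm, is to take powers of the first-order co-occurrence matrix:

```python
import numpy as np

def cooccurrence(docs, vocab):
    """First-order co-occurrence: entry (i, j) counts how many documents
    contain both term i and term j."""
    incidence = np.array([[1.0 if term in doc else 0.0 for doc in docs]
                          for term in vocab])
    return incidence @ incidence.T

def higher_order(c, order=2):
    """Entry (i, j) of C^order counts length-`order` association paths
    i -> ... -> j, surfacing terms that never co-occur directly but
    share co-occurring neighbours."""
    return np.linalg.matrix_power(c, order)

# "car" and "motor" never share a document, but both co-occur with "engine".
docs = [{"car", "engine"}, {"engine", "motor"}]
vocab = ["car", "engine", "motor"]
first = cooccurrence(docs, vocab)
second = higher_order(first, order=2)
```

The second-order matrix assigns "car" and "motor" a nonzero association via their shared neighbour, exactly the indirect relatedness that direct co-occurrence misses.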
Incorporating acquired knowledge into the CRN transforms it into a densely
connected network. While improving retrieval effectiveness, this has the unintended
effect of slowing down retrieval. We propose a novel retrieval formalism called the
Fast Case Retrieval Network (FCRN) which eliminates redundant run-time
computations to improve retrieval speed. Experimental results show FCRN's ability to
scale up over high dimensional textual casebases.
Finally, we investigate novel ways of visualizing and estimating complexity of
textual casebases that can help explain performance differences across casebases.
Visualization provides a qualitative insight into the casebase, while complexity is a
quantitative measure that characterizes classification or retrieval hardness intrinsic to a
dataset. We study correlations of experimental results from the proposed approaches
against complexity measures over diverse casebases.
2007-08-01T00:00:00Z