Computing (http://hdl.handle.net/10059/13), feed generated 2016-02-03.

Combining search strategies for distributed constraint satisfaction.
Magaji, Amina Sambo-Muhammad
http://hdl.handle.net/10059/1374 (2015-08-01)
Many real-life problems, such as distributed meeting scheduling, mobile frequency allocation and resource allocation, can be solved using multi-agent paradigms. Distributed constraint satisfaction problems (DisCSPs) are a framework for describing such problems in terms of related subproblems, called complex local problems (CLPs), which are dispersed over a number of locations, each with its own constraints on the values its variables can take. An agent knows the variables in its CLP, plus the variables (and their current values) directly related to one of its own variables and the constraints relating them; it knows little about the rest of the problem. Thus, each CLP is solved by an agent which cooperates with other agents to solve the overall problem.
Algorithms for solving DisCSPs can be classified as either systematic or local search, with the former being complete and the latter incomplete. These algorithms generally assume that each agent has only one variable, since a DisCSP with CLPs can be handled by introducing "virtual" agents. However, in large DisCSPs where it is appropriate to trade completeness off against timeliness, systematic search algorithms can be expensive compared to local search algorithms, which generally converge to a solution more quickly (when a solution is found at all). A major drawback of local search algorithms is getting stuck at local optima, and significant research has focused on heuristics for escaping or avoiding local optima.
This thesis makes significant contributions to local search algorithms for DisCSPs. Firstly, we present a novel combination of heuristics in DynAPP (Dynamic Agent Prioritisation with Penalties), a distributed synchronous local search algorithm for solving DisCSPs with one variable per agent. DynAPP combines penalties on values with dynamic agent prioritisation to escape local optima. Secondly, we develop a divide and conquer approach that handles DisCSPs with CLPs by exploiting the structure of the problem. The approach prioritises finding variable instantiations which satisfy the constraints between agents, which are often more expensive to satisfy than the constraints within an agent. It also exploits concurrency and combines the following search strategies: (i) systematic and local search; (ii) centralised and distributed search; and (iii) a modified compilation strategy. We implement the divide and conquer approach in Multi-DCA (Divide and Conquer Algorithm for Agents with CLPs).
DynAPP and Multi-DCA were evaluated on several benchmark problems and compared to the leading algorithms for DisCSPs and DisCSPs with CLPs respectively. The results show that, in the region of difficult problems, combining search heuristics and exploiting problem structure in distributed constraint satisfaction yield significant benefits over existing competing methods, generally requiring less computation time and lower communication costs.
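As a rough illustration of the penalty idea described in this abstract, the sketch below runs a centralised, single-process simulation of local search over a binary CSP, penalising a value whenever the search cannot improve on it. This is an invented toy (function names and data layout are assumptions), not the DynAPP algorithm itself, which is distributed and also reprioritises agents.

```python
import random

def penalty_local_search(variables, domains, constraints, max_steps=2000, seed=1):
    """Toy penalty-driven local search for a binary CSP.
    constraints: list of (x, y, pred) where pred(vx, vy) -> bool."""
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    penalty = {(v, val): 0 for v in variables for val in domains[v]}

    def violated(a):
        return [(x, y, p) for (x, y, p) in constraints if not p(a[x], a[y])]

    for _ in range(max_steps):
        conflicts = violated(assign)
        if not conflicts:
            return assign                    # all constraints satisfied
        x, y, _ = rng.choice(conflicts)      # pick a conflicted constraint
        v = rng.choice([x, y])               # ...and one of its variables

        def cost(val):
            trial = dict(assign, **{v: val})
            return len(violated(trial)) + penalty[(v, val)]

        best = min(domains[v], key=cost)
        if cost(best) >= cost(assign[v]):
            penalty[(v, assign[v])] += 1     # stuck: penalise the current value
        assign[v] = best
    return None
```

The penalties permanently raise the cost of values at which the search has stalled, so repeated visits to the same local optimum become progressively less attractive.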
Autonomic behavioural framework for structural parallelism over heterogeneous multi-core systems.
Goli, Mehdi
http://hdl.handle.net/10059/1373 (2015-05-01)
With the continuous advancement in hardware technologies, significant research has been devoted to designing and developing high-level parallel programming models that allow programmers to exploit the latest developments in heterogeneous multi-core/many-core architectures.
Structured programming paradigms propose a viable solution for efficiently programming modern heterogeneous multi-core architectures equipped with one or more programmable Graphics Processing Units (GPUs). Applying structured programming paradigms, it is possible to subdivide a system into building blocks (modules, skids or components) that can be independently created and then used in different systems to derive multiple functionalities.
Exploiting such systematic divisions, it is possible to address extra-functional features such as application performance, portability and resource utilisation from the component level in heterogeneous multi-core architectures. While the computing function of a building block can vary for different applications, the behaviour (semantics) of the block remains intact. Therefore, by understanding the behaviour of building blocks and their structural compositions in parallel patterns, the process of constructing and coordinating a structured application can be automated.
In this thesis we have proposed the Structural Composition and Interaction Protocol (SKIP) as a systematic methodology for exploiting the structured programming paradigm (the building-block approach in this case) to construct a structured application and to extract/inject information from/to it. Using the SKIP methodology, we have designed and developed the Performance Enhancement Infrastructure (PEI), a SKIP-compliant autonomic behavioural framework that automatically coordinates structured parallel applications based on the extracted extra-functional properties related to the parallel computation patterns.
We have used 15 different PEI-based applications (ranging from large-scale applications with heavy input workloads that take hours to execute, to small-scale applications that take seconds) to evaluate PEI in terms of overhead and performance improvement. The experiments were carried out on three different heterogeneous (CPU/GPU) multi-core architectures, including one cluster machine with four symmetric nodes (one GPU per node) and two single machines (one GPU per machine). Our results demonstrate that, with less than 3% overhead, we can achieve up to one order of magnitude speed-up when using PEI to enhance application performance.
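The separation between fixed coordination behaviour and a varying computing function can be illustrated with a task-farm "skeleton". This sketch assumes nothing about PEI's actual API; `farm` and `worker` are invented names, and a thread pool stands in for the heterogeneous CPU/GPU back-ends the thesis targets.

```python
from concurrent.futures import ThreadPoolExecutor

def farm(worker, tasks, n_workers=4):
    """Task-farm building block: the coordination logic here is fixed,
    while the computing function `worker` varies per application."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # pool.map preserves task order, so results line up with inputs
        return list(pool.map(worker, tasks))
```

Because the coordination (the farm) is identical across applications, an autonomic framework can reason about and tune it, e.g. choosing `n_workers` or a device mapping, without inspecting the worker's internals.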
Structural coherence of problem and algorithm: an analysis for EDAs on all 2-bit and 3-bit problems.
Brownlee, Alexander Edward Ian; McCall, John; Christie, Lee A.
http://hdl.handle.net/10059/1362 (2015-05-01)
Metaheuristics assume some kind of coherence between decision and objective spaces. Estimation of Distribution Algorithms (EDAs) approach this by constructing an explicit probabilistic model of high-fitness solutions, the structure of which is intended to reflect the structure of the problem. In this context, "structure" means the dependencies or interactions between problem variables in a probabilistic graphical model. There are many approaches to discovering these dependencies, and existing work has already shown that these approaches often discover "unnecessary" elements of structure, that is, elements which are not needed to correctly rank solutions. This work performs an exhaustive analysis of all 2-bit and 3-bit problems, grouped into classes based on monotonic invariance. It is shown in [1] that each class has a minimal Walsh structure that can be used to solve the problem. We compare the structure discovered by different structure learning approaches to the minimal Walsh structure for each class, with summaries of which interactions are (in)correctly identified. Our analysis reveals a large number of symmetries that may be used to simplify problem solving. We show that negative selection can result in improved coherence between discovered and necessary structure, and conclude with some directions for a general programme of study building on this work. Conference held in Sendai, Japan, 25-28 May 2015.
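For tiny problems, the Walsh structure referred to in this abstract can be computed exhaustively. In this illustrative sketch (not the paper's code), each Walsh coefficient is the average of f(x) signed by the parity of the bits in a variable subset; a nonzero coefficient on a subset of two or more variables indicates an interaction between those variables.

```python
from itertools import product

def walsh_coefficients(f, n):
    """Exhaustive Walsh transform of f: {0,1}^n -> R.
    Returns a dict mapping each variable-index subset (as a tuple)
    to its coefficient w_S = 2^-n * sum_x f(x) * (-1)^(parity of x on S)."""
    points = list(product([0, 1], repeat=n))
    coeffs = {}
    for s in product([0, 1], repeat=n):        # s is a characteristic vector
        total = 0.0
        for x in points:
            sign = (-1) ** sum(xi for xi, si in zip(x, s) if si)
            total += f(x) * sign
        key = tuple(i for i, si in enumerate(s) if si)
        coeffs[key] = total / len(points)
    return coeffs
```

For example, a separable function like x0 + x1 has a zero coefficient on the pair {0, 1}, whereas x0 XOR x1 has a nonzero pairwise coefficient, exposing the interaction.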
A scalable expressive ensemble learning using Random Prism: a MapReduce approach.
Stahl, Frederic; May, David; Mills, Hugo; Bramer, Max; Gaber, Mohamed Medhat
http://hdl.handle.net/10059/1360 (2015-03-01)
The induction of classification rules from previously unseen examples is one of the most important data mining tasks in science as well as commercial applications. In order to reduce the influence of noise in the data, ensemble learners are often applied. However, most ensemble learners are based on decision tree classifiers, which are affected by noise. The Random Prism classifier has recently been proposed as an alternative to the popular Random Forests classifier, which is based on decision trees. Random Prism is based on the Prism family of algorithms, which is more robust to noise. However, like most ensemble classification approaches, Random Prism does not scale well on large training data. This paper presents a thorough discussion of Random Prism and a recently proposed parallel version of it called Parallel Random Prism, which is based on the MapReduce programming paradigm. The paper provides, for the first time, a novel theoretical analysis of the proposed technique and an in-depth experimental study showing that Parallel Random Prism scales well on a large number of training examples, a large number of data features and a large number of processors. The expressiveness of the decision rules that our technique produces makes it a natural choice for Big Data applications, where informed decision making increases the user's trust in the system.
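The MapReduce decomposition of an ensemble can be sketched as follows: the map phase trains base classifiers on independent bootstrap samples, and the reduce phase combines their votes. A deliberately trivial majority-class learner stands in for a real Prism rule inducer here; all names are invented for the sketch, not Parallel Random Prism's API.

```python
import random
from collections import Counter

def train_majority_rule(sample):
    """Toy 'learner': always predicts the majority class of its bootstrap
    sample. A stand-in for a Prism-style rule inducer, not reproduced here."""
    majority = Counter(label for _, label in sample).most_common(1)[0][0]
    return lambda x: majority

def map_phase(data, k, seed=0):
    """Map: each of k mappers trains one learner on a bootstrap sample."""
    rng = random.Random(seed)
    learners = []
    for _ in range(k):
        sample = [rng.choice(data) for _ in data]   # sample with replacement
        learners.append(train_majority_rule(sample))
    return learners

def reduce_phase(learners, x):
    """Reduce: combine the base classifiers' predictions by majority vote."""
    votes = Counter(h(x) for h in learners)
    return votes.most_common(1)[0][0]
```

Because the map tasks are independent, they parallelise naturally across MapReduce workers; only the lightweight vote aggregation is serialised in the reducer.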