Theses and Dissertations (Decision Sciences)
http://hdl.handle.net/10500/2789
2016-10-26T23:12:56Z
Metaheuristic approaches to realistic portfolio optimisation
http://hdl.handle.net/10500/16224
Metaheuristic approaches to realistic portfolio optimisation
Busetti, Franco Raoul
In this thesis we investigate the application of two heuristic methods, genetic
algorithms and tabu/scatter search, to the optimisation of realistic portfolios. The
model is based on the classical mean-variance approach, but enhanced with floor and
ceiling constraints, cardinality constraints and nonlinear transaction costs which
include a substantial illiquidity premium, and is then applied to a large 100-stock
portfolio.
It is shown that genetic algorithms can optimise such portfolios effectively and within
reasonable times, without extensive tailoring or fine-tuning of the algorithm. This
approach is also flexible in not relying on any assumed or restrictive properties of the
model and can easily cope with extensive modifications such as the addition of
complex new constraints, discontinuous variables and changes in the objective
function.
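The approach described above can be sketched in miniature. The toy data, parameter values and repair scheme below are illustrative assumptions, not the thesis's actual model or tuning: a small genetic algorithm maximises a mean-variance utility subject to floor, ceiling and cardinality constraints.

```python
import random

random.seed(0)

N = 10                      # universe size (the thesis uses 100 stocks; 10 keeps the toy fast)
K = 4                       # cardinality: exactly K assets held
FLOOR, CEIL = 0.05, 0.40    # floor/ceiling bounds on each held weight
LAMBDA = 3.0                # risk-aversion weight (assumed value)

# Toy inputs (assumptions, not thesis data): random forecast returns, simple covariance.
mu = [random.uniform(0.02, 0.15) for _ in range(N)]
cov = [[0.04 if i == j else 0.005 for j in range(N)] for i in range(N)]

def repair(w):
    """Project a candidate onto the constraint set: keep the K largest weights,
    clip to [FLOOR, CEIL], renormalise to sum to 1.  (Renormalising can nudge
    weights slightly outside the band; a real implementation is more careful.)"""
    idx = sorted(range(N), key=lambda i: -w[i])[:K]
    v = [0.0] * N
    for i in idx:
        v[i] = min(max(w[i], FLOOR), CEIL)
    s = sum(v)
    for i in idx:
        v[i] /= s
    return v

def fitness(w):
    """Mean-variance utility: expected return minus a risk penalty (maximised)."""
    ret = sum(wi * mi for wi, mi in zip(w, mu))
    var = sum(w[i] * cov[i][j] * w[j] for i in range(N) for j in range(N))
    return ret - LAMBDA * var

def ga(pop_size=30, gens=120):
    pop = [repair([random.random() for _ in range(N)]) for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            # tournament selection of two parents
            a = max(random.sample(pop, 3), key=fitness)
            b = max(random.sample(pop, 3), key=fitness)
            child = [random.choice(p) for p in zip(a, b)]   # uniform crossover
            if random.random() < 0.3:                       # mutation
                child[random.randrange(N)] += random.gauss(0, 0.1)
            nxt.append(repair(child))
        pop = nxt
    return max(pop, key=fitness)

best = ga()
```

The repair step is what makes the cardinality constraint painless here: every candidate is forced back into feasibility, so the GA never has to reason about the combinatorial structure directly.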
The results indicate that both floor and ceiling constraints have a substantial
negative impact on portfolio performance and their necessity should be examined
critically relative to their associated administration and monitoring costs.
Another insight is that nonlinear transaction costs which are comparable in magnitude
to forecast returns will tend to diversify portfolios; the effect of these costs on
portfolio risk is, however, ambiguous, depending on the degree of diversification
required for cost reduction. Generally, the number of assets in a portfolio increases
as a result of constraints, costs and their combination.
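The diversifying pull of size-dependent costs can be seen with a toy cost curve. The quadratic market-impact form and all numbers below are assumptions for illustration, not the thesis's cost model:

```python
def transaction_cost(trade_value, daily_volume, base=0.001, impact=0.005):
    """Hypothetical nonlinear cost: a flat proportional part plus an
    illiquidity premium that grows with the trade's share of daily volume."""
    participation = trade_value / daily_volume
    return trade_value * (base + impact * participation ** 2)

# Splitting one large trade into two smaller ones lowers total cost,
# which is why such costs push the optimiser toward more names:
one_big = transaction_cost(2_000_000, 10_000_000)
two_small = 2 * transaction_cost(1_000_000, 10_000_000)
```

Because the premium is convex in trade size, total cost falls when the same exposure is spread across more assets, which is the diversification effect noted above.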
The implementation of cardinality constraints is essential for finding the
best-performing portfolio. The ability of the heuristic method to deal with cardinality
constraints is one of its most powerful features.
2000-06-01T00:00:00Z
'n Masjienleerbenadering tot woordafbreking in Afrikaans
http://hdl.handle.net/10500/13326
'n Masjienleerbenadering tot woordafbreking in Afrikaans
Fick, Machteld
The aim of this study was to determine the level of success achievable with a purely
pattern-based approach to hyphenation in Afrikaans. The machine learning techniques artificial neural
networks, decision trees and the TEX algorithm were investigated since they can be trained
with patterns of letters from word lists for syllabification and decompounding.
A lexicon of Afrikaans words was extracted from a corpus of electronic text. To obtain lists
for syllabification and decompounding, words in the lexicon were respectively syllabified and
compound words were decomposed. From each list of ±183 000 words, ±10 000 words were
reserved as testing data and the rest was used as training data.
A recursive algorithm for decompounding was developed. In this algorithm all words
matching entries in a reference list (the lexicon) are extracted by string matching from
the beginning and end of words. Splitting points are then determined based on the lengths
of the reassembled words. The algorithm was extended to address shortcomings of this basic procedure.
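The matching idea can be sketched as follows. This is a greedy, left-to-right simplification; the thesis's algorithm also matches from the end of words and selects splitting points by part length:

```python
def decompound(word, lexicon, min_len=3):
    """Split `word` into lexicon words, or return None if no split exists.
    Prefers the longest known prefix, then recurses on the remainder."""
    if word in lexicon:
        return [word]
    for i in range(len(word) - min_len, min_len - 1, -1):
        if word[:i] in lexicon:
            rest = decompound(word[i:], lexicon, min_len)
            if rest:
                return [word[:i]] + rest
    return None

# Tiny illustrative lexicon (real runs use the ±183 000-word lexicon):
lex = {"water", "val", "waterval", "berg", "pad"}
parts = decompound("watervalpad", lex)
```

A greedy-longest-prefix rule alone can pick wrong splits for ambiguous compounds, which is exactly the kind of shortcoming the extended algorithm addresses.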
Artificial neural networks and decision trees were trained and variations of both were examined
to find optimal syllabification and decompounding models. Patterns for the TEX algorithm
were generated by using the program OPatGen. Testing showed that the TEX algorithm
performed best on both syllabification and decompounding tasks with 99,56% and 99,12% accuracy,
respectively. It can therefore be used for hyphenation in Afrikaans with little risk of
hyphenation errors in printed text. The performance of the artificial neural network was lower,
but still acceptable, with 98,82% and 98,42% accuracy for syllabification and decompounding,
respectively. The decision tree, with accuracy of 97,91% on syllabification and 90,71% on
decompounding, was found to be too risky to use for either task.
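The pattern mechanism behind the TEX algorithm (Liang's method) can be sketched as follows. The toy patterns are invented for illustration and are not OPatGen output for Afrikaans:

```python
def hyphenation_points(word, patterns):
    """Liang/TEX-style pattern application.  Each pattern is a letter string
    mapped to len(letters)+1 digits sitting in the gaps around its letters;
    wherever a pattern occurs in '.word.', each gap keeps the highest digit
    seen, and an odd final digit marks a legal hyphen point."""
    w = "." + word + "."
    scores = [0] * (len(w) + 1)
    for letters, values in patterns.items():
        start = 0
        while (pos := w.find(letters, start)) != -1:
            for j, v in enumerate(values):
                scores[pos + j] = max(scores[pos + j], v)
            start = pos + 1
    # hyphen allowed between word[i-1] and word[i] when the gap score is odd
    return [i for i in range(1, len(word)) if scores[i + 1] % 2 == 1]

# "a1t" allows a break between 'a' and 't' ("wa-ter");
# a higher even digit ("wa2te") would inhibit it again.
breaks = hyphenation_points("water", {"at": [0, 1, 0]})
```

The interplay of promoting (odd) and inhibiting (even) digits is what lets a compact pattern set encode both rules and their exceptions.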
A combined algorithm was developed in which words are first decompounded using the TEX
algorithm, then syllabified with both the TEX algorithm and the neural network, and the
results combined. This algorithm reduced the number of errors made by the TEX algorithm
by 1,3% but missed more hyphens. Testing the algorithm on Afrikaans publications showed
the risk of hyphenation errors to be ±0,02% for text assumed to have an average of ten
words per line.
Text in Afrikaans
2013-06-01T00:00:00Z
Satisficing solutions for multiobjective stochastic linear programming problems
http://hdl.handle.net/10500/5703
Satisficing solutions for multiobjective stochastic linear programming problems
Adeyefa, Segun Adeyemi
Multiobjective Stochastic Linear Programming is a relevant topic: many real-life
problems, ranging from portfolio selection to water resource management, may be
cast into this framework.
There are severe limitations on objectivity in this field, due to the simultaneous
presence of randomness and conflicting goals. In such a turbulent environment, the
mainstay of rational choice does not hold and it is virtually impossible to provide
a truly scientific foundation for an optimal decision.
In this thesis, we resort to the bounded rationality and chance-constrained principles to
define satisficing solutions for Multiobjective Stochastic Linear Programming problems.
These solutions are then characterized for the cases of normal, exponential, chi-squared
and gamma distributions.
Ways of singling out such solutions are discussed, and numerical examples are provided
for the sake of illustration.
An extension to the case of fuzzy random coefficients is also carried out.
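For the normal case, the chance-constrained reduction to a deterministic equivalent can be sketched as follows; the numbers are illustrative assumptions, and the thesis also treats the exponential, chi-squared and gamma cases:

```python
from statistics import NormalDist

def deterministic_rhs(mu_b, sigma_b, alpha):
    """Deterministic equivalent of the chance constraint
    P(a.x <= b) >= alpha with b ~ Normal(mu_b, sigma_b):
    it reduces to the linear constraint a.x <= mu_b + sigma_b * z,
    where z = Phi^{-1}(1 - alpha)."""
    z = NormalDist().inv_cdf(1 - alpha)
    return mu_b + sigma_b * z

# Requiring the constraint to hold with 95% probability tightens the
# effective resource limit below its mean of 100:
rhs = deterministic_rhs(100.0, 10.0, 0.95)
```

This is the standard mechanism by which the random constraint is replaced by an ordinary linear one, so the satisficing problem stays a linear program.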
2011-06-01T00:00:00Z
Local times of Brownian motion
http://hdl.handle.net/10500/3781
Local times of Brownian motion
Mukeru, Safari
After a review of the notions of Hausdorff and Fourier dimensions from fractal geometry
and Fourier analysis and the properties of local times of Brownian motion, we study the
Fourier structure of Brownian level sets. We show that if $\delta_a(X)$ is the Dirac measure
of one-dimensional Brownian motion $X$ at the level $a$, that is, the measure defined by
the Brownian local time $L_a$ at level $a$, and $\mu$ is its restriction to the random interval
$[0, L_a^{-1}(1)]$, then the Fourier transform of $\mu$ is such that, with positive probability, for all
$0 \le \beta < 1/2$, the function $u \mapsto |u|^{\beta}\,|\hat{\mu}(u)|^2$, $u \in \mathbb{R}$, is bounded. This growth rate is the
best possible. Consequently, each Brownian level set, reduced to a compact interval, is,
with positive probability, a Salem set of dimension $1/2$. We also show that the zero set
of $X$ reduced to the interval $[0, L_0^{-1}(1)]$ is, almost surely, a Salem set. Finally, we show
that the restriction $\mu$ of $\delta_0(X)$ to the deterministic interval $[0, 1]$ is such that its Fourier
transform satisfies $\mathbb{E}\left(|\hat{\mu}(u)|^2\right) \le C\,|u|^{-1/2}$ for all $u \neq 0$ and some constant $C > 0$.
Key words: Hausdorff dimension, Fourier dimension, Salem sets, Brownian motion,
local times, level sets, Fourier transform, inverse local times.
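As a numerical aside (an illustration, not the method used in the thesis), the dimension-1/2 statement for the zero set can be probed by box counting on a simulated path; the step count and scales below are arbitrary choices:

```python
import math
import random

random.seed(1)

# Simulate a Brownian path on [0, 1] by summing scaled Gaussian increments.
n = 1 << 16
b, path = 0.0, [0.0]
for _ in range(n):
    b += random.gauss(0.0, 1.0) * math.sqrt(1.0 / n)
    path.append(b)

def boxes(k):
    """Count dyadic intervals of length 2**-k whose path segment touches 0."""
    size = n >> k
    count = 0
    for j in range(1 << k):
        seg = path[j * size : (j + 1) * size + 1]
        if min(seg) <= 0.0 <= max(seg):
            count += 1
    return count

# If N(eps) ~ eps^(-1/2), the slope of log N against log(1/eps) between two
# scales estimates the box-counting dimension; for a typical path it should
# hover around 1/2.
k1, k2 = 6, 10
dim = math.log(boxes(k2) / boxes(k1)) / math.log(2 ** (k2 - k1))
```

Box counting only gauges the Hausdorff dimension; the Salem-set property proved in the thesis is strictly stronger, since it requires the Fourier dimension to match.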
2010-09-01T00:00:00Z