This page contains several research ideas that students can pursue under my supervision. They can be adapted for a part-time BSc project or a full MSc dissertation. Each idea should result in at least one publication.
Per-Class-Per-Instance Cost-Aware Classifiers
Existing Machine Learning techniques are oblivious to the cost of choosing the wrong class for a particular instance. Instead, a cost matrix is used that assumes all instances incur the same static cost when misclassified as a given class. But for the purpose of finding a suitable transporter, misclassifying a small dog as a cat is less costly than misclassifying a large dog as a cat. The goal is to create an evolutionary approach to multi-class classification that considers costs per instance and per class.
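The core of the idea can be sketched as a fitness function in which the misclassification cost depends on both the instance and the predicted class. All names and data below are illustrative, not from any existing library:

```python
# Sketch of a per-instance, per-class cost-aware fitness function.
# `cost` maps each instance (by index) to a dict from predicted class
# to misclassification cost, so two instances of the same true class
# can be penalized differently.

def cost_aware_fitness(predictions, labels, cost):
    """Total cost of misclassifications; 0.0 means a perfect classifier."""
    total = 0.0
    for i, (pred, true) in enumerate(zip(predictions, labels)):
        if pred != true:
            total += cost[i][pred]  # cost depends on the instance AND the class
    return total

# Two dogs misclassified as "cat": the large dog is the more severe error.
labels = ["dog", "dog"]
costs = [{"cat": 1.0},    # small dog: mild error
         {"cat": 10.0}]   # large dog: severe error
print(cost_aware_fitness(["cat", "cat"], labels, costs))  # -> 11.0
```

An evolutionary algorithm would minimize this fitness directly, instead of minimizing a class-level cost matrix.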
Programming Languages and Parallel Programming
Using Dependent Types to Improve Genetic Programming
The Aeon programming language supports dependent types and allows programmers to define methods using an ellipsis (…) in place of a body. In those cases, the compiler automatically generates the best implementation. This is done using Genetic Programming, with type-checking used to validate whether candidate programs are valid.
Strongly Typed Genetic Programming has been shown to outperform standard GP. I believe that Dependently Typed Genetic Programming can perform even better, especially in Program Synthesis.
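To illustrate the starting point, here is a minimal Strongly Typed GP sketch: the primitive set records argument and return types, and tree generation only composes type-compatible nodes, so ill-typed candidates are never built. A dependently typed variant would additionally check refinements (e.g. "an int greater than zero") on each candidate. All names here are illustrative, not Aeon's actual API:

```python
import random

# Typed primitive set: (name, argument types, return type, implementation).
PRIMITIVES = [
    ("add", ("int", "int"), "int", lambda a, b: a + b),
    ("gt",  ("int", "int"), "bool", lambda a, b: a > b),
]
TERMINALS = {"int": [1, 2, 3], "bool": [True, False]}

def grow(target_type, depth, rng):
    """Grow a random tree whose root returns `target_type`."""
    options = [p for p in PRIMITIVES if p[2] == target_type]
    if depth == 0 or not options:
        return rng.choice(TERMINALS[target_type])
    _, arg_types, _, fn = rng.choice(options)
    return (fn, [grow(t, depth - 1, rng) for t in arg_types])

def evaluate(tree):
    if not isinstance(tree, tuple):
        return tree  # terminal value
    fn, args = tree
    return fn(*[evaluate(a) for a in args])

rng = random.Random(0)
tree = grow("bool", 2, rng)
print(type(evaluate(tree)))  # always bool: ill-typed trees are never generated
```

In the dependently typed setting, `grow` would consult the type checker on each candidate, pruning the search space further than plain type tags can.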
A Cooperative Time Cost Model Type System
Cost models are necessary for efficient parallelization of programs. The cost model of an AST expression can be obtained by composing the cost models of its children. For function invocations, however, the cost model of the callee must be inferred, much like types are propagated in bidirectional type systems. This work consists of designing a secondary type system for a programming language that expresses the complexity of functions. This information is then used to decide whether a function call should execute asynchronously in the context of automatic parallelization.
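The compositional structure can be sketched over a tiny AST: each function carries a cost annotation (its "cost type"), so a call site composes the callee's declared cost instead of re-analysing its body, and the parallelizer spawns a task only when the estimated cost exceeds a threshold. The AST shape, annotations, and threshold below are illustrative assumptions:

```python
# Declared cost per call, playing the role of the secondary type system.
COST_ANNOTATIONS = {"matmul": 1000, "inc": 1}

def cost(node):
    """Compose the cost of an expression from the costs of its children."""
    kind = node[0]
    if kind == "lit":
        return 0
    if kind == "binop":                 # ("binop", left, right)
        return 1 + cost(node[1]) + cost(node[2])
    if kind == "call":                  # ("call", name, arg, ...)
        return COST_ANNOTATIONS[node[1]] + sum(cost(a) for a in node[2:])
    raise ValueError(f"unknown node kind: {kind}")

def should_parallelize(node, threshold=100):
    """Only spawn an async task when the work amortizes the task overhead."""
    return cost(node) > threshold

expr = ("binop", ("call", "matmul", ("lit",)), ("call", "inc", ("lit",)))
print(cost(expr), should_parallelize(expr))  # -> 1002 True
```

The point of making this a type system, rather than an ad hoc analysis, is that annotations on function signatures can be checked and composed across module boundaries.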
An Automatic GPU-CPU Load Balancer for Julia
Julia is a high-performance, high-level programming language: think Python with the speed of C (thanks to its LLVM-based JIT compiler). Julia has support for GPU computing, but requires explicit identification of GPU operations. This idea intends to create a back-end-agnostic API that schedules work to the GPU or CPU based on the features of the computation. The approach should be similar to AeminiumGPU.
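The dispatch decision can be sketched as follows (in Python for illustration; the actual project would target Julia). The scheduler picks a back end from simple computation features, here just input size; `gpu_map` is a hypothetical stand-in for a real GPU back end:

```python
# Below this size, kernel launch and transfer overhead typically dominate;
# the threshold is an illustrative assumption, not a measured value.
GPU_THRESHOLD = 1_000_000

def cpu_map(fn, data):
    return [fn(x) for x in data]

def gpu_map(fn, data):
    # Placeholder: a real implementation would launch a GPU kernel
    # (e.g. via CUDA.jl in the Julia version of this project).
    return [fn(x) for x in data]

def smart_map(fn, data):
    """Dispatch to the GPU only when the workload amortizes its overhead."""
    backend = gpu_map if len(data) >= GPU_THRESHOLD else cpu_map
    return backend(fn, data)

print(smart_map(lambda x: x * 2, [1, 2, 3]))  # small input -> CPU path
```

A more refined scheduler, as in AeminiumGPU, would learn the decision boundary from features of both the operation and the data rather than a fixed size threshold.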
A Unified OS-level Task Manager
Nowadays most computers are built on multicore processors, which scale from high-end servers with 32 hardware threads down to pocket-size quad-core smartphones. Thus, programmers must write parallel programs to take advantage of this hardware.
If we look at production systems, several applications already support parallelism; examples include web servers (Apache, Nginx, ...) and databases (Oracle, Postgres, ...). These applications distribute their work across all hardware threads of the machine to improve performance. The problem is that each application treats every processor on the machine as an available target for its threads, regardless of the other applications running alongside it. For instance, on a quad-core machine running both Nginx and Postgres, each application takes 4 threads, when it would be best for each to occupy only 2, avoiding context switching between threads.
This idea consists of developing a kernel-level task scheduler and an API that informs each program of the parallel processors available to it.
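The API side can be sketched as follows: the OS-level manager partitions cores among registered parallel applications, and each application asks how many workers it may use instead of assuming it owns the whole machine. All names are hypothetical:

```python
# Sketch of the proposed OS-level task manager API.

class TaskManager:
    def __init__(self, total_cores):
        self.total_cores = total_cores
        self.apps = []

    def register(self, app_name):
        """An application declares that it will run parallel work."""
        self.apps.append(app_name)

    def available_cores(self, app_name):
        """An even split; a real scheduler would weigh load and priorities."""
        return max(1, self.total_cores // len(self.apps))

tm = TaskManager(total_cores=4)
tm.register("nginx")
tm.register("postgres")
print(tm.available_cores("nginx"))  # -> 2, instead of both apps taking all 4
```

In the real system this negotiation would live in the kernel, closer to existing mechanisms such as CPU affinity, but with the scheduler (not each application) deciding the partition.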
Evolutionary Turtles All The Way Down: an Automatic Configurable Parallel Genetic Algorithm Framework
Evolutionary Algorithms have several operators that can be combined in many different configurations, and finding the best configuration is hard. One way to find it is to use another Evolutionary Algorithm as a meta-heuristic.
Evaluating the performance of an evolutionary algorithm can take time, so the evaluation should be parallelized. However, there are also several possible configurations for that parallelization, and yet another Evolutionary Algorithm can be used to find the best one.
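The two-level structure can be sketched as follows. An outer loop searches over configurations (here, mutation rate and population size) of an inner mutation-only EA solving OneMax; for brevity the outer loop is simplified to random search, whereas the project would use an EA there too, and would also evolve the parallelization configuration. Everything below is illustrative:

```python
import random

def inner_ga(mut_rate, pop_size, rng, bits=20, gens=30):
    """Run a simple mutation-only EA on OneMax; return the best fitness."""
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sum, reverse=True)
        parents = pop[: max(2, pop_size // 2)]       # truncation selection
        pop = [[(1 - b) if rng.random() < mut_rate else b
                for b in rng.choice(parents)] for _ in range(pop_size)]
    return max(sum(ind) for ind in pop)

def meta_search(rng, trials=10):
    """Outer loop: search over configurations, scored by running the inner EA."""
    best = None
    for _ in range(trials):
        cfg = (rng.uniform(0.01, 0.3), rng.choice([10, 20, 40]))
        score = inner_ga(*cfg, rng)
        if best is None or score > best[0]:
            best = (score, cfg)
    return best

score, cfg = meta_search(random.Random(1))
print(score, cfg)
```

Because each outer-loop evaluation is a full inner run, the evaluations are independent and embarrassingly parallel, which is exactly where the configurable parallelization layer of the framework would plug in.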
This is a joint project with Nuno Lourenço.