Compilers Research Questions

Below are the questions addressed by the Compiler Research Panel. Please add your comments (with your name) after each question.

Programming Language and HW Architecture Features

If rewriting software is part of the path to exascale, what requirements and opportunities flow down to the programming-language level in order to reach performance goals while also increasing portability to future hardware architectures? Can the benefits of language features tuned for new automatic mapping techniques bring such languages to critical-mass adoption?

HW Architecture Compiler Techniques

What compiler techniques (new, old) are available to address the new requirements of the hardware architectures above? Where are there gaps?

Exascale Algorithms

What compiler techniques (new, old) are available to address the changing landscape of exascale algorithms? Where are there gaps?

Runtime and HW Flowdowns

What requirements and opportunities flow to hardware and runtimes based on new compiler technologies? For example, if automatic communication-generation optimizations are available, does this imply that the hardware or runtime should support richer communication primitives?
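
As one concrete reading of "richer communication primitives", the hedged C++ sketch below shows the kind of one-sided put (standard MPI-3 RMA calls) that compiler-generated communication might lower to, in contrast to matched two-sided sends and receives. It is an illustration of the flow-down question only, not a recommendation from the panel.

    // One possible target for compiler-generated communication: a one-sided
    // put into a remote memory window, bracketed by fences, instead of a
    // matched send/receive pair. Standard MPI-3 RMA; illustrative only.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double buf = 0.0;  // each rank exposes one double for remote access
        MPI_Win win;
        MPI_Win_create(&buf, sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 0 && size > 1) {
            double value = 42.0;
            // Write directly into rank 1's window; no matching receive needed.
            MPI_Put(&value, 1, MPI_DOUBLE, /*target_rank=*/1,
                    /*target_disp=*/0, 1, MPI_DOUBLE, win);
        }
        MPI_Win_fence(0, win);

        if (rank == 1) std::printf("rank 1 received %.1f\n", buf);
        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }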

Compiler vs. User-Driven Tools vs. Explicit Code

What high-level manipulation tools should live in the compiler, versus be driven by the user as separate tools, versus be expressed as hand-programmed explicit code, and how does this choice relate to portability?

Data, Task, and Code Location Support

What kinds of compiler support could there be to deal with data and code location when tasks are relocatable?

Intrinsics vs Annotations vs Autogeneration

What is the right mix between intrinsics, annotations, and autogeneration for these new architectures/runtimes?
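
One way to make this trade-off concrete is the minimal C++ sketch below, which expresses the same vector add three ways: hand-written AVX intrinsics, an OpenMP simd annotation, and a plain loop left entirely to the compiler's autovectorizer. The function names are invented for illustration, and the sketch assumes an AVX-capable target with OpenMP SIMD support.

    // The same vector add expressed via intrinsics, an annotation, and plain
    // code left to compiler autogeneration. Illustrative sketch only.
    #include <immintrin.h>
    #include <cstddef>
    #include <cstdio>

    // 1. Hand-written AVX intrinsics: maximal control, minimal portability.
    void add_intrinsics(const float* a, const float* b, float* c, std::size_t n) {
        std::size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(a + i);
            __m256 vb = _mm256_loadu_ps(b + i);
            _mm256_storeu_ps(c + i, _mm256_add_ps(va, vb));
        }
        for (; i < n; ++i) c[i] = a[i] + b[i];  // scalar remainder
    }

    // 2. Annotation: the programmer asserts vectorizability; the compiler maps it.
    void add_annotated(const float* a, const float* b, float* c, std::size_t n) {
        #pragma omp simd
        for (std::size_t i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }

    // 3. Plain code: the mapping is left entirely to compiler autogeneration.
    void add_plain(const float* a, const float* b, float* c, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) c[i] = a[i] + b[i];
    }

    int main() {
        float a[16], b[16], c[16];
        for (int i = 0; i < 16; ++i) { a[i] = float(i); b[i] = 2.0f * i; }
        add_intrinsics(a, b, c, 16);
        std::printf("c[15] = %.1f\n", c[15]);  // expect 45.0
        return 0;
    }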

Dependence Semantics

What new dependence semantics do you think are important, and should the compiler take care of them or delegate them to the runtime?
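
As one existing reference point (not a proposal from the panel), the sketch below uses OpenMP 4.x task dependence clauses: the programmer declares in/out dependences, the compiler lowers the clauses to runtime calls, and the runtime builds and schedules the resulting task graph. The open question is what richer semantics than this might look like and whether they belong in the compiler or the runtime.

    // Declared task dependences: the compiler lowers the depend clauses to
    // runtime calls, and the runtime schedules the task graph. Minimal sketch.
    #include <cstdio>

    int main() {
        int x = 0, y = 0;
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task depend(out: x)                // produces x
            x = 40;

            #pragma omp task depend(in: x) depend(out: y)  // consumes x, produces y
            y = x + 2;

            #pragma omp task depend(in: y)                 // runs only once y is ready
            std::printf("y = %d\n", y);
        }
        return 0;
    }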

Optimization Parameter vs. Code Variant Autotuning

Autotuning typically derives a search space from a collection of different values for an optimization parameter (such as parallelism granularity or tile size), or from code variants that represent different implementations of the same computation. How can a system unify autotuning of code variants and optimization parameters? More generally, how might we unify the various search and learning algorithms into a common framework?
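
A minimal sketch of one possible unification, assuming nothing beyond standard C++: each search point pairs a code variant with a tile-size value, so variants and parameters live in a single space that one search strategy can explore. The matrix-multiply variants and candidate tile sizes are invented for illustration, and the exhaustive timing loop stands in for whatever search or learning algorithm a real framework would use.

    // Unified search space: each point is a (code variant, tile size) pair.
    // Exhaustive timing stands in for a real search/learning strategy.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    constexpr int N = 512;
    static float A[N][N], B[N][N], C[N][N];

    // Variant 1: straightforward triple loop (tile parameter unused).
    void mm_naive(int /*tile*/) {
        for (int i = 0; i < N; ++i)
            for (int j = 0; j < N; ++j)
                for (int k = 0; k < N; ++k)
                    C[i][j] += A[i][k] * B[k][j];
    }

    // Variant 2: loop-tiled implementation parameterized by tile size.
    void mm_tiled(int tile) {
        for (int ii = 0; ii < N; ii += tile)
            for (int kk = 0; kk < N; kk += tile)
                for (int jj = 0; jj < N; jj += tile)
                    for (int i = ii; i < ii + tile && i < N; ++i)
                        for (int k = kk; k < kk + tile && k < N; ++k)
                            for (int j = jj; j < jj + tile && j < N; ++j)
                                C[i][j] += A[i][k] * B[k][j];
    }

    int main() {
        struct Point { const char* variant; void (*run)(int); int tile; };
        std::vector<Point> space = { {"naive", mm_naive, 0} };
        const int tiles[] = {16, 32, 64};
        for (int t : tiles) space.push_back({"tiled", mm_tiled, t});

        for (const Point& p : space) {  // exhaustive walk over the unified space
            auto t0 = std::chrono::steady_clock::now();
            p.run(p.tile);
            auto t1 = std::chrono::steady_clock::now();
            std::printf("%s tile=%d: %.1f ms\n", p.variant, p.tile,
                        std::chrono::duration<double, std::milli>(t1 - t0).count());
        }
        return 0;
    }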

Online vs. Offline Autotuning

Much work on autotuning focuses on offline search, which can be quite time-intensive and unsuitable for production runs. But exascale architectures are likely to have very dynamic behavior that must be considered in making tuning choices. Further, properties of the input data, not known until run time, may influence tuning choices. What are the roles of offline and online autotuning phases in an exascale regime?
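
A minimal sketch of an online phase, with an invented placeholder kernel and candidate list standing in for a real application: the first few production time steps each measure one candidate configuration, the fastest is then committed for the remaining steps, and an offline phase could seed or prune the candidate list before the run.

    // Online tuning sketch: early production steps explore the candidates,
    // then the best configuration is used for the rest of the run.
    #include <chrono>
    #include <cstdio>
    #include <vector>

    // Placeholder for one application time step under a given configuration.
    double run_step(int tile) {
        double s = 0.0;
        for (long i = 0; i < 4000000L; ++i) s += (i % tile) * 1e-9;
        return s;
    }

    int main() {
        std::vector<int> candidates = {8, 16, 32, 64};  // could be seeded offline
        int best = candidates[0];
        double best_ms = 1e30, sink = 0.0;

        for (int step = 0; step < 100; ++step) {
            // Exploration: one candidate per early step; exploitation afterwards.
            bool exploring = step < static_cast<int>(candidates.size());
            int tile = exploring ? candidates[step] : best;
            auto t0 = std::chrono::steady_clock::now();
            sink += run_step(tile);
            auto t1 = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            if (exploring && ms < best_ms) { best_ms = ms; best = tile; }
        }
        std::printf("selected tile = %d (%.2f ms/step, checksum %.3f)\n",
                    best, best_ms, sink);
        return 0;
    }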

Expert Users and Autotuning

What is the appropriate role for expert users in building autotuning systems?

Autotuning for Multiple Objectives

If performance is just one optimization criterion, how might an autotuning system support tuning for multiple objectives?
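
One hedged sketch of what multiple-objective support could look like: instead of a single best configuration, keep the Pareto-optimal set over measured (time, energy) pairs. The configuration names and measurements below are invented placeholders; a real system would obtain them from instrumentation and might additionally accept user-specified weights or constraints to pick a point from the front.

    // Multi-objective selection sketch: retain the Pareto-optimal configurations
    // over (time, energy). All measurements below are invented placeholders.
    #include <cstdio>
    #include <vector>

    struct Config { const char* name; double time_ms; double energy_j; };

    // True if a is at least as good as b in both objectives and better in one.
    bool dominates(const Config& a, const Config& b) {
        return a.time_ms <= b.time_ms && a.energy_j <= b.energy_j &&
               (a.time_ms < b.time_ms || a.energy_j < b.energy_j);
    }

    int main() {
        std::vector<Config> measured = {
            {"variantA-tile16", 12.0, 30.0},
            {"variantA-tile32", 10.0, 45.0},
            {"variantB-tile16", 15.0, 20.0},
            {"variantB-tile32", 11.0, 50.0},  // dominated by variantA-tile32
        };
        for (const Config& c : measured) {
            bool on_front = true;
            for (const Config& other : measured)
                if (dominates(other, c)) { on_front = false; break; }
            if (on_front)
                std::printf("Pareto-optimal: %s (%.1f ms, %.1f J)\n",
                            c.name, c.time_ms, c.energy_j);
        }
        return 0;
    }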