
This is an incomplete list of ideas that are often brought up but have not yet become proposals because their implementation has not been worked out in detail.

Typed metadata

Metadata are prone to typos and type errors. For metadata that take arguments, it is sometimes unclear what exactly the arguments should be. This applies not only to compiler metadata, for which at least a list is available via --help-metas, but also to metadata defined by third-party libraries.

A possible solution is to provide a way to declare metadata along with its argument types. Then, whenever metadata is applied, the compiler could check that it was declared and type-check its arguments properly.

Unresolved questions:

  • exact syntax for declaring typed metadata
    • as module-level functions?
    • as typedefs to a special parametrised type in the haxe package? (typedef MyMeta = haxe.metadata.Build<arg1, arg2>)
    • as statically imported functions?
    • declared via an initialisation macro?
  • exact syntax for applying typed metadata - @:foo like it is done now, or e.g. @!foo (both have different backward compatibility-related issues)
  • should untyped metadata (eventually) be deprecated?
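
As a purely illustrative sketch of the typedef-based option from the list above (neither the haxe.metadata package nor any of this checking exists today; all names are hypothetical):

// Hypothetical declaration: the metadata's argument types are taken
// from the type parameters of a special type in the haxe package.
typedef MyMeta = haxe.metadata.Build<String, Int>;

// Hypothetical application: the compiler would verify that @:myMeta
// was declared and that its arguments unify with (String, Int).
@:myMeta("example", 42)
class SomeClass {}

// @:myMeta(42, "example") would then be a compile-time error.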

Null-safe traversal operator ?.

As a further step towards null safety.

var foo:Null<SomeObject> = getFooOrNull();

foo?.name;
// equivalent to
(foo != null ? foo.name : null);

foo?.bar?.baz?.hello;
// equivalent to
(foo != null && foo.bar != null && foo.bar.baz != null ? foo.bar.baz.hello : null);

Unresolved questions:

  • how to parse this? (might conflict with the ternary operator ?:, and there is some ambiguity with floating-point literals, because .1 is the same as 0.1; see the illustration below)
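
A purely illustrative example of the ambiguity with floating-point literals (hypothetical, assuming ?. were simply added as a new token):

var flag = Math.random() > .5;

// Intended as a ternary with the float literal .1, but a greedy lexer
// would read "?." as the null-safe operator, i.e. flag?.1 : 0.2.
var x = flag ? .1 : 0.2;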

Explicit type parameters for function calls

Allow specifying type parameters when calling a parametrised function.

class Foo {
  public static function example<T, U>(a:Null<T>, b:Null<U>):Void {
    // do stuff
  }
}

// then
Foo.example<String, Int>(null, null);

Unresolved questions:

  • how to parse this? (the manual suggests this conflicts with comparison operators; see the illustration below)
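
Purely as an illustration of that ambiguity (the same problem known from C++ templates), the call above could also be read as comparison expressions:

// Intended: a call with explicit type parameters String and Int.
Foo.example<String, Int>(null, null);

// Ambiguous reading without special handling: comparisons and a
// parenthesised expression, roughly
//   (Foo.example < String), (Int > (null, null))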

Fix @:multiType

@:multiType is messy, which makes it difficult to implement anything that uses Map generically.

See #8746
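
A small illustration of the kind of code that is currently problematic, using only the standard library Map (the exact error message may vary between Haxe versions):

// Map is an @:multiType abstract: the concrete implementation
// (StringMap, IntMap, ObjectMap, ...) is selected from the key type
// at the construction site.
function makeEmpty<K, V>():Map<K, V> {
  // Fails to compile: with K still unknown, the compiler cannot pick
  // a concrete implementation for the constructor.
  return new Map();
}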

Field-level dependencies

Currently, dependencies are only tracked between whole modules. Knowing which fields depend on which could help with:

  • compiler cache (fewer cache invalidations needed)
  • more robust static initialisation order (might still be target dependent)
  • slightly better DCE?
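
A small illustration of the difference, with hypothetical modules A and B:

// A.hx
class A {
  // Only this field depends on module B ...
  public static function usesB():Int return B.value();

  // ... yet with module-level tracking, a change to B invalidates all
  // of module A in the compiler cache, including this unrelated field.
  public static function independent():Int return 42;
}

// B.hx
class B {
  public static function value():Int return 1;
}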

Dependent types, HKTs

Fancier type system improvements.

Dependent types would allow, for example, properly implementing arbitrarily-sized matrix operations without a separate class for each matrix dimension, because type parameters could express the dimensions of a matrix when they are known statically.
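
A purely hypothetical sketch (Haxe has no value-level type parameters; the syntax below is invented for illustration):

// Hypothetical: N, M and K are integer values lifted into the type.
class Matrix<N:Int, M:Int> {
  // Only type-checks when the inner dimensions match; the result
  // dimensions are computed at the type level.
  public function multiply<K:Int>(other:Matrix<M, K>):Matrix<N, K> {
    // ...
    return null;
  }
}

// Matrix<2, 3> multiplied by Matrix<3, 4> yields Matrix<2, 4>;
// multiplying by a Matrix<4, 4> would be a compile-time error.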

Higher-kinded types would allow fancier collections, and more functional programming constructs.
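
Equally hypothetical, a higher-kinded type parameter would let an interface abstract over the container type itself:

// Hypothetical: M is a type constructor (Array, Option, ...), not a concrete type.
interface Mappable<M<_>> {
  function map<A, B>(m:M<A>, f:A->B):M<B>;
}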

A type system modelled more completely after Hindley-Milner should resolve some unification issues (where Haxe currently requires explicit type hints) and potentially allow us to get rid of @:multiType.

Bottom type

Adding a type that represents a function (or expression) that never returns. Examples include Sys.exit(0), throw "foo", and while (true) { ... }. Such a type would model some APIs more accurately. It could also fix some problems with function inlining, e.g.:

function a():Int {
  if (Math.random() > .5) return 5;
  else b();
}
inline function b():Void {
  throw "no luck";
}

Currently the above only compiles if b is inlined; otherwise the compiler reports a missing return, because it cannot know that the else branch never returns. If b were instead typed as function b():Bottom, the types would check out and the compiler would be happy even without inlining.
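
The hypothetical fix would only change b's signature (Bottom does not exist in Haxe; the name is borrowed from type theory):

// Hypothetical: Bottom marks a function that never returns normally,
// so the else branch of a() is known to diverge even without inlining.
function b():Bottom {
  throw "no luck";
}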

Parser-level macros

Allowing macros defined in Haxe code to (partially) modify the behaviour of the parser itself. This would allow total control over the syntax of embedded DSLs, as well as library-provided syntax extensions and syntactic sugar without clunky metadata.

Unresolved questions:

  • how to interact with the existing parser?
  • when are parser macros introduced? only in HXML files, like initialisation macros?
  • how to deal with this for LSP, IDE integration, or syntax highlighting?

Function contracts

(Possibly as an external tool.)

Contract programming refers to designing functions by specifying their pre-conditions (expressing the requirements for their arguments and the state before entering the function) and their post-conditions (expressing the requirements for their return value and the state after exiting the function). Given a rich enough specification language, many important aspects of an API could be modelled. Such a contract can then be used with a static analysis tool and a logic solver (such as Z3) to verify that functions are used correctly according to their contracts, and to report compile-time errors or warnings if problems are detected. There are existing solutions for other languages, e.g. the many frontends of the Viper framework, which has its own specification language and takes care of interacting with a logic solver.

As an additional benefit, the Haxe static analyser could perform some more aggressive optimisations based on the contracts of a function.
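
As a purely hypothetical sketch of how contracts might be attached to a Haxe function (@:requires, @:ensures and the result identifier are invented for illustration):

// Pre-condition on the argument and post-condition on the return value;
// a verifier (e.g. backed by Z3) could check callers and the body against them.
@:requires(n >= 0)
@:ensures(result * result <= n && n < (result + 1) * (result + 1))
function isqrt(n:Int):Int {
  var r = 0;
  while ((r + 1) * (r + 1) <= n) r++;
  return r;
}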
