This is necessary so that the errors package can still be interpreted
once reflect.Type is converted from a concrete type to an interface
type. Without this change, basically every program would grow in size
by a few bytes.
The errors package has a call like the one sketched below in its
package initializer. This commit adds support for running that call at
compile time, avoiding it at runtime.
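For reference, the call being referred to is most likely the errorType
variable initializer that errors.As uses; in upstream Go it lives in
errors/wrap.go and goes through internal/reflectlite. A minimal sketch
using plain reflect instead:

    package errors

    import "reflect"

    // Package-level variable initializer: the compiler emits this as part
    // of the package init function. Evaluating it at compile time means no
    // reflect call has to run at program startup.
    var errorType = reflect.TypeOf((*error)(nil)).Elem()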
This doesn't always help (the call is already optimized away in many
small programs) but it does help to shave off some binary size in larger
programs. Perhaps more importantly, it will avoid a code size penalty
when the reflect package converts reflect.Type from a regular type to
an interface type.
Previously there was code to avoid impossible type asserts, but it was
limited and in fact too aggressive when combined with reflection. This
commit improves on that by checking all types in the program that may
appear in an interface (even struct fields and the like), but without
creating runtime.typecodeID objects for the type assert. This has two
advantages:
* As mentioned, it optimizes impossible type asserts away (see the
sketch after this list).
* It allows methods on types that were only asserted on (in
runtime.typeAssert) but never used in an interface to be optimized
away using GlobalDCE. This may have a cascading effect so that other
parts of the code can be further optimized.
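To make the idea of an impossible type assert concrete, here is a
small, entirely hypothetical sketch (the names are made up, not taken
from the compiler or its tests):

    package main

    import "os"

    // neverBoxed has a method, but no value of this type is ever stored
    // in an interface anywhere in the program.
    type neverBoxed struct{ n int }

    func (v neverBoxed) Value() int { return v.n }

    func describe(i interface{}) string {
        // Since no neverBoxed value can reach an interface, this assert
        // can be proven to always fail and optimized away, after which
        // GlobalDCE can also drop the unused Value method.
        if _, ok := i.(neverBoxed); ok {
            return "neverBoxed"
        }
        return "something else"
    }

    func main() {
        println(describe(len(os.Args)))
    }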
This sometimes massively improves code size and mostly negates the code
size regression of the previous commit.
This distinction was useful before when reflect wasn't properly
supported. Back then it made sense to only include method sets that were
actually used in an interface. But now that it is possible to reach
other values through reflection (for example, by extracting fields from
structs) and to turn them back into interfaces, it is necessary to
preserve every method set that could be used anywhere in the program in
a type assert, interface assert or interface method call.
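As a hypothetical illustration (the names are made up): a method can
become reachable purely through reflection, without the type ever being
placed in an interface directly by the program's own code:

    package main

    import (
        "fmt"
        "reflect"
    )

    type Celsius float64

    // String is never called directly, and Celsius is never stored in an
    // interface by this program's own code; it is only reachable through
    // reflection.
    func (c Celsius) String() string { return fmt.Sprintf("%.1f °C", float64(c)) }

    type Reading struct {
        Temp Celsius
    }

    func main() {
        r := Reading{Temp: 21.5}
        // Extract the struct field via reflection and put it back into an
        // interface value.
        v := reflect.ValueOf(r).Field(0).Interface()
        // This only works if the compiler preserved the Celsius method set.
        fmt.Println(v.(fmt.Stringer).String())
    }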
In the future, this logic will need to be revisited again when
reflect.New or reflect.Zero gets implemented.
Code size increases in some cases, but usually only slightly (except
for one outlier in the drivers smoke tests). The next commit will
improve the situation significantly.
GetElementPtr would not work on values that weren't pointers. Because
fixed addresses (often used in memory-mapped I/O) are represented as
integers rather than pointers in interp, it would return an error.
This resulted in the teensy40 target not compiling correctly since the
interp package rewrite. This commit should fix that.
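For context, this is the kind of pattern involved, shown as a
hypothetical sketch (the register layout and address are made up, not
taken from the teensy40 support code): a fixed address is converted to
a pointer, and field accesses on it lower to GetElementPtr instructions
that interp has to evaluate even though the underlying constant is an
integer rather than a pointer.

    package mmio

    import "unsafe"

    // gpio mirrors the register layout of a memory-mapped GPIO peripheral.
    type gpio struct {
        DR   uint32 // data register
        GDIR uint32 // direction register
    }

    // The register block lives at a fixed address: in interp this constant
    // is an integer, not a pointer, yet indexing into the struct still
    // produces GetElementPtr instructions.
    var port = (*gpio)(unsafe.Pointer(uintptr(0x40000000)))

    // SetOutput configures the given pin as an output.
    func SetOutput(pin uint8) {
        port.GDIR |= 1 << pin
    }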
For a full explanation, see interp/README.md. In short, this rewrite
is a redesign of the partial evaluator that improves on the previous
one. The main functional difference is that the interpretation of a
function can be rolled back when an unsupported instruction is
encountered (for example, a genuinely unknown instruction or a branch
on a value that is only known at runtime). This also means it is no
longer necessary to scan functions to see whether they can be
interpreted: instead, the package now simply tries to interpret them
and reverts when it can't go any further.
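As a made-up example of what this enables (names are hypothetical, not
from the TinyGo source): in the package below, the initializer for
table is fully constant and can be evaluated at compile time, while the
init function branches on a value only known at runtime, so that part
is rolled back and left to run at startup instead of blocking the whole
package initializer.

    package config

    import "os"

    // Fully constant: interp can compute this at compile time and store
    // the result directly in the binary, dropping the call from the
    // package initializer.
    var table = buildTable()

    func buildTable() [8]uint32 {
        var t [8]uint32
        for i := range t {
            t[i] = uint32(i * i)
        }
        return t
    }

    var verbose bool

    func init() {
        // Branch on a value only known at runtime: interpretation is
        // rolled back here and this part runs at startup as usual.
        if len(os.Args) > 1 {
            verbose = true
        }
    }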
This new design has several benefits:
* Most errors coming from the interp package are avoided, as it can
simply skip the code it can't handle. This has long been an issue.
* The memory model has been improved, which means some packages that
previously failed their tests now pass all of them.
* Because of a better design, it is in fact a bit faster than the
previous version.
This means the following packages now pass tests with `tinygo test`:
* hash/adler32: previously it would hang in an infinite loop
* math/cmplx: previously it resulted in errors
This also means that the math/big package can be imported. It would
previously fail with an "interp: branch on a non-constant" error.