In LLVM 8, the AVR backend has moved all function pointers to address
space 1 by default. Much of the code still assumes function pointers
live in address space 0, leading to assertion failures.
This commit fixes the problem by autodetecting the address space that
function pointers live in, and by avoiding function pointers in
interface pseudo-calls.
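A minimal sketch of how such autodetection might look using the go-llvm
bindings; the dummy-function trick and all names here are illustrative,
not necessarily the exact code of this commit:

    import "tinygo.org/x/go-llvm"

    // funcPtrAddrSpace reports the address space the target uses for
    // function pointers: 1 on AVR with LLVM 8, 0 on most other targets.
    func funcPtrAddrSpace(ctx llvm.Context, mod llvm.Module) int {
        // Create a throwaway function, ask which address space its
        // pointer type lives in, then remove it again.
        dummyType := llvm.FunctionType(ctx.VoidType(), nil, false)
        dummy := llvm.AddFunction(mod, "tinygo.dummy", dummyType)
        defer dummy.EraseFromParentAsFunction()
        return dummy.Type().PointerAddressSpace()
    }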
Without this, the following code would not panic:

    func getInt(i int) int { return i }
    make([][1<<18]byte, getInt(1<<18))

Or this code would be allowed to compile for 32-bit systems:

    make([][1<<18]byte, 1<<18)
Previously, this would have resulted in an LLVM verification error
because runtime.sliceBoundsCheckMake would not accept 64-bit integers on
these platforms.
The math package uses routines written in Go assembly language, which
LLVM/Clang cannot parse. Additionally, not all instruction sets are
supported.
Redirect all math functions written in assembly to their Go
equivalents. This is not the fastest option, but it gets packages that
depend on math functions working.
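One way to express such a redirect is //go:linkname aliasing: point the
exported, normally assembly-backed symbol at the portable fallback that
the math package already contains. This is a sketch of the idea, not
necessarily how TinyGo wires it up:

    package runtime

    import _ "unsafe" // for go:linkname

    // The pure-Go fallback that already exists in the math package.
    //go:linkname math_sqrt math.sqrt
    func math_sqrt(x float64) float64

    // Redirect the exported, assembly-backed symbol to that fallback.
    //go:linkname math_Sqrt math.Sqrt
    func math_Sqrt(x float64) float64 {
        return math_sqrt(x)
    }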
Some instructions emitted by LLVM (like movaps) expect 16-byte
alignment, while the allocator assumed that 8-byte alignment was good
enough.
TODO: this issue came to light with LLVM optimizing a complex128 store
to the movaps instruction, which must be aligned. It looks like this
means that the <2 x double> IR type is actually 16-byte aligned instead
of 8-byte like a double. If this is the case, the alignment of complex
numbers needs to be updated in the whole compiler.
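A minimal sketch of the allocator-side fix, assuming an align helper as
the place where the rounding happens (the name is illustrative):

    // align rounds an address or size up to the next 16-byte boundary,
    // which is enough for SSE stores like movaps. The mask trick
    // requires the alignment to be a power of two.
    func align(ptr uintptr) uintptr {
        return (ptr + 15) &^ 15
    }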
Use stringIterator.byteindex as the loop index, and remove
stringIterator.rangeindex, as "the index of the loop is the starting
position of the current rune, measured in bytes". This patch also fixes
the current loop index returned by stringNext, by using `it.byteindex`
before, not after, `length` is added.
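A sketch of the fixed iteration order; the decodeUTF8 helper and the
exact signatures are illustrative:

    package main

    import "unicode/utf8"

    type stringIterator struct {
        byteindex uintptr // start of the current rune, in bytes
    }

    // decodeUTF8 stands in for the runtime's UTF-8 decoder (assumed
    // helper): the rune at the byte offset plus its length in bytes.
    func decodeUTF8(s string, index uintptr) (rune, uintptr) {
        r, n := utf8.DecodeRuneInString(s[index:])
        return r, uintptr(n)
    }

    // stringNext returns (ok, byte index of the rune's first byte, rune).
    func stringNext(s string, it *stringIterator) (bool, int, rune) {
        if int(it.byteindex) >= len(s) {
            return false, 0, 0
        }
        index := int(it.byteindex) // report the position *before* advancing
        r, length := decodeUTF8(s, it.byteindex)
        it.byteindex += length
        return true, index, r
    }

    func main() {
        s := "héllo"
        it := &stringIterator{}
        for {
            ok, i, r := stringNext(s, it)
            if !ok {
                break
            }
            println(i, string(r)) // 0 h, 1 é, 3 l, 4 l, 5 o
        }
    }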
* machine/uart: add core support for multiple UARTs by allowing for multiple RingBuffers
* machine/uart: complete core support for multiple UARTs
* machine/uart: no need to store pointer to UART, better to treat like I2C and SPI
* machine/uart: increase ring buffer size to 128 bytes
* machine/uart: improve godoc comments and use the comma-ok idiom for the buffer Put/Get methods (sketched after this list)
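The comma-ok shape referred to in the last item looks roughly like this
sketch (the real machine package version must use volatile or atomic
accesses, since the buffer is shared with interrupt handlers):

    const bufferSize = 128

    // RingBuffer is a fixed-size byte queue for received UART data.
    type RingBuffer struct {
        data [bufferSize]byte
        head int // index of the oldest byte
        used int // number of stored bytes
    }

    // Put stores a byte and reports whether there was room for it.
    func (rb *RingBuffer) Put(val byte) bool {
        if rb.used == bufferSize {
            return false
        }
        rb.data[(rb.head+rb.used)%bufferSize] = val
        rb.used++
        return true
    }

    // Get removes and returns the oldest byte; ok is false when empty.
    func (rb *RingBuffer) Get() (byte, bool) {
        if rb.used == 0 {
            return 0, false
        }
        val := rb.data[rb.head]
        rb.head = (rb.head + 1) % bufferSize
        rb.used--
        return val, true
    }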
Support for channels is not complete. The following pieces are missing:
* Channels with values bigger than an int. An int in TinyGo can always
  contain at least a pointer, so pointers are okay to send (see the
  example after this list).
* Buffered channels.
* The select statement.
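For reference, a minimal program that already works under these
constraints, sending an int-sized value over an unbuffered channel:

    package main

    func main() {
        ch := make(chan int) // unbuffered: buffered channels are not supported yet
        go func() {
            ch <- 42 // int-sized values (and therefore pointers) can be sent
        }()
        println(<-ch)
    }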
Before this commit, goroutine support was spread through the compiler.
This commit changes this support, so that the compiler itself only
generates simple intrinsics and leaves the real support to a compiler
pass that runs as one of the TinyGo-specific optimization passes.
The biggest change, which was done together with the rewrite, is support
for goroutines in WebAssembly for JavaScript. The challenge in
JavaScript is that in general no blocking operations are allowed, which
means that programs that call time.Sleep() but do not start goroutines
also have to be run by the scheduler.
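Concretely, even this goroutine-free program has to go through the
scheduler on WebAssembly, because time.Sleep cannot block the JavaScript
event loop:

    package main

    import "time"

    func main() {
        // On wasm this must yield to the event loop and resume later,
        // rather than blocking, so main itself runs under the scheduler.
        time.Sleep(time.Second)
        println("done")
    }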
This was necessary before, when only some functions had a context
parameter. Now it is no longer needed because all functions
(independent of signature) have a context parameter at the end, so
signatures always match automatically.
This commit does two things:
* It adds support for the GOOS and GOARCH environment variables. They
  fall back to runtime.GO* only when not set (see the sketch after this
  list).
* It adds support for 3 new architectures: 386, arm, and arm64. For
now, this is Linux-only.
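A sketch of the fallback logic, with illustrative names:

    import (
        "os"
        "runtime"
    )

    // goEnv returns the value for GOOS or GOARCH, preferring the
    // environment and falling back to what TinyGo itself was built with.
    func goEnv(name string) string {
        if v := os.Getenv(name); v != "" {
            return v
        }
        if name == "GOOS" {
            return runtime.GOOS
        }
        return runtime.GOARCH
    }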
This reduces complexity in the compiler without affecting binary sizes
too much.
Cortex-M0: no changes
Linux x64: no changes
WebAssembly: some testcases (calls, coroutines, map) are slightly bigger
Implement defer in a different way, which results in smaller binaries.
The binary produced from testdata/calls.go (the only test case with
defer) is reduced a bit in size, but the savings in bytes greatly vary
by architecture:
Cortex-M0: -96 .text / flash
WebAssembly: -215 entire file
Linux x64: -32 .text
Deferred functions in TinyGo were implemented by creating a linked list
of struct objects that contain a function pointer to a thunk, a pointer
to the next object, and a list of parameters. When it was time to run
deferred functions, a helper runtime function called each function
pointer (the thunk) with the struct pointer as a parameter. This thunk
would then in turn extract the saved function parameter from the struct
and call the real function.
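In Go terms, the old layout looked something like this (names are
illustrative):

    // One node in the linked list of pending defers (old scheme).
    type deferFrame struct {
        callback func(*deferFrame) // thunk: unpacks the saved arguments, calls the real function
        next     *deferFrame       // the previously deferred call, or nil
        // ...the saved call arguments followed here, one struct per defer site
    }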
What this commit changes is that the loop that calls the deferred
functions is moved to the end of the function (practically inlining it),
and the thunks are replaced with direct calls inside this loop. This
makes it much easier for LLVM to perform all kinds of optimizations,
like inlining and dead argument elimination.
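After this change, the generated code behaves roughly like the
following, where the callback field is a small defer-site number rather
than a function pointer (all names here are illustrative):

    package main

    // frame.callback is now a defer-site number, not a function pointer.
    type deferFrame struct {
        callback int
        next     *deferFrame
        arg      int // saved argument (in reality, one frame type per site)
    }

    func printIt(n int) { println("deferred print:", n) }
    func closeIt(n int) { println("deferred close:", n) }

    // This loop is emitted at the end of the deferring function itself,
    // so every case body is a direct, inlinable call.
    func rundefers(stack *deferFrame) {
        for frame := stack; frame != nil; frame = frame.next {
            switch frame.callback {
            case 0:
                printIt(frame.arg)
            case 1:
                closeIt(frame.arg)
            }
        }
    }

    func main() {
        // Defers run in reverse order, so the list head is the last defer.
        stack := &deferFrame{callback: 1, arg: 2, next: &deferFrame{callback: 0, arg: 1}}
        rundefers(stack)
    }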
The default priority is 0 (the highest), which is reserved by the
SoftDevice. For normal operation the exact priority level doesn't
matter; only the relative priority matters. So this change makes the
code compatible with the SoftDevice without actually changing its
behavior.
This commit changes many things:
* Most interface-related operations are moved into an optimization
pass for more modularity. IR construction creates pseudo-calls which
are lowered in this pass.
* Type codes are assigned in this interface lowering pass, after DCE.
* Type codes are sorted by usage: types more often used in type
asserts are assigned lower numbers to ease jump table construction
during machine code generation.
* Interface assertions are optimized: they are replaced by a constant
  false, a comparison against a constant, or, in the general case, a
  type switch over only concrete types.
* Interface calls are replaced with unreachable, a direct call, or a
  concrete type switch with direct calls, depending on the number of
  implementing types. This hopefully makes some interface patterns
  zero-cost (see the example after this list).
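As an example of the last point: if, after dead code elimination, only
one concrete type is left implementing an interface, a dynamic call like
the one below can lower to a plain direct call (the types here are
illustrative):

    package main

    type Printer interface{ Print(s string) }

    type Console struct{}

    func (Console) Print(s string) { println(s) }

    func greet(p Printer) {
        // With Console as the only implementing type after DCE, this
        // dynamic call can lower to a direct call to Console.Print;
        // with no implementing types, it lowers to unreachable.
        p.Print("hello")
    }

    func main() { greet(Console{}) }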
These changes lead to a ~0.5K reduction in code size on Cortex-M for
testdata/interface.go. It appears that a major cause for this is the
replacement of function pointers with direct calls, which are far more
susceptible to optimization. Also, not having a fixed global array of
function pointers greatly helps dead code elimination.
This change also makes future optimizations easier, like optimizations
on interface value comparisons.
This can be used in the future to trigger garbage collection. For now,
it provides a more useful error message in case the heap is completely
filled up.
Make sure every to-be-implemented GC can use the same interface. As a
result, a 1MB chunk of RAM is allocated on Unix systems on init instead
of allocating on demand.
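The shared surface implied here is small; a sketch of what every GC
implementation would provide (the signatures are illustrative, and the
bodies are allocator-specific):

    package gc

    import "unsafe"

    // alloc returns a zeroed block of the given size, collecting or
    // growing the heap when it runs out of room.
    func alloc(size uintptr) unsafe.Pointer {
        // ...allocator-specific...
        return nil
    }

    // free releases an earlier allocation; collectors may treat it as a no-op.
    func free(ptr unsafe.Pointer) {
        // ...allocator-specific...
    }

    // GC runs a collection cycle, if the collector supports one.
    func GC() {
        // ...allocator-specific...
    }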
* Use 64-bit integers on 64-bit platforms, just like gc and gccgo
  (illustrated after this list): https://golang.org/doc/go1.1#int
* Do not use a separate length type. Instead, use uintptr everywhere a
length is expected.
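For illustration, the first change is directly observable:

    package main

    import "unsafe"

    func main() {
        // int is now pointer-sized, as with gc and gccgo:
        // prints 8 on 64-bit platforms and 4 on 32-bit ones.
        println(unsafe.Sizeof(int(0)))
    }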
Let the standard library think that it is compiling for js/wasm.
The most correct way of supporting bare metal Cortex-M targets would be
using the 'arm' build tag and specifying no OS, or an 'undefined' OS
(perhaps GOOS=noos?). However, there is no build tag for specifying no
OS at all; the closest option is GOOS=js, which makes very few
assumptions.
Sadly, GOOS=js also makes some assumptions: it assumes it is running
with GOARCH=wasm. This would not be such a problem by itself: just add
js, wasm, and arm as build tags. However, having two GOARCH build tags
leads to an error in internal/cpu: it defines variables for both
architectures, which then conflict.
To work around these problems, the 'arm' target has been renamed to
'tinygo.arm'. In the future, a GOOS=noos (or similar) should be added,
which can work with any architecture and doesn't implement OS-specific
stuff.