Instead of storing the value to send/receive in the coroutine promise,
store only a pointer in the promise. This simplifies the code a lot and
allows larger value sizes to be sent across a channel.
Unfortunately, this new system has a code size impact. For example,
compiling testdata/channel.go for the BBC micro:bit, there is an
increase in code size from 4776 bytes to 4856 bytes. However, the
improved flexibility and simplicity of the code should be worth it. If
this becomes an issue, we can always refactor the code at a later time.
This is implemented as follows:
* The parent coroutine allocates space for the return value in its
frame and stores a pointer to this frame in the parent coroutine
handle.
* The child coroutine obtains the alloca from its parent using the
parent coroutine handle. It then stores the result value there.
* The parent coroutine reads the data from the alloca on resumption.
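As a user-level sketch of the effect (the struct and names here are made up for illustration, not part of the commit): a value several words in size can now be sent over a channel, because the channel machinery passes a pointer to the value instead of storing the value itself in the coroutine promise.

    package main

    // A value much larger than a machine word; with the pointer-based scheme
    // it is copied via the alloca in the parent frame rather than squeezed
    // into the coroutine promise.
    type reading struct {
        timestamp int64
        values    [8]float64
    }

    func main() {
        ch := make(chan reading)
        go func() {
            ch <- reading{timestamp: 42}
        }()
        r := <-ch
        println(r.timestamp)
    }
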
The generated wasm is 575 bytes when compiled with -no-debug (and
works), which is a much better first experience for new users than
the 20KB+ currently added just from including fmt.
Adds another example showing the simple case of executing main,
adds a README explaining how everything fits together and how to
execute the compiled code in the browser, and includes a minimal
webserver for local testing.
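For local testing, a webserver along these lines is enough (a minimal sketch; the port and the ./html directory are assumptions, the important detail being that .wasm files are served with the application/wasm MIME type):

    package main

    import (
        "log"
        "mime"
        "net/http"
    )

    func main() {
        // Browsers require application/wasm for WebAssembly.instantiateStreaming,
        // so make sure .wasm files are served with that MIME type.
        mime.AddExtensionType(".wasm", "application/wasm")
        log.Println("serving ./html on http://localhost:8080")
        log.Fatal(http.ListenAndServe(":8080", http.FileServer(http.Dir("./html"))))
    }
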
This has several advantages, among them:
- Many passes (heap-to-stack, dead arg elimination, inlining) do not
work with function pointer calls. Making them normal function calls
improves their effectiveness.
- Goroutine lowering to LLVM coroutines does not currently support
function pointers. By eliminating function pointers, coroutine
lowering gets support for them for free.
This is especially useful for WebAssembly.
Because of the second point, this work is currently only enabled for the
WebAssembly target.
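As a rough sketch of the idea in Go source terms (not the actual IR transformation; the handler names are made up): a call through a function value can be replaced by a small ID plus a switch over the statically known targets, which direct-call passes can then optimize.

    package main

    func handleA(x int) int { return x + 1 }
    func handleB(x int) int { return x * 2 }

    func main() {
        // Before: an indirect call through a function pointer.
        var handler func(int) int = handleA
        println(handler(3))

        // After (conceptually): the function value becomes an ID and the call
        // becomes a switch over known implementations, so passes like inlining
        // and dead argument elimination see plain direct calls.
        handlerID := 1
        var result int
        switch handlerID {
        case 1:
            result = handleA(3)
        case 2:
            result = handleB(3)
        }
        println(result)
    }
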
Previously, when casting an integer to a bigger integer, the destination
signedness was used. This is problematic when casting a negative int16
to uint32, for example, because it would cause zero-extension.
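A worked example of the difference (the fix is to extend based on the source signedness):

    package main

    func main() {
        v := int16(-1)
        println(uint32(v))         // correct, sign-extended: 4294967295 (0xFFFFFFFF)
        println(uint32(uint16(v))) // what zero-extension produces: 65535 (0x0000FFFF)
    }
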
Go 1.12 switched to using libSystem.dylib for system calls, because
Apple recommends against the direct system calls that Go 1.11 and
earlier performed. For more information, see:
https://github.com/golang/go/issues/17490
https://developer.apple.com/library/archive/qa/qa1118/_index.html
While the old syscall package was relatively easy to support in TinyGo
(just implement syscall.Syscall*), this got a whole lot harder with Go
1.12 as all syscalls now go through CGo magic to call the underlying
libSystem functions. Therefore, this commit overrides the stdlib syscall
package with a custom package that performs calls with libc (libSystem).
This may be useful not just for darwin but also for other platforms
that, unlike Linux, place their stable ABI at the libc boundary rather
than at the syscall boundary.
Only a very minimal part of the syscall package has been implemented, to
get the tests to pass. More calls can easily be added in the future.
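A minimal sketch of the approach (the //export binding, symbol names and the function chosen here are assumptions for illustration, not the actual TinyGo code): a syscall package function simply forwards to the corresponding libc/libSystem wrapper instead of issuing the system call itself.

    package syscall

    // getpid is provided by libSystem; the external declaration below is bound
    // to that symbol at link time (the binding mechanism shown is an assumption).
    //
    //export getpid
    func libc_getpid() int32

    // Getpid calls through libc instead of performing a direct system call.
    func Getpid() int {
        return int(libc_getpid())
    }
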
So far, we've pretended to be js/wasm on baremetal targets to make the
stdlib happy. Unfortunately, this causes various problems because
syscall/js (a dependency of many stdlib packages) thinks it can do JS
calls, and emulating them gets quite hard with all the changes to the
syscall/js package in Go 1.12.
This commit does a few things:
* It lets baremetal targets pretend to be linux/arm instead of
js/wasm.
* It lets the loader only select particular packages from the src
overlay, instead of inserting them just before GOROOT. This makes it
possible to pick which packages to overlay for a given target.
* It adds a baremetal-only syscall package that stubs out almost all
syscalls.
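Roughly, such a stubbed-out package looks like this (an illustrative sketch; the names and error value are assumptions, not the actual implementation):

    package syscall

    // On a baremetal target there is no operating system, so nearly every
    // syscall is a stub that fails or returns a harmless default.

    type Errno uintptr

    func (e Errno) Error() string { return "operation not supported" }

    const ENOSYS Errno = 1 // illustrative value

    func Getpid() int                                          { return -1 }
    func Getwd() (string, error)                               { return "", ENOSYS }
    func Open(path string, mode int, perm uint32) (int, error) { return -1, ENOSYS }
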
Implement two trivial uses of the select statement.
Always blocking:
select {}
No-op:
select {
default:
}
Go 1.12 added a `select {}` statement to syscall/js, so this is needed
for Go 1.12 support. More complete support for select will be added in
the future.
This function returns the current timestamp, or 0 when called at compile time.
runtime.nanotime is used at package initialization by the time package
starting with Go 1.12.
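Concretely (an illustrative program, not part of the commit), merely importing the time package now requires runtime.nanotime to exist, because time's initialization calls it:

    package main

    import "time"

    func main() {
        start := time.Now() // works because runtime.nanotime is implemented
        println(time.Since(start).Nanoseconds())
    }
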
This commit implements nil checks for all platforms. These nil checks
could be optimized away on systems with an MMU, but since a major target
is systems without an MMU, keep it this way for now.
It implements three checks:
* Nil checks before dereferencing a pointer.
* Nil checks before calculating an address (*ssa.FieldAddr and
  *ssa.IndexAddr).
* Nil checks before calling a function pointer.
The first check has by far the biggest impact, with around a 5% increase
in code size. The other checks trigger in only some test cases and have
a minimal impact on code size.
This first nil check is also the one that is easiest to avoid on systems
with an MMU, if necessary.
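To make the three check sites concrete, here is a small illustrative program; each marked line is a place where the compiler now inserts a nil check (at run time it panics at the first one):

    package main

    type point struct{ x, y int }

    func main() {
        var p *point
        var a *[4]int
        var f func()

        _ = *p    // check 1: dereferencing a pointer
        _ = &p.x  // check 2: address calculation (*ssa.FieldAddr)
        _ = &a[0] // check 2: address calculation (*ssa.IndexAddr)
        f()       // check 3: calling a function pointer
    }
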
In LLVM 8, the AVR backend has moved all function pointers to address
space 1 by default. Much of the code still assumes function pointers
live in address space 0, leading to assertion failures.
This commit fixes the problem by autodetecting the address space of
function pointers and avoiding function pointers in interface pseudo-calls.
Without this, the following code would not panic:
func getInt(i int) int { return i }
_ = make([][1<<18]byte, getInt(1<<18))
Or this code would be allowed to compile for 32-bit systems:
_ = make([][1<<18]byte, 1<<18)
Previously, this would have resulted in an LLVM verification error
because runtime.sliceBoundsCheckMake would not accept 64-bit integers on
these platforms.