Previously, the machine.UART0 object had two meanings:
- it was the first UART on the chip
- it was the default output for println
These two meanings conflict, and resulted in workarounds like:
- Defining UART0 to refer to the USB-CDC interface (atsamd21,
atsamd51, nrf52840), even though that clearly isn't a UART.
- Defining NRF_UART0 to avoid a conflict with UART0 (which was
redefined as a USB-CDC interface).
- Defining aliases like UART0 = UART1, which refer to the same
hardware peripheral (stm32).
This commit changes this to use a new machine.Serial object for the
default serial port. It might refer to the first or second UART
depending on the board, or even to the USB-CDC interface. Also, UART0
now really refers to the first UART on the chip, no longer to a USB-CDC
interface.
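As an illustration (not part of the commit itself), code that used machine.UART0 as "the default output" now uses machine.Serial, while machine.UART0 stays reserved for the actual hardware peripheral. A rough sketch, assuming the usual Configure/WriteByte API and a board where both exist:

package main

import "machine"

func main() {
    // Default serial output: may be a UART or the USB-CDC interface,
    // depending on the board.
    machine.Serial.WriteByte('!')

    // UART0 now always refers to the first hardware UART on the chip.
    machine.UART0.Configure(machine.UARTConfig{BaudRate: 115200})
    machine.UART0.WriteByte('!')
}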
The changes in the runtime package are all just search+replace. The
changes in the machine package are a mixture of search+replace and
manual modifications.
This commit does not affect binary size; in fact, it doesn't affect the
resulting binary at all.
This means that machine.UART0, machine.UART1, etc. are of type
*machine.UART rather than machine.UART. This makes them easier to pass
around and avoids surprises when they are accidentally passed by value
while they should be passed by reference.
There is a small code size impact in some cases, but it is relatively
minor.
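For example (an illustrative sketch, assuming the usual Configure/ReadByte/WriteByte method set), a helper can now take a *machine.UART and be handed machine.UART0 directly, with no accidental copy of the peripheral state:

package main

import "machine"

// echo reads bytes from the given UART and writes them back. Taking a
// *machine.UART means caller and callee share the same state and buffers.
func echo(uart *machine.UART) {
    for {
        if b, err := uart.ReadByte(); err == nil {
            uart.WriteByte(b)
        }
    }
}

func main() {
    machine.UART0.Configure(machine.UARTConfig{BaudRate: 115200})
    echo(machine.UART0) // machine.UART0 is already a *machine.UART
}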
Make the GC globals scan phase conservative instead of precise on
WebAssembly. This reduces code size at the risk of introducing some
false positives.
This is a stopgap measure to mitigate an issue where the precise
scanning of globals does not track all pointers. Precise scanning works
for regular globals, but globals created in the interp package don't
always have a type and may therefore be missed by the AddGlobalsBitmap
pass.
The same issue is present on Linux and macOS, but is not as noticeable
there.
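Conceptually, the conservative variant looks something like the sketch below (illustrative names only, not the actual runtime code): every aligned word in the globals area is treated as a potential pointer, so nothing can be missed even without type information.

package gcscan_sketch

import "unsafe"

var heapStart, heapEnd uintptr // bounds of the heap area

// markGlobalsConservative scans the globals area word by word. Words that
// merely look like heap pointers cause false positives, but no real
// pointer can be overlooked the way an incomplete bitmap can overlook one.
func markGlobalsConservative(globalsStart, globalsEnd uintptr) {
    for addr := globalsStart; addr < globalsEnd; addr += unsafe.Sizeof(uintptr(0)) {
        word := *(*uintptr)(unsafe.Pointer(addr))
        if word >= heapStart && word < heapEnd {
            markRoot(word)
        }
    }
}

func markRoot(ptr uintptr) {} // stub standing in for the GC's mark routine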
There is no reason to specialize this per chip, as it is only ever used
for JavaScript. Not only that, it causes confusion and is yet
another quirk to learn when porting the runtime to a new
microcontroller.
This commit improves the timers on various microcontrollers to better
deal with counter wraparound. The result is a reduction in RAM size of
around 12 bytes and a small effect (sometimes positive, sometimes
negative) on flash consumption. But perhaps more importantly: getting
the current time is now interrupt-safe (it previously could result in a
race condition) and the time will now remain correct even when it isn't
read for a long duration. Before this commit, a call to `time.Now` more
than 8 minutes after the previous call could return an incorrect time.
For more details, see:
https://www.eevblog.com/forum/microcontrollers/correct-timing-by-timer-overflow-count/msg749617/#msg749617
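The core trick is sketched below (hypothetical names; the details differ per chip): instead of counting overflow interrupts, accumulate the elapsed ticks since the previous read into a 64-bit counter. Unsigned subtraction is well defined on wraparound, so the delta stays correct as long as the counter is read at least once per full period.

package timer_sketch

var (
    totalTicks uint64 // monotonic tick count since startup
    lastCount  uint32 // hardware counter value at the previous read
)

// ticks returns the 64-bit tick count, safely handling 32-bit wraparound.
func ticks() uint64 {
    state := disableInterrupts()            // protect the shared state
    count := readCounter()                  // read the 32-bit counter register
    totalTicks += uint64(count - lastCount) // correct even if the counter wrapped
    lastCount = count
    restoreInterrupts(state)
    return totalTicks
}

// Stubs so the sketch is self-contained; a real port maps these onto the
// chip's timer and interrupt-control registers.
func readCounter() uint32             { return 0 }
func disableInterrupts() uintptr      { return 0 }
func restoreInterrupts(state uintptr) {}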
Instead, leave args at its default value (which provides a fake argv[0] as it has for a long time).
Linux and macOS do not seem to be affected.
Fixes #1862 (TinyGo apps after v0.17.0-113-g7b761fa crash if run without argv[0])
This commit does two things:
1. It makes it possible to grow the heap on Linux and macOS by
allocating 1GB of virtual memory on startup and then slowly using it
as necessary, when running out of available heap space.
2. It switches the default GC to be the conservative GC (previously
extalloc). This is good for consistency with other platforms that
all use this same GC.
This makes the extalloc GC unused by default.
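A minimal sketch of the first idea (assuming Linux and the syscall package; the real runtime code differs): reserve a large region of address space up front and only hand a growing prefix of it to the allocator. Anonymous private mappings are only backed by physical memory once touched, so reserving 1GB is cheap.

package heap_sketch

import "syscall"

const reserved = 1 << 30 // 1GB of virtual address space

var (
    heap    []byte // the full reserved region
    heapLen int    // how much of it is currently used as heap
)

// initHeap reserves the address space once at startup.
func initHeap() (err error) {
    heap, err = syscall.Mmap(-1, 0, reserved,
        syscall.PROT_READ|syscall.PROT_WRITE,
        syscall.MAP_PRIVATE|syscall.MAP_ANONYMOUS)
    return err
}

// growHeap hands more of the reserved region to the allocator; it is
// called when the GC cannot free up enough space.
func growHeap(amount int) bool {
    if heapLen+amount > reserved {
        return false // out of reserved address space
    }
    heapLen += amount
    return true
}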
This heap allocation would normally be optimized away, but with -opt=0
it may remain. This is a problem if the conservative GC is used, because
the conservative GC needs to be initialized before use.
On some boards the FPU is already enabled on startup, probably as part
of the bootloader. On other chips it was enabled as part of the runtime
startup code. In all these cases, enabling the FPU is currently
unsupported: the automatic stack sizing of goroutines assumes that the
processor won't need to reserve space for FPU registers. Enabling the
FPU therefore can lead to a stack overflow.
This commit either removes the code that enables the FPU, or simply
disables it in startup code. A future change should fully enable the FPU
so that operations on float32 can be performed by the FPU instead of in
software, greatly speeding up such code.
- Add some extra fields: FPUPresent, CPU, and NVICPrioBits, which may
come in handy at a later time (and are easy to add).
- Rename DEVICE to Device, to match Go style.
This is in preparation for the next commit, which requires the FPUPresent
flag.
This improves compatibility between the regular browser target
(-target=wasm) and the WASI target (-target=wasi). Specifically, it
allows running WASI tests like this:
tinygo test -target=wasi encoding/base32
This doesn't work with `tinygo flash` yet (even with
`-programmer=openocd`); you can use pyocd instead. For example, from the
Bluetooth package:
tinygo build -o test.hex -target=microbit-v2-s113v7 ./examples/advertisement/
pyocd flash --target=nrf52 test.hex
I intend to add support for pyocd to work around this issue, so that a simple
`tinygo flash` suffices.
There is no good reason for func values to refer to interface type
codes. The only thing they need is a stable identifier for function
signatures, which is easily created as a new kind of global. Decoupling
the two makes it easier to change interface-related code.
This fixes a type system loophole. The following program would
incorrectly run in TinyGo, while it would trigger a panic in Go:
package main

import "reflect"

func main() {
    v := reflect.ValueOf(struct {
        x int
    }{})
    x := v.Field(0).Interface()
    println("x:", x.(int))
}
Playground link: https://play.golang.org/p/nvvA18XFqFC
The panic in Go is the following:
panic: reflect.Value.Interface: cannot return value obtained from unexported field or method
I've shortened it in TinyGo to save a little bit of space.
This matches the main Go implementation and (among others) fixes a
compatibility issue with the encoding/json package. The encoding/json
package compares reflect.Type variables against nil, which does not work
as long as reflect.Type is of integer type.
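For example, a check like the following (the kind of comparison described above) only behaves correctly when reflect.Type is an interface; an integer-typed reflect.Type can never equal nil:

package main

import "reflect"

func describe(t reflect.Type) string {
    // Comparing an interface value against nil: this is the pattern that
    // breaks if reflect.Type is an integer type under the hood.
    if t == nil {
        return "<nil type>"
    }
    return t.String()
}

func main() {
    println(describe(nil))
    println(describe(reflect.TypeOf(42)))
}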
This also adds a reflect.RawType() function (like reflect.Type()) that
makes it easier to avoid working with interfaces in the runtime package.
It is internal only, but exported to let the runtime package use it.
This change introduces a small code size increase when working with the
reflect package, but I've tried to keep it to a minimum. Most programs
that don't make extensive use of the reflect package (and don't use
packages like fmt) should not be impacted by this.
Previously, there was code to avoid impossible type asserts, but it
wasn't great and was in fact too aggressive when combined with reflection.
This commit improves this by checking all types that exist in the
program that may appear in an interface (even struct fields and the
like) but without creating runtime.typecodeID objects with the type
assert. This has two advantages:
* As mentioned, it optimizes impossible type asserts away.
* It allows methods on types that were only asserted on (in
runtime.typeAssert) but never used in an interface to be optimized
away using GlobalDCE. This may have a cascading effect so that other
parts of the code can be further optimized.
This sometimes massively improves code size and mostly negates the code
size regression of the previous commit.
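As a constructed example (not from the commit), an impossible type assert is one whose asserted type is never stored in an interface anywhere in the program, so the assert can be lowered to a constant false:

package main

type sensor struct{ value int }

func (s sensor) Read() int { return s.value }

func main() {
    var i interface{} = 42 // no sensor value is ever stored in an interface

    // This assert can never succeed in this program, so it can be compiled
    // to "false"; sensor.Read can then be dropped by GlobalDCE if nothing
    // else references it.
    if s, ok := i.(sensor); ok {
        println(s.Read())
    } else {
        println("not a sensor")
    }
}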
This distinction was useful before when reflect wasn't properly
supported. Back then it made sense to only include method sets that were
actually used in an interface. But now that it is possible to get at
other values (for example, by extracting fields from structs) and to turn
them back into interfaces, it is necessary to preserve all method sets
that could possibly be used in the program in a type assert, interface
assert, or interface method call.
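An illustrative example of such a program (constructed for this explanation): the Celsius value below is only ever reachable through a struct field via reflect, yet its String method must still be callable once the field is put back into an interface.

package main

import (
    "fmt"
    "reflect"
)

type Celsius float64

func (c Celsius) String() string { return fmt.Sprintf("%.1f degC", float64(c)) }

type Reading struct {
    Temp Celsius
}

func main() {
    r := Reading{Temp: 21.5}

    // Celsius is never stored in an interface directly, but reflect can
    // extract the field and wrap it in one, so its method set must survive.
    v := reflect.ValueOf(r).Field(0).Interface()
    if s, ok := v.(fmt.Stringer); ok {
        println(s.String())
    }
}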
In the future, this logic will need to be revisited again when
reflect.New or reflect.Zero gets implemented.
Code size increases a bit in some cases, but usually in a very limited
way (except for one outlier in the drivers smoke tests). The next commit
will improve the situation significantly.
This package does not implement any methods, which is of course not
useful. However, by creating this package in advance it's possible to
see the next issue that's preventing something from building in TinyGo.
Motivated by: https://github.com/tinygo-org/tinygo/issues/1634
Since https://github.com/tinygo-org/tinygo/pull/1571 (in particular, the first
commit that sets the main package path), the main package is always named
"main". This makes the callMain() workaround in the runtime unnecessary and
allows directly calling the main.main function with a //go:linkname pragma.
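In the runtime, that boils down to something like this sketch (simplified; names other than the linkname target are illustrative):

package runtime_sketch

import _ "unsafe" // required for go:linkname

// callMain is bound to the program's main.main symbol, which is now
// guaranteed to exist under that name.
//
//go:linkname callMain main.main
func callMain()

func run() {
    initAll()  // illustrative stand-in for running package initializers
    callMain() // jump straight into the user's main function
}

func initAll() {} // stub so the sketch is self-contained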
On WebAssembly it is possible to grow the heap with the memory.grow
instruction. This commit implements this feature and, with that, also
removes the -heap-size flag, which was reportedly broken (I haven't
verified that). This should make it easier to use TinyGo for
WebAssembly, where there was no good reason to use a fixed heap size.
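Conceptually (hypothetical wrapper names; the real runtime binds these to the wasm memory.size and memory.grow instructions), growing the heap looks like this:

package wasm_sketch

const wasmPageSize = 64 * 1024 // wasm linear memory grows in 64KB pages

var heapEnd uintptr // current end of the heap area

// growHeap asks the wasm runtime for more linear memory and extends the
// heap up to the new end. It returns false if the request was refused.
func growHeap(pages uintptr) bool {
    if memoryGrow(pages) < 0 {
        return false
    }
    heapEnd = memorySize() * wasmPageSize
    return true
}

// Stubs standing in for the memory.size and memory.grow instructions.
func memorySize() uintptr          { return 0 }
func memoryGrow(pages uintptr) int { return -1 }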
This commit has no effect on baremetal targets with optimizations
enabled.
This commit swaps the layout of the heap. Previously, the metadata was
at the start and the data blocks (the actual heap memory) followed
after. This commit swaps those, so that the heap area starts with the
data blocks followed by the heap metadata.
This arrangement is not very relevant for baremetal targets that always
have all RAM allocated, but it is an important improvement for other
targets such as WebAssembly where growing the heap is possible but
starting with a small heap is a good idea. Because the metadata lives at
the end, and because the metadata does not contain pointers, it can
easily be moved. The data itself cannot be moved as the conservative GC
does not know all the pointer locations, plus moving the data could be
very expensive.
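A rough sketch of why this makes growing cheap (illustrative names; the real code lives in the runtime's garbage collector): after extending the heap, only the metadata has to be copied to its new position at the end, while the data blocks stay where they are.

package heaplayout_sketch

// Heap layout after this commit:
//
//   heapStart ... data blocks ... metadataStart ... heapEnd

const blocksPerMetadataByte = 16 // illustrative ratio of data to metadata

var heapStart, heapEnd, metadataStart uintptr

// setHeapEnd grows the heap to newHeapEnd and relocates the metadata.
func setHeapEnd(newHeapEnd uintptr) {
    oldMetadataStart := metadataStart
    oldMetadataSize := heapEnd - oldMetadataStart

    heapEnd = newHeapEnd
    totalSize := heapEnd - heapStart
    metadataStart = heapEnd - totalSize/blocksPerMetadataByte

    // Metadata contains no pointers, so a plain copy is enough; the data
    // blocks cannot be moved because the conservative GC doesn't know
    // every pointer location.
    memmove(metadataStart, oldMetadataStart, oldMetadataSize)
}

func memmove(dst, src, size uintptr) {} // stub for the sketch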
This commit changes the number of wait states for the stm32f103 chip to
2 instead of 4. This gets it back in line with the datasheet, but it
also has the side effect of breaking I2C. Therefore, another (seemingly
unrelated) change is needed: the i2cTimeout constant must be increased
to compensate for the lower number of flash wait states, presumably
because fewer wait states allow the chip to run code faster.
For a full explanation, see interp/README.md. In short, this rewrite is
a redesign of the partial evaluator which improves it over the previous
partial evaluator. The main functional difference is that when
interpreting a function, the interpretation can be rolled back when an
unsupported instruction is encountered (for example, an actual unknown
instruction or a branch on a value that's only known at runtime). This
also means that it is no longer necessary to scan functions to see
whether they can be interpreted: instead, this package now simply tries
to interpret each function and reverts when it can't go further.
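In pseudocode, the approach looks roughly like the model below (all names are illustrative stand-ins, heavily simplified compared to interp/README.md):

package interp_sketch

// Minimal model of "try to interpret, revert on failure".

type instruction struct{ runtimeOnly bool }

type function struct {
    instructions []instruction
    runtimeOnly  bool // left for execution at runtime instead
}

type memoryState struct{ version int }

func (m *memoryState) snapshot() int          { return m.version }
func (m *memoryState) revertTo(version int)   { m.version = version }
func (m *memoryState) apply(inst instruction) { m.version++ }

func runAtCompileTime(fn *function, mem *memoryState) {
    checkpoint := mem.snapshot()
    for _, inst := range fn.instructions {
        if inst.runtimeOnly {
            // An unsupported instruction (or a branch on a value only known
            // at runtime): roll back all side effects and give up on this
            // function so it runs at runtime instead.
            mem.revertTo(checkpoint)
            fn.runtimeOnly = true
            return
        }
        mem.apply(inst)
    }
}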
This new design has several benefits:
* Most errors coming from the interp package are avoided, as it can
simply skip the code it can't handle. This has long been an issue.
* The memory model has been improved, which means some packages now
pass all tests that previously didn't pass them.
* Because of a better design, it is in fact a bit faster than the
previous version.
This means the following packages now pass tests with `tinygo test`:
* hash/adler32: previously it would hang in an infinite loop
* math/cmplx: previously it resulted in errors
This also means that the math/big package can be imported. It would
previously fail with an "interp: branch on a non-constant" error.
This has been a *lot* of work, trying to understand the Xtensa windowed
registers ABI. But in the end I managed to come up with a very simple
implementation that so far seems to work very well.
I tested this with both blinky examples (with blinky2 slightly edited)
and ./testdata/coroutines.go to verify that it actually works.
Most development happened on the ESP32 QEMU fork from Espressif
(https://github.com/espressif/qemu/wiki) but I also verified that it
works on a real ESP32.