* Make ast module private
Signed-off-by: Nick Cameron <nrc@ncameron.org>
* Make most other modules private
Signed-off-by: Nick Cameron <nrc@ncameron.org>
* Expand API to support CLI, Python bindings, and LSP crate
Signed-off-by: Nick Cameron <nrc@ncameron.org>
---------
Signed-off-by: Nick Cameron <nrc@ncameron.org>
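A minimal sketch of the visibility pattern the entries above describe, with hypothetical module and item names; the real re-exports are whatever the CLI, Python bindings, and LSP crate actually need:

```rust
// lib.rs: internal modules become private; only a small, deliberate
// surface is re-exported for the CLI, Python bindings, and LSP crate.
// Module and item names here are hypothetical.
mod ast {
    pub struct Program; // placeholder for the real AST root
}
mod parser {
    use super::ast::Program;
    pub fn parse(_src: &str) -> Program {
        Program
    }
}

// The public API; everything else stays private or pub(crate).
pub use ast::Program;
pub use parser::parse;
```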
* Add ts_rs feature to work with indexmap
* Add feature for schemars to work with indexmap
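A hedged sketch of why the two entries above are needed: a type with an `IndexMap` field only derives `TS` and `JsonSchema` when each crate's indexmap support is enabled in Cargo.toml (the exact feature names vary by version):

```rust
use indexmap::IndexMap;
use schemars::JsonSchema;
use ts_rs::TS;

// These derives only compile when ts_rs and schemars are built with
// their indexmap support enabled (feature names differ by version,
// e.g. ts_rs's `indexmap-impl` and schemars' `indexmap`).
#[derive(TS, JsonSchema)]
pub struct Environment {
    // IndexMap keeps insertion order, unlike HashMap.
    pub bindings: IndexMap<String, u32>,
}
```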
* Add module ID to intern module paths
* Update code to use new source range with three fields
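A minimal sketch of the interning idea and the widened range type from the two entries above, assuming names like `ModuleId` and a three-element `SourceRange`; the actual kcl-lib types may differ:

```rust
use std::collections::HashMap;
use std::path::PathBuf;

/// Hypothetical interned ID for a module path; 0 is the top-level module.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct ModuleId(usize);

/// Source range widened from [start, end] to three fields.
#[derive(Clone, Copy, Debug)]
struct SourceRange([usize; 3]); // [start, end, module_id]

#[derive(Default)]
struct ModuleInterner {
    ids: HashMap<PathBuf, ModuleId>,
}

impl ModuleInterner {
    /// Reuse the existing ID for a path, or assign the next one,
    /// so importing the same module twice yields the same ID.
    fn intern(&mut self, path: PathBuf) -> ModuleId {
        let next = ModuleId(self.ids.len());
        *self.ids.entry(path).or_insert(next)
    }
}
```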
* Update generated files
* Update docs
* Fix wasm
* Fix TS code to use new SourceRange
* Fix TS tests to use new SourceRange and moduleId
* Fix formatting
* Fix to filter errors and source ranges to only show the top-level module
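A sketch of the filtering described in the entry above, under the assumption that module ID 0 marks the top-level module (type and helper names are hypothetical):

```rust
/// Hypothetical error type carrying the three-field source range.
struct KclError {
    message: String,
    source_range: [usize; 3], // [start, end, module_id]
}

/// Keep only errors whose ranges point into the top-level module (ID 0),
/// so diagnostics from imported modules are not surfaced directly.
fn top_level_errors(errors: Vec<KclError>) -> Vec<KclError> {
    errors
        .into_iter()
        .filter(|e| e.source_range[2] == 0)
        .collect()
}
```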
* Fix to reuse module IDs
* Fix to disallow empty path for import
* Revert unneeded Self change
* Rename field to be clearer
* Fix parser tests
* Update snapshots
* Change to not serialize module_id of 0
* Update snapshots after adding default module_id
* Move module_id functions to separate module
* Fix tests for console errors
* Proposal: module ID = 0 gets skipped when serializing tokens too (#4422)
Just like in AST nodes.
Also, I think "is_top_level" communicates the intent better than "is_default".
---------
Co-authored-by: Adam Chalmers <adam.chalmers@zoo.dev>
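A hedged sketch of the serialization rule proposed in #4422, assuming serde and an `is_top_level` predicate like the one suggested above:

```rust
use serde::{Deserialize, Serialize};

#[derive(Clone, Copy, Default, PartialEq, Serialize, Deserialize)]
struct ModuleId(usize);

impl ModuleId {
    /// ID 0 is the top-level module, so it can be omitted from output.
    fn is_top_level(&self) -> bool {
        self.0 == 0
    }
}

#[derive(Serialize, Deserialize)]
struct Token {
    value: String,
    // Skip writing the field for the top-level module, and fall back to
    // the default (0) when it is absent -- for tokens and AST nodes alike.
    #[serde(default, skip_serializing_if = "ModuleId::is_top_level")]
    module_id: ModuleId,
}
```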
* Add lsystem.kcl to tests
* Reduce iterations
* Fix the flaky user settings test (note to all future contributors: modeling-app does not wait for I/O in some cases before router navigation)
* Rename cargo-criterion to cargo-bench
* Use iai not criterion in CI
We want to benchmark the KCL parser and tokenizer to make sure we don't
accidentally slow them down. Generally Rust projects use Criterion to
benchmark code. Criterion runs your functions a few thousand times to
get reliable wall-clock measurements.
This is good for local benchmarking but bad for benchmarking in CI.
Why? Because in CI, you're running a container on some shared VM, so
wall-clock time might have a lot of interference from noisy neighbours.
Also, your benchmarks take a long time to run and eat up paid CI minutes.
A better approach for benchmarking in CI is to just count the number of
CPU instructions executed. This correlates with wall-clock time, but it
only needs to run the function once, so it takes much less time. It also
isn't affected by noisy neighbours running on the same VM or hardware.
This PR adds a new benchmark suite that counts instructions using `iai`,
from the creator of Criterion, who says iai and Criterion complement each
other nicely. We can run Criterion locally and iai in CI.
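A minimal iai benchmark of the shape this describes; the kcl-lib entry points and the fixture path named here are assumptions:

```rust
// benches/iai.rs -- counts CPU instructions instead of wall-clock time,
// so one run per function is enough and noisy CI neighbours don't skew
// results. The parser/tokenizer entry points below are hypothetical.
use iai::black_box;

const PROGRAM: &str = include_str!("../tests/lsystem.kcl");

fn bench_tokenize() {
    kcl_lib::tokenize(black_box(PROGRAM));
}

fn bench_parse() {
    kcl_lib::parse(black_box(PROGRAM));
}

iai::main!(bench_tokenize, bench_parse);
```

iai runs each function once under Valgrind's Cachegrind to count instructions, which is why the numbers stay stable across runs on shared CI hardware.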
* Update image in markdown docs