KCL: New simulation test pipeline (#4351)
The idea behind this is to test all the various stages of executing KCL
separately, i.e.:
- Start with a program
- Tokenize it
- Parse those tokens into an AST
- Recast the AST
- Execute the AST, outputting:
  - a PNG of the rendered model
  - serialized program memory
Each of these steps reads some input and writes some output to disk.
The output of one step becomes the input to the next step. These
intermediate artifacts are also snapshotted (like expectorate or twenty-twenty)
to ensure we're aware of any changes to how KCL works. A change could be a
bug, or it could be harmless or deliberate, but keeping these artifacts checked
into the repo means we can easily track changes.
Note: UUIDs sent back by the engine are currently nondeterministic and would
break all the snapshot tests. To avoid this, the snapshots use a regex
filter and replace anything that looks like a UUID with `[uuid]` when
writing program memory to a snapshot. In the future I hope our UUIDs will
be seedable and easy to make deterministic. At that point, we can stop
filtering the UUIDs.
We run this pipeline on many different KCL programs. Each test has its own
directory, which keeps its input (the KCL program), outputs (PNG, program
memory snapshot), and intermediate artifacts (AST, token list, etc.).
I also added a new `just` command to easily generate these tests.
You can run `just new-sim-test gear $(cat gear.kcl)` to set up a new
gear test directory and generate all the intermediate artifacts for the
first time. This doesn't need any macros; it just appends some new lines
of normal Rust source code to `tests.rs`, so it's easy to see exactly
what the code is doing.
This uses `cargo insta` for convenient snapshot testing of artifacts
as JSON, and `twenty-twenty` for snapshotting PNGs.
This was heavily inspired by Predrag Gruevski's talk at EuroRust 2024
about deterministic simulation testing, and how it can both reduce bugs
and cut testing/CI time. I'm very grateful to him for chatting with
me about this over the last couple of weeks.
2024-10-30 12:14:17 -05:00
---
source: kcl-lib/src/simulation_tests.rs
description: Variables in memory after executing cube.kcl
---
{
  "cube": {
    "type": "Function"
  },
  "myCube": {
    "type": "Solid",
    "value": {
      "type": "Solid",
      "id": "[uuid]",
      "artifactId": "[uuid]",
      "value": [
        {
          "faceId": "[uuid]",
          "id": "[uuid]",
          "sourceRange": [],
          "tag": null,
          "type": "extrudePlane"
        },
        {
          "faceId": "[uuid]",
          "id": "[uuid]",
          "sourceRange": [],
          "tag": null,
          "type": "extrudePlane"
        },
        {
          "faceId": "[uuid]",
          "id": "[uuid]",
          "sourceRange": [],
          "tag": null,
          "type": "extrudePlane"
        },
        {
          "faceId": "[uuid]",
          "id": "[uuid]",
          "sourceRange": [],
          "tag": null,
          "type": "extrudePlane"
        }
      ],
      "sketch": {
        "type": "Sketch",
        "id": "[uuid]",
        "paths": [
          {
            "__geoMeta": {
              "id": "[uuid]",
              "sourceRange": []
            },
            "from": [
              -20.0,
              -20.0
            ],
            "tag": null,
            "to": [
              -20.0,
              20.0
            ],
            "type": "ToPoint",
            "units": {
              "type": "Mm"
            }
          },
          {
            "__geoMeta": {
              "id": "[uuid]",
              "sourceRange": []
            },
            "from": [
              -20.0,
              20.0
            ],
            "tag": null,
            "to": [
              20.0,
              20.0
            ],
            "type": "ToPoint",
            "units": {
              "type": "Mm"
            }
          },
          {
            "__geoMeta": {
              "id": "[uuid]",
              "sourceRange": []
            },
            "from": [
              20.0,
              20.0
            ],
            "tag": null,
            "to": [
              20.0,
              -20.0
            ],
            "type": "ToPoint",
            "units": {
              "type": "Mm"
            }
          },
          {
            "__geoMeta": {
              "id": "[uuid]",
              "sourceRange": []
            },
            "from": [
              20.0,
              -20.0
            ],
            "tag": null,
            "to": [
              -20.0,
              -20.0
            ],
            "type": "ToPoint",
            "units": {
              "type": "Mm"
            }
          },
          {
            "__geoMeta": {
              "id": "[uuid]",
              "sourceRange": []
            },
            "from": [
              -20.0,
              -20.0
            ],
            "tag": null,
            "to": [
              -20.0,
              -20.0
            ],
            "type": "ToPoint",
            "units": {
              "type": "Mm"
            }
          }
        ],
        "on": {
          "type": "plane",
          "id": "[uuid]",
          "artifactId": "[uuid]",
          "value": "XY",
          "origin": {
            "x": 0.0,
            "y": 0.0,
            "z": 0.0,
            "units": {
              "type": "Mm"
            }
          },
          "xAxis": {
            "x": 1.0,
            "y": 0.0,
            "z": 0.0,
            "units": {
              "type": "Mm"
            }
          },
          "yAxis": {
            "x": 0.0,
            "y": 1.0,
            "z": 0.0,
            "units": {
              "type": "Mm"
            }
          },
          "zAxis": {
            "x": 0.0,
            "y": 0.0,
            "z": 1.0,
            "units": {
              "type": "Mm"
            }
          }
        },
        "start": {
          "from": [
            -20.0,
            -20.0
          ],
          "to": [
            -20.0,
            -20.0
          ],
          "units": {
            "type": "Mm"
          },
          "tag": null,
          "__geoMeta": {
            "id": "[uuid]",
            "sourceRange": []
          }
        },
        "artifactId": "[uuid]",
        "originalId": "[uuid]",
        "units": {
          "type": "Mm"
        }
      },
      "height": 40.0,
      "startCapId": "[uuid]",
      "endCapId": "[uuid]",
      "units": {
        "type": "Mm"
      }
    }
  }
}