modeling-app/rust/kcl-lib/src/test_server.rs

//! Types used to send data to the test server.

use std::path::PathBuf;

use crate::{
    engine::new_zoo_client,
    errors::ExecErrorWithState,
    execution::{EnvironmentRef, ExecState, ExecutorContext, ExecutorSettings},
    settings::types::UnitLength,
    ConnectionError, ExecError, KclError, KclErrorWithOutputs, Program,
};
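
/// The payload a client sends to the test server: the KCL source to execute,
/// plus an optional test name. A minimal round-trip sketch (hypothetical;
/// assumes `serde_json` is available and that this module is exported as
/// `kcl_lib::test_server`):
///
/// ```no_run
/// use kcl_lib::test_server::RequestBody;
///
/// let body = RequestBody {
///     kcl_program: "myVar = 5".to_owned(),
///     test_name: "smoke".to_owned(),
/// };
/// // Serializes to {"kcl_program":"myVar = 5","test_name":"smoke"}.
/// // `test_name` may be omitted on the wire thanks to #[serde(default)].
/// let json = serde_json::to_string(&body).unwrap();
/// let back: RequestBody = serde_json::from_str(&json).unwrap();
/// ```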
#[derive(serde::Deserialize, serde::Serialize)]
pub struct RequestBody {
    pub kcl_program: String,
    #[serde(default)]
    pub test_name: String,
}

/// Executes a kcl program and takes a snapshot of the result.
/// Returns the decoded snapshot image.
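///
/// A minimal usage sketch (hypothetical; assumes a tokio runtime, valid Zoo
/// credentials in the environment, and that `UnitLength` has an `Mm` variant):
///
/// ```no_run
/// # async fn demo() -> Result<(), Box<dyn std::error::Error>> {
/// use kcl_lib::{settings::types::UnitLength, test_server::execute_and_snapshot};
///
/// let img = execute_and_snapshot("myVar = 5", UnitLength::Mm, None).await?;
/// img.save("snapshot.png")?; // `img` is an `image::DynamicImage`
/// # Ok(())
/// # }
/// ```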
pub async fn execute_and_snapshot(
    code: &str,
    units: UnitLength,
    current_file: Option<PathBuf>,
) -> Result<image::DynamicImage, ExecError> {
    let ctx = new_context(units, true, current_file).await?;
    let program = Program::parse_no_errs(code).map_err(KclErrorWithOutputs::no_outputs)?;
    let res = do_execute_and_snapshot(&ctx, program)
        .await
        .map(|(_, _, snap)| snap)
        .map_err(|err| err.error);
    ctx.close().await;
    res
}

/// Executes a pre-parsed kcl program and takes a snapshot of the result.
/// Returns the final execution state, the environment reference, and the
/// decoded snapshot image.
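///
/// A minimal usage sketch (hypothetical; assumes a tokio runtime and valid
/// Zoo credentials), showing how a pre-parsed [`Program`] is passed in:
///
/// ```no_run
/// # async fn demo() -> Result<(), Box<dyn std::error::Error>> {
/// use kcl_lib::{settings::types::UnitLength, test_server::execute_and_snapshot_ast, Program};
///
/// let program = Program::parse_no_errs("myVar = 5")?;
/// let (_exec_state, _env_ref, img) = execute_and_snapshot_ast(program, UnitLength::Mm, None).await?;
/// img.save("snapshot.png")?;
/// # Ok(())
/// # }
/// ```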
pub async fn execute_and_snapshot_ast(
    ast: Program,
    units: UnitLength,
    current_file: Option<PathBuf>,
) -> Result<(ExecState, EnvironmentRef, image::DynamicImage), ExecErrorWithState> {
    let ctx = new_context(units, true, current_file).await?;
    let res = do_execute_and_snapshot(&ctx, ast).await;
    ctx.close().await;
    res
}
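
/// Executes a kcl program without authentication and takes a snapshot of the
/// result. Returns the decoded snapshot image and the environment reference.
///
/// A minimal usage sketch (hypothetical; assumes a tokio runtime). The
/// unauthenticated client is pinned to the production API, so this is mainly
/// useful for asserting that unauthenticated calls fail the same way they
/// would in prod:
///
/// ```no_run
/// # async fn demo() -> Result<(), Box<dyn std::error::Error>> {
/// use kcl_lib::{settings::types::UnitLength, test_server::execute_and_snapshot_no_auth};
///
/// let result = execute_and_snapshot_no_auth("myVar = 5", UnitLength::Mm, None).await;
/// assert!(result.is_err(), "unauthenticated execution should be rejected");
/// # Ok(())
/// # }
/// ```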
pub async fn execute_and_snapshot_no_auth(
    code: &str,
    units: UnitLength,
    current_file: Option<PathBuf>,
) -> Result<(image::DynamicImage, EnvironmentRef), ExecError> {
    let ctx = new_context(units, false, current_file).await?;
    let program = Program::parse_no_errs(code).map_err(KclErrorWithOutputs::no_outputs)?;
    let res = do_execute_and_snapshot(&ctx, program)
        .await
        .map(|(_, env_ref, snap)| (snap, env_ref))
        .map_err(|err| err.error);
    ctx.close().await;
    res
}
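
/// Shared body of the `execute_and_snapshot*` helpers: runs the program,
/// fails if any error-severity diagnostic was recorded, then asks the engine
/// for a PNG snapshot and decodes it.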
async fn do_execute_and_snapshot(
    ctx: &ExecutorContext,
    program: Program,
) -> Result<(ExecState, EnvironmentRef, image::DynamicImage), ExecErrorWithState> {
    let mut exec_state = ExecState::new(&ctx.settings);
    let result = ctx
        .run_with_ui_outputs(&program, &mut exec_state)
        .await
        .map_err(|err| ExecErrorWithState::new(err.into(), exec_state.clone()))?;
    for e in exec_state.errors() {
        if e.severity.is_err() {
            return Err(ExecErrorWithState::new(
                KclErrorWithOutputs::no_outputs(KclError::Semantic(e.clone().into())).into(),
                exec_state.clone(),
            ));
        }
    }
    let snapshot_png_bytes = ctx
        .prepare_snapshot()
        .await
        .map_err(|err| ExecErrorWithState::new(err, exec_state.clone()))?
        .contents
        .0;
    // Decode the snapshot, return it.
    let img = image::ImageReader::new(std::io::Cursor::new(snapshot_png_bytes))
        .with_guessed_format()
        .map_err(|e| ExecError::BadPng(e.to_string()))
        .and_then(|x| x.decode().map_err(|e| ExecError::BadPng(e.to_string())))
        .map_err(|err| ExecErrorWithState::new(err, exec_state.clone()))?;
    ctx.close().await;
    Ok((exec_state, result.0, img))
}
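
/// Builds an [`ExecutorContext`] connected to the Zoo engine. With
/// `with_auth: false`, the client gets a deliberately bad token and is pinned
/// to the production API so that unauthenticated tests fail the same way they
/// would in prod.
///
/// A minimal usage sketch (hypothetical; assumes a tokio runtime and valid
/// Zoo credentials in the environment):
///
/// ```no_run
/// # async fn demo() -> Result<(), Box<dyn std::error::Error>> {
/// use kcl_lib::{settings::types::UnitLength, test_server::new_context};
///
/// let ctx = new_context(UnitLength::Mm, true, None).await?;
/// // ... execute programs against `ctx` ...
/// ctx.close().await;
/// # Ok(())
/// # }
/// ```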
pub async fn new_context(
    units: UnitLength,
    with_auth: bool,
    current_file: Option<PathBuf>,
) -> Result<ExecutorContext, ConnectionError> {
    let mut client = new_zoo_client(if with_auth { None } else { Some("bad_token".to_string()) }, None)
        .map_err(ConnectionError::CouldNotMakeClient)?;
    if !with_auth {
        // Use prod, don't override based on env vars.
        // We do this so even in the engine repo, tests that need to run with
        // no auth can fail in the same way as they would in prod.
        client.set_base_url("https://api.zoo.dev".to_string());
    }

    let mut settings = ExecutorSettings {
        units,
        highlight_edges: true,
        enable_ssao: false,
        show_grid: false,
        replay: None,
        project_directory: None,
        current_file: None,
    };
    if let Some(current_file) = current_file {
        settings.with_current_file(current_file);
    }
    let ctx = ExecutorContext::new(&client, settings)
        .await
        .map_err(ConnectionError::Establishing)?;
    Ok(ctx)
}