Compare commits

...

29 Commits

Author SHA1 Message Date
727a7e87f1 Merge branch 'franknoirot/adhoc/appearance-tweaks-modeling-view' into franknoirot/adhoc/split-sidebar 2025-04-21 13:51:37 -04:00
00b779a767 Merge branch 'main' into franknoirot/adhoc/appearance-tweaks-modeling-view 2025-04-17 21:48:36 -04:00
bd4bad0020 allow sending async commands to engine (#6342)
* start of async

Signed-off-by: Jess Frazelle <github@jessfraz.com>

check at end if the async commands completed

Signed-off-by: Jess Frazelle <github@jessfraz.com>

run at the end of inner_run

Signed-off-by: Jess Frazelle <github@jessfraz.com>

set import as async

Signed-off-by: Jess Frazelle <github@jessfraz.com>

updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

add to the wasm side

Signed-off-by: Jess Frazelle <github@jessfraz.com>

updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

fmt

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fire

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* flake

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fixup for awaiting on import

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fix mock

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fix mock

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fixes

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* add a test where we import then do a bunch of other stuff

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fixup to see

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fixups

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fix tests

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* cross platform time

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fixes

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* another appearance tests

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* new docs and tests

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* dont loop so tight

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fixes

Signed-off-by: Jess Frazelle <github@jessfraz.com>

---------

Signed-off-by: Jess Frazelle <github@jessfraz.com>
2025-04-17 17:22:19 -07:00
124ecc11ba Merge branch 'main' into franknoirot/adhoc/appearance-tweaks-modeling-view 2025-04-17 15:39:00 -04:00
0b9889e313 Double the timeout in sketch editing test (#6376) 2025-04-17 14:32:11 -04:00
ea330c3b2e make React happy with clipRule 2025-04-17 14:19:31 -04:00
dfe7a32f1e Remove all use of bold mono font
A few mono font points are left, mostly for numbers which look better in
monospace.
2025-04-17 14:12:43 -04:00
f9fbaa2298 Add files via upload (#6227)
* Add files via upload

Adding parametric pc fan and bottle

* Update and rename globals.kcl to parameters.kcl

* Update fan-housing.kcl

* Update fan-housing.kcl

* Update fan.kcl

* Update motor.kcl

* Update parameters.kcl

* Update kcl-samples simulation test output

* Update main.kcl

avoiding fn imports

* Update kcl-samples simulation test output

* remove functions

* angledLine kwargs

* tangentalArc kwargs

* Update kcl-samples simulation test output

* Update housing middle

more tweaks because I just can't help myself

* Update kcl-samples simulation test output

* Update kcl-samples simulation test output

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: jgomez720 <114548659+jgomez720@users.noreply.github.com>
2025-04-17 17:46:56 +00:00
3600354c13 Break out ActionSidebar to right side of screen 2025-04-17 12:54:25 -04:00
1f705ca6df Break out ModelingSidebarButton to its own file 2025-04-17 12:29:24 -04:00
5a1b0777b7 Move getPlatformString out to a util 2025-04-17 12:17:52 -04:00
6f2d127c4f Assemblies: Set translate and rotate via point-and-click (#6167)
* WIP: Add point-and-click Import for geometry
Will eventually fix #6120
Right now the whole loop is there but the codemod doesn't work yet

* Better pathToNOde, log on non-working cm dispatch call

* Add workaround to updateModelingState not working

* Back to updateModelingState with a skip flag

* Better todo

* Change working from Import to Insert, cleanups

* Sister command in kclCommands to populate file options

* Improve path selector

* Unsure: move importAstMod to kclCommands onSubmit 😶

* Add e2e test

* Clean up for review

* Add native file menu entry and test

* No await yo lint said so

* WIP: UX improvements around foreign file imports
Fixes #6152

* WIP: Set translate and rotate via point-and-click on imports. Boilerplate code
Will eventually close #6020

* Full working loop of rotate and translate pipe mutation, including edits, only on module imports. VERY VERBOSE

* Add first e2e test for set transform. Bunch of caveats listed as TODOs

* @lrev-Dev's suggestion to remove a comment

Co-authored-by: Kurt Hutten <k.hutten@protonmail.ch>

* Update to scene.settled(cmdBar)

* Add partNNN default name for alias

* Lint

* Lint

* Fix unit tests

* Add sad path insert test
Thanks @Irev-Dev for the suggestion

* Add step insert test

* Lint

* Add test for second foreign import thru file tree click

* WIP: Add point-and-click Load to copy files from outside the project into the project
Towards #6210

* Move Insert button to modeling toolbar, update menus and toolbars

* Add default value for local name alias

* Aligning tests

* Fix tests

* Add padding for filenames starting with a digit

* Lint

* Lint

* Update snapshots

* Merge branch 'main' into pierremtb/issue6210-Add-point-and-click-Load-to-copy-files-from-outside-the-project-into-the-project

* Add disabled transform subbutton

* Allow start of Transform flow from toolbar with selection

* Merge kcl-samples and local disk load into one 'Load external model' command

* Fix em tests

* Fix test

* Add test for file pick import, better input

* Fix non .kcl loading

* Lint

* Update snapshots

* Fix issue leading to test failure

* Fix clone test

* Add note

* Fix nested clone issue

* Clean up for review

* Add valueSummary for path

* Fix test after path change

* Clean up for review

* Support much wider range for transform

* Set display names

* Bug fixed itself moment...

* Add test for extrude tranform

* Oops missed a thing

* Clean up selection arg

* More tests incl for variable stuff

* Fix imports

* Add supportsTransform: true on all solids returning operations

* Fix edit flow on variables, add test

* Split transform command into translate and rotate

* Clean up and comment

* Clean up operations.ts

* Add comment

* Improve assemblies test

* Support more things

* Typo

* Fix test after unit change on import

* Last clean up for review

* Fix remaining test

---------

Co-authored-by: Kurt Hutten <k.hutten@protonmail.ch>
2025-04-17 15:44:31 +00:00
056a4d4a22 [FIX]: "Connecting to engine" overlay shows too often (#6350)
* Use the engineStreamState that's available right there silly

* fix lint

* Just look at playing/paused

---------

Co-authored-by: Jace Browning <jacebrowning@gmail.com>
2025-04-17 12:49:31 +00:00
bab79331cb #6367 Tangent snapping improvements (#6369)
* fix snapping line being culled

* make snap line more grayed out

* make snapping tolerance smaller
2025-04-17 08:26:05 -04:00
16bbc970ae fix sketch flash part 2 (#6366) 2025-04-17 08:15:54 -04:00
0f1cff316c add toast for selections we don't recognise (#6370)
* ad toast for selections we don't recognise

* remove log
2025-04-17 21:32:13 +10:00
938a2bae13 Fix to not duplicate arg labels on constrained sketches (#6337) 2025-04-17 07:16:55 +00:00
3324835b72 ci: Fix so that just commands regenerate ast output (#6338)
* Fix so that just commands regenerate ast output

* Update just command to include manifest

* Fix overwrite-sim-test kcl_samples to generate manifest

This is the command used in CI.
2025-04-17 06:03:58 +00:00
1628ea86b2 Remove pixel color checks (#6365) 2025-04-17 14:25:45 +10:00
be119248a6 put execution benchmarks behind flag (#6364)
Signed-off-by: Jess Frazelle <github@jessfraz.com>
2025-04-16 21:23:30 -07:00
ac75181f7f fix bug and remove flash in sketch mode (#6346)
* fix bug and remove flash

* add test

* fix tests

* fix tests
2025-04-17 10:10:27 +10:00
d9fe78171f parallelized modules from kcl (#6206)
* wip

* sketch a bit more; going to pull this out of tests next

* wip

* lock start things

* this was a bad idea

* Revert "this was a bad idea"

This reverts commit a2092e7ed6.

* prepare prelude before spawning

* error

* poop

* yike

* :(

* ok

* Reapply "this was a bad idea"

This reverts commit fafdf41093.

* chip away more

* man this is bad

* fix rebase add feature flag

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* get rid of execution kind

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* clippy

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* logs

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* no extra executes

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* race w batch

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* cluppy

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* no printlns

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* no printlns

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fix source ranges

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* batch shit

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fixes

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fix

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fix some bugs

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* fix error

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* cut 1

* preserve mem

* re-ad deep_clone

the helper we were calling was pushing a new call, which was hanging
out. we can skip the middleman since we already have something properly
prepared, just without a stdlib in some cases.

* skip non-kcl

* clean up source range bug

* error message changed

the uuids also changed because the error is hit before execute even
starts.

* typo

* rensnapshot a few

* order things

* MAYBE REVERT LATER:

attempt at an ordering

* snapsnap

* Revert "snapsnap"

This reverts commit 7350b32c7d.

* Revert "MAYBE REVERT LATER:"

This reverts commit ab49f3e85f.

* ugh

* poop

* poop2

* lint

* tranche 1

* more

* more snaps

* snap

* more

* update

* MAYBE REVERT THIS

* cache multi-file

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* addd tests

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* set to false

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* add test outputs

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* clippy

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* kcl-py-bindings uses carwheel

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* update snapshots

Signed-off-by: Jess Frazelle <github@jessfraz.com>

* updates

Signed-off-by: Jess Frazelle <github@jessfraz.com>

---------

Signed-off-by: Jess Frazelle <github@jessfraz.com>
Co-authored-by: Paul R. Tagliamonte <paul@zoo.dev>
Co-authored-by: Paul Tagliamonte <paultag@gmail.com>
2025-04-16 11:52:14 -07:00
5586646a60 Fix insta version in Cargo.toml to match lock file (#6348)
Cargo decided to downgrade `windows-sys` for some reason.
2025-04-16 17:50:39 +00:00
e955e783f4 KCL: Error when user sets a keyword arg multiple times (#6339)
Previously, KCL would silently ignore any duplicated keyword args, setting the parameter to one arbitrarily-chosen argument. Now this is instead a parse error. 

I've tested that
1. The error message makes sense
2. The error is on a reasonable part of the source code
3. The error doesn't prevent other kinds of parse errors being picked up later

Thanks for noticing this one Frank!
2025-04-16 11:18:45 -05:00
9162c4d723 Split out the contributor guide (#6347) 2025-04-16 15:48:36 +00:00
f345a07bcd Update helix e2e test to account for 90deg engine change (#6341)
The engine fixed the start angle for helices, but our e2e test has a
pixel check so it broke. I made the test change the `startAngle` by
`-90` rather than try to hunt around for a new working pixel value.
2025-04-16 00:09:58 +00:00
f5e3b4bbe7 Update helix snapshots (#6340)
Engine fixed a bug where helices were starting at the wrong position. Requires updating the snapshots.
2025-04-15 17:15:37 -05:00
941911ed30 docs: Fix syntax error and deprecated string planes in types.md (#6332) 2025-04-15 17:42:24 +00:00
599bd2d958 Switch to a 6-core profile for mac (#6323)
* Switch to a 6-core profile for mac

* Bump workers percentage to optimize for CPU count (#6329)

* Bump macOS workers percentage to optimize for CPU count

* Handle floor math

---------

Co-authored-by: Jace Browning <jacebrowning@gmail.com>
2025-04-15 13:07:16 -04:00
305 changed files with 5711382 additions and 9895 deletions


@ -50,13 +50,12 @@ jobs:
- name: Build the benchmark target(s)
run: |
cd rust
cargo codspeed build --measurement-mode walltime
cargo codspeed build
- name: Run the benchmarks
uses: CodSpeedHQ/action@v3
with:
working-directory: rust
run: cargo codspeed run
token: ${{ secrets.CODSPEED_TOKEN }}
mode: walltime
env:
KITTYCAD_API_TOKEN: ${{ secrets.KITTYCAD_API_TOKEN }}


@ -285,7 +285,7 @@ jobs:
# TODO: enable namespace-profile-windows-latest once available
os:
- "runs-on=${{ github.run_id }}/family=i7ie.2xlarge/image=ubuntu22-full-x64"
- namespace-profile-macos-8-cores
- namespace-profile-macos-6-cores
- windows-latest-8-cores
shardIndex: [1, 2, 3, 4]
shardTotal: [4]
@ -295,7 +295,7 @@ jobs:
isScheduled:
- ${{ github.event_name == 'schedule' }}
exclude:
- os: namespace-profile-macos-8-cores
- os: namespace-profile-macos-6-cores
isScheduled: true
- os: windows-latest-8-cores
isScheduled: true

CONTRIBUTING.md (new file)

@ -0,0 +1,432 @@
# Contributor Guide
## Running a development build
Install a node version manager such as [fnm](https://github.com/Schniz/fnm?tab=readme-ov-#installation).
On Windows, it's also recommended to [upgrade your PowerShell version](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.5#winget); we're using version 7.
Then in the repo run the following to install and use the node version specified in `.nvmrc`. You might need to specify your processor architecture with `--arch arm64` or `--arch x64` if it's not autodetected.
```
fnm install --corepack-enabled
fnm use
```
Install the NPM dependencies with:
```
npm install
```
This project uses a lot of Rust compiled to [WASM](https://webassembly.org/) within it. We have package scripts to run rustup, see `package.json` for reference:
```
# macOS/Linux
npm run install:rust
npm run install:wasm-pack:sh
# Windows
npm run install:rust:windows
npm run install:wasm-pack:cargo
```
Then to build the WASM layer, run:
```
# macOS/Linux
npm run build:wasm
# Windows
npm run build:wasm:windows
```
Alternatively, if you have the `gh` CLI installed, you can download the latest `main` WASM bundle instead. Note that on Windows you need to associate `.ps1` files with PowerShell (via the right-click menu, selecting `C:\Program Files\PowerShell\7\pwsh.exe`), and you can install tools like `gh` via `npm run install:tools:windows`.
```
# macOS/Linux
npm run fetch:wasm
# Windows
npm run fetch:wasm:windows
```
That will build the WASM binary and put it in the `public` dir (which is gitignored).
Finally, to run the web app only, run:
```
npm start
```
If you're not a Zoo employee you won't be able to access the dev environment, so copy everything from `.env.production` into `.env.development.local` to point at production instead. Then, when you navigate to `localhost:3000`, the easiest way to sign in is to paste `localStorage.setItem('TOKEN_PERSIST_KEY', "your-token-from-https://zoo.dev/account/api-tokens")` into the browser console, replacing the placeholder with a real token from https://zoo.dev/account/api-tokens, and then navigate to `localhost:3000` again. Note that navigating to `localhost:3000/signin` removes your token, so you will need to set it again.
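For example, in the browser devtools console at `localhost:3000` (the token value is a placeholder):
```
localStorage.setItem('TOKEN_PERSIST_KEY', "your-token-from-https://zoo.dev/account/api-tokens")
```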
### Development environment variables
The Copilot LSP plugin in the editor requires a Zoo API token to run. In production, we authenticate this with a token via cookie in the browser and device auth token in the desktop environment, but this token is inaccessible in the dev browser version because the cookie is considered "cross-site" (from `localhost` to `dev.zoo.dev`). There is an optional environment variable called `VITE_KC_DEV_TOKEN` that you can populate with a dev token in a `.env.development.local` file (so it isn't checked into Git); when set, the LSP service uses that token instead of the other methods.
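For example, a minimal `.env.development.local` sketch (the token value is a placeholder):
```
# .env.development.local (not checked into Git)
VITE_KC_DEV_TOKEN=your-token-from-dev.zoo.dev/account/api-tokens
```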
### Developing in Chrome
Chrome is in the process of rolling out a new default which
[blocks Third-Party Cookies](https://developer.chrome.com/en/docs/privacy-sandbox/third-party-cookie-phase-out/).
If you're having trouble logging into the `modeling-app`, you may need to
enable third-party cookies. You can enable third-party cookies by clicking on
the eye with a slash through it in the URL bar, and clicking on "Enable
Third-Party Cookies".
## Desktop
To spin up the desktop app, `npm install` and `npm run build:wasm` need to have been run beforehand, then:
```
npm run tron:start
```
This will start the application and hot-reload on changes.
Devtools can be opened with the usual Command-Option-I (macOS) or Ctrl-Shift-I (Linux and Windows).
To package the app for your platform with electron-builder, run `npm run tronb:package:dev` (or `npm run tronb:package:prod` to point to the .env.production variables).
## Checking out commits / Bisecting
Which setup commands are one-off, and which need to be run every time?
The following will need to be run when checking out a new commit and guarantees the build is not stale:
```bash
npm install
npm run build:wasm
npm start # or npm run build:local && npm run serve for slower but more production-like build
```
## Before submitting a PR
Before you submit a contribution PR to this repo, please ensure that:
- There is a corresponding issue for the changes you want to make, so that discussion of approach can be had before work begins.
- You have separated out refactoring commits from feature commits as much as possible
- You have run all of the following commands locally:
- `npm run fmt`
- `npm run tsc`
- `npm run test`
- Here they are all together: `npm run fmt && npm run tsc && npm run test`
## Release a new version
#### 1. Create a 'Cut release $VERSION' issue
It will be used to document changelog discussions and release testing.
https://github.com/KittyCAD/modeling-app/issues/new
#### 2. Push a new tag
Create a new tag and push it to the repo. The `semantic-release.sh` script will automatically bump the minor version, which is the part we bump most often; for instance, going from `v0.27.0` to `v0.28.0`.
```
VERSION=$(./scripts/semantic-release.sh)
git tag $VERSION
git push origin --tags
```
This will trigger the `build-apps` workflow, set the version, build & sign the apps, and generate release files as well as updater-test artifacts.
The workflow should be listed right away [in this list](https://github.com/KittyCAD/modeling-app/actions/workflows/build-apps.yml?query=event%3Apush).
#### 3. Manually test artifacts
##### Release builds
The release builds can be found under the `out-{arch}-{platform}` zip files, at the very bottom of the `build-apps` summary page for the workflow (triggered by the tag in 2.).
Manually test against this [list](https://github.com/KittyCAD/modeling-app/issues/3588) across Windows, macOS, and Linux, posting results as comments in the issue.
##### Updater-test builds
The other `build-apps` output in the release `build-apps` workflow (triggered by 2.) is `updater-test-{arch}-{platform}`. It's a semi-automated process: for macOS, Windows, and Linux, download the corresponding updater-test artifact file, install the app, run it, expect an updater prompt to a dummy v0.255.255, install it and check that the app comes back at that version.
The only difference with these builds is that they point to a different update location on the release bucket, with this dummy v0.255.255 always available. This helps ensure that the version we release will be able to update to the next one available.
If the prompt doesn't show up, start the app from the command line to grab the electron-updater logs. This is likely an issue with the current build (or with the updater-test location in the storage bucket) that needs addressing.
```
# Windows (PowerShell)
& 'C:\Program Files\Zoo Design Studio\Zoo Design Studio.exe'
# macOS
/Applications/Zoo\ Modeling\ App.app/Contents/MacOS/Zoo\ Modeling\ App
# Linux
./Zoo Design Studio-{version}-{arch}-linux.AppImage
```
#### 4. Publish the release
Head over to https://github.com/KittyCAD/modeling-app/releases/new, pick the newly created tag and type it in the _Release title_ field as well.
Hit _Generate release notes_ as a starting point to discuss the changelog in the issue. Once done, make sure _Set as the latest release_ is checked, and hit _Publish release_.
A new `publish-apps-release` will kick in and you should be able to find it [here](https://github.com/KittyCAD/modeling-app/actions?query=event%3Arelease). On success, the files will be uploaded to the public bucket as well as to the GitHub release, and the announcement on Discord will be sent.
#### 5. Close the issue
If everything is well and the release is out to the public, the issue tracking the release shall be closed.
## Fuzzing the parser
Make sure you install cargo fuzz:
```bash
$ cargo install cargo-fuzz
```
```bash
$ cd rust/kcl-lib
# list the fuzz targets
$ cargo fuzz list
# run the parser fuzzer
$ cargo +nightly fuzz run parser
```
For more information on fuzzing you can check out
[this guide](https://rust-fuzz.github.io/book/cargo-fuzz.html).
## Tests
### Playwright tests
You will need a `./e2e/playwright/playwright-secrets.env` file:
```bash
$ touch ./e2e/playwright/playwright-secrets.env
$ cat ./e2e/playwright/playwright-secrets.env
token=<dev.zoo.dev/account/api-tokens>
snapshottoken=<zoo.dev/account/api-tokens>
```
or use `export` to set the environment variables `token` and `snapshottoken`.
#### Snapshot tests (Google Chrome on Ubuntu only)
Only Ubuntu and Google Chrome are supported for the set of tests evaluating screenshot snapshots.
If you don't run Ubuntu locally or in a VM, you may use a GitHub Codespace.
```
npm run playwright -- install chrome
npm run test:snapshots
```
You may use `-- --update-snapshots` as needed.
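For example, to re-record the snapshots after an intended visual change:
```
npm run test:snapshots -- --update-snapshots
```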
#### Electron flow tests (Chromium on Ubuntu, macOS, Windows)
```
npm run playwright -- install chromium
npm run test:playwright:electron:local
```
You may use `-- -g "my test"` to match specific test titles, or `-- path/to/file.spec.ts` for a test file.
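For example (the test title and spec file path below are placeholders):
```
# run only tests whose title matches
npm run test:playwright:electron:local -- -g "can sketch a line"
# run a single spec file
npm run test:playwright:electron:local -- e2e/playwright/some-test.spec.ts
```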
#### Debugger
If you want a debugger, I recommend using VSCode and the `playwright` extension; the command-line debugger is cruder, stepping into every function call, which is annoying.
With the extension you can set a breakpoint after `waitForDefaultPlanesVisibilityChange` in order to skip app loading, then the vscode debugger's "step over" is much better for being able to stay at the right level of abstraction as you debug the code.
If you want to limit the run to a single browser, use `--project="webkit"`, `--project="firefox"`, or `--project="Google Chrome"`, or comment out browsers in `playwright.config.ts`.
Note that Chromium has encoder compatibility issues, which is why we test against the branded 'Google Chrome'.
You may consider using the VSCode extension; it's useful for running individual tests, but for some reason the "record a test" feature is locked to Chromium, which we can't use. A workaround is to use the CLI: `npm run playwright codegen -b wk --load-storage ./store localhost:3000`
<details>
<summary>
Where `./store` should look like this
</summary>
```JSON
{
"cookies": [],
"origins": [
{
"origin": "http://localhost:3000",
"localStorage": [
{
"name": "store",
"value": "{\"state\":{\"openPanes\":[\"code\"]},\"version\":0}"
},
{
"name": "persistCode",
"value": ""
},
{
"name": "TOKEN_PERSIST_KEY",
"value": "your-token"
}
]
}
]
}
```
</details>
However, because many of our tests involve clicking in the stream at specific locations, its code-gen produces `await page.locator('video').click();` when really we need to use a pixel coordinate, so I think it's of limited use.
### Unit and component tests
If you haven't already, run the following:
```
npm install
npm run build:wasm
npm start
```
and finally:
```
npm run test:unit
```
For individual testing:
```
npm run test abstractSyntaxTree -t "unexpected closed curly brace" --silent=false
```
Which will run our suite of [Vitest unit](https://vitest.dev/) and [React Testing Library E2E](https://testing-library.com/docs/react-testing-library/intro) tests, in interactive mode by default.
### Rust tests
**Dependencies**
- `KITTYCAD_API_TOKEN`
- `cargo-nextest`
- `just`
#### Setting KITTYCAD_API_TOKEN
Use a production zoo.dev token and set this environment variable before running the tests.
#### Installing cargo-nextest
```
cd rust
cargo search cargo-nextest
cargo install cargo-nextest
```
#### just
install [`just`](https://github.com/casey/just?tab=readme-ov-file#pre-built-binaries)
#### Running the tests
```bash
# With just
# Make sure KITTYCAD_API_TOKEN=<prod zoo.dev token> is set
# Make sure you installed cargo-nextest
# Make sure you installed just
cd rust
just test
```
```bash
# Without just
# Make sure KITTYCAD_API_TOKEN=<prod zoo.dev token> is set
# Make sure you installed cargo-nextest
cd rust
export RUST_BACKTRACE="full" && cargo nextest run --workspace --test-threads=1
```
We recommend using [nextest](https://nexte.st/) to run the Rust tests (it's faster and is used in CI). Once installed, run the tests using the following, where `XXX` is an API token from the production engine (NOT the dev environment):
```
cd rust
KITTYCAD_API_TOKEN=XXX cargo nextest run
```
### Mapping CI CD jobs to local commands
When you see CI CD jobs fail, you may wonder three things:
- Do I have a bug in my code?
- Is the test flaky?
- Is there a bug in `main`?
To answer these questions the following commands will give you confidence to locate the issue.
#### Static Analysis
Part of the CI CD pipeline performs static analysis on the code. Use the following commands to mimic the CI CD jobs.
The following set of commands should get us closer to one and done commands to instantly retest issues.
```
npm run test-setup
```
> Gotcha, are packages up to date and is the wasm built?
```
npm run tsc
npm run fmt:check
npm run lint
npm run test:unit:local
```
> Gotcha: Our unit tests have integration tests in them. You need to run a localhost server to run the unit tests.
#### E2E Tests
**Playwright Electron**
These E2E tests run in Electron. Some tests are skipped if they are run in a Windows, Linux, or macOS environment; we use Playwright tags to implement this test skipping.
```
npm run test:playwright:electron:local
npm run test:playwright:electron:windows:local
npm run test:playwright:electron:macos:local
npm run test:playwright:electron:ubuntu:local
```
> Why does it say local? The CI CD commands that run in the pipeline cannot be run locally; a single command will not properly set up the testing environment locally.
#### Some notes on CI
The tests are broken into snapshot tests and non-snapshot tests, and they run in that order. They automatically commit new snapshots, so if you see an image commit, check that it was an intended change. If we have non-determinism in the snapshots such that they are always committing new images, hopefully this annoyance makes us fix them ASAP; if you notice this happening, let Kurt know. For the odd occasion, `git reset --hard HEAD~ && git push -f` is your friend.
How should you interpret failing Playwright tests?
If your tests fail, click through to the action and check whether they failed on a line that includes `await page.getByTestId('loading').waitFor({ state: 'detached' })`; this means the test failed because the stream never started. It's your choice whether to re-run the test or ignore the failure.
We run on Ubuntu and macOS because Safari doesn't work on Linux due to the dreaded "no RTCPeerConnection variable" error. Linux runs first and then macOS, for the same reason that we limit the number of parallel tests to 1: we limit stream connections per user, so tests would start failing if we let them run together.
If something fails on CI you can download the artifact, unzip it and then open `playwright-report/data/<UUID>.zip` with https://trace.playwright.dev/ to see what happened.
#### Getting started writing a playwright test in our app
Besides following the instructions above and using the Playwright docs, our app is weird because of the whole stream thing, which means our testing is weird. Because we've only just figured this stuff out, and the docs might therefore go stale quickly, here's a 15-minute video tutorial:
https://github.com/KittyCAD/modeling-app/assets/29681384/6f5e8e85-1003-4fd9-be7f-f36ce833942d
<details>
<summary>
PS: for the debug panel, the following JSON is useful for snapping the camera
</summary>
```JSON
{"type":"modeling_cmd_req","cmd_id":"054e5472-e5e9-4071-92d7-1ce3bac61956","cmd":{"type":"default_camera_look_at","center":{"x":15,"y":0,"z":0},"up":{"x":0,"y":0,"z":1},"vantage":{"x":30,"y":30,"z":30}}}
```
</details>
### Logging
To display logging (to the terminal or console) set `ZOO_LOG=1`. This will log some warnings and simple performance metrics. To view these in test runs, use `-- --nocapture`.
To enable memory metrics, build with `--features dhat-heap`.
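For instance, a minimal sketch of a logged test run using the plain `cargo test` harness (the `-p kcl-lib` package choice is an assumption; adapt it to the crate you're testing):
```bash
cd rust
# ZOO_LOG=1 enables the logging described above; --nocapture shows it in test output
ZOO_LOG=1 KITTYCAD_API_TOKEN=XXX cargo test -p kcl-lib -- --nocapture
```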

README.md

@ -1,8 +1,8 @@
![Zoo Design Studio](/public/zma-logomark-outlined.png)
## Zoo Design Studio
# Zoo Design Studio
download at [zoo.dev/modeling-app/download](https://zoo.dev/modeling-app/download)
[zoo.dev/modeling-app](https://zoo.dev/modeling-app)
A CAD application from the future, brought to you by the [Zoo team](https://zoo.dev).
@ -23,7 +23,7 @@ Design Studio is a _hybrid_ user interface for CAD modeling. You can point-and-c
The 3D view in Design Studio is just a video stream from our hosted geometry engine. The app sends new modeling commands to the engine via WebSockets, which returns back video frames of the view within the engine.
## Tools
## Technology
- UI
- [React](https://react.dev/)
@ -38,445 +38,16 @@ The 3D view in Design Studio is just a video stream from our hosted geometry eng
- Modeling
- [KittyCAD TypeScript client](https://github.com/KittyCAD/kittycad.ts)
[Original demo video](https://drive.google.com/file/d/183_wjqGdzZ8EEZXSqZ3eDcJocYPCyOdC/view?pli=1)
## Get Started
[Original demo slides](https://github.com/user-attachments/files/19010536/demo.pdf)
We recommend downloading the latest application binary from our [releases](https://github.com/KittyCAD/modeling-app/releases) page. If you don't see your platform or architecture supported there, please file an issue.
## Get started
If you'd like to try out upcoming changes sooner, you can also download those from our [nightly releases](https://zoo.dev/modeling-app/download/nightly) page.
We recommend downloading the latest application binary from [our Releases page](https://github.com/KittyCAD/modeling-app/releases). If you don't see your platform or architecture supported there, please file an issue.
## Developing
## Running a development build
Install a node version manager such as [fnm](https://github.com/Schniz/fnm?tab=readme-ov-#installation).
On Windows, it's also recommended to [upgrade your PowerShell version](https://learn.microsoft.com/en-us/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.5#winget); we're using version 7.
Then in the repo run the following to install and use the node version specified in `.nvmrc`. You might need to specify your processor architecture with `--arch arm64` or `--arch x64` if it's not autodetected.
```
fnm install --corepack-enabled
fnm use
```
Install the NPM dependencies with:
```
npm install
```
This project uses a lot of Rust compiled to [WASM](https://webassembly.org/) within it. We have package scripts to run rustup, see `package.json` for reference:
```
# macOS/Linux
npm run install:rust
npm run install:wasm-pack:sh
# Windows
npm run install:rust:windows
npm run install:wasm-pack:cargo
```
Then to build the WASM layer, run:
```
# macOS/Linux
npm run build:wasm
# Windows
npm run build:wasm:windows
```
Alternatively, if you have the `gh` CLI installed, you can download the latest `main` WASM bundle instead. Note that on Windows you need to associate `.ps1` files with PowerShell (via the right-click menu, selecting `C:\Program Files\PowerShell\7\pwsh.exe`), and you can install tools like `gh` via `npm run install:tools:windows`.
```
# macOS/Linux
npm run fetch:wasm
# Windows
npm run fetch:wasm:windows
```
That will build the WASM binary and put it in the `public` dir (which is gitignored).
Finally, to run the web app only, run:
```
npm start
```
If you're not a Zoo employee you won't be able to access the dev environment, so copy everything from `.env.production` into `.env.development.local` to point at production instead. Then, when you navigate to `localhost:3000`, the easiest way to sign in is to paste `localStorage.setItem('TOKEN_PERSIST_KEY', "your-token-from-https://zoo.dev/account/api-tokens")` into the browser console, replacing the placeholder with a real token from https://zoo.dev/account/api-tokens, and then navigate to `localhost:3000` again. Note that navigating to `localhost:3000/signin` removes your token, so you will need to set it again.
### Development environment variables
The Copilot LSP plugin in the editor requires a Zoo API token to run. In production, we authenticate this with a token via cookie in the browser and device auth token in the desktop environment, but this token is inaccessible in the dev browser version because the cookie is considered "cross-site" (from `localhost` to `dev.zoo.dev`). There is an optional environment variable called `VITE_KC_DEV_TOKEN` that you can populate with a dev token in a `.env.development.local` file (so it isn't checked into Git); when set, the LSP service uses that token instead of the other methods.
### Developing in Chrome
Chrome is in the process of rolling out a new default which
[blocks Third-Party Cookies](https://developer.chrome.com/en/docs/privacy-sandbox/third-party-cookie-phase-out/).
If you're having trouble logging into the `modeling-app`, you may need to
enable third-party cookies. You can enable third-party cookies by clicking on
the eye with a slash through it in the URL bar, and clicking on "Enable
Third-Party Cookies".
## Desktop
To spin up the desktop app, `npm install` and `npm run build:wasm` need to have been run beforehand, then:
```
npm run tron:start
```
This will start the application and hot-reload on changes.
Devtools can be opened with the usual Command-Option-I (macOS) or Ctrl-Shift-I (Linux and Windows).
To package the app for your platform with electron-builder, run `npm run tronb:package:dev` (or `npm run tronb:package:prod` to point to the .env.production variables).
## Checking out commits / Bisecting
Which setup commands are one-off, and which need to be run every time?
The following will need to be run when checking out a new commit and guarantees the build is not stale:
```bash
npm install
npm run build:wasm
npm start # or npm run build:local && npm run serve for slower but more production-like build
```
## Before submitting a PR
Before you submit a contribution PR to this repo, please ensure that:
- There is a corresponding issue for the changes you want to make, so that discussion of approach can be had before work begins.
- You have separated out refactoring commits from feature commits as much as possible
- You have run all of the following commands locally:
- `npm run fmt`
- `npm run tsc`
- `npm run test`
- Here they are all together: `npm run fmt && npm run tsc && npm run test`
## Release a new version
#### 1. Create a 'Cut release $VERSION' issue
It will be used to document changelog discussions and release testing.
https://github.com/KittyCAD/modeling-app/issues/new
#### 2. Push a new tag
Create a new tag and push it to the repo. The `semantic-release.sh` script will automatically bump the minor version, which is the part we bump most often; for instance, going from `v0.27.0` to `v0.28.0`.
```
VERSION=$(./scripts/semantic-release.sh)
git tag $VERSION
git push origin --tags
```
This will trigger the `build-apps` workflow, set the version, build & sign the apps, and generate release files as well as updater-test artifacts.
The workflow should be listed right away [in this list](https://github.com/KittyCAD/modeling-app/actions/workflows/build-apps.yml?query=event%3Apush).
#### 3. Manually test artifacts
##### Release builds
The release builds can be found under the `out-{arch}-{platform}` zip files, at the very bottom of the `build-apps` summary page for the workflow (triggered by the tag in 2.).
Manually test against this [list](https://github.com/KittyCAD/modeling-app/issues/3588) across Windows, macOS, and Linux, posting results as comments in the issue.
##### Updater-test builds
The other `build-apps` output in the release `build-apps` workflow (triggered by 2.) is `updater-test-{arch}-{platform}`. It's a semi-automated process: for macOS, Windows, and Linux, download the corresponding updater-test artifact file, install the app, run it, expect an updater prompt to a dummy v0.255.255, install it and check that the app comes back at that version.
The only difference with these builds is that they point to a different update location on the release bucket, with this dummy v0.255.255 always available. This helps ensure that the version we release will be able to update to the next one available.
If the prompt doesn't show up, start the app from the command line to grab the electron-updater logs. This is likely an issue with the current build (or with the updater-test location in the storage bucket) that needs addressing.
```
# Windows (PowerShell)
& 'C:\Program Files\Zoo Design Studio\Zoo Design Studio.exe'
# macOS
/Applications/Zoo\ Modeling\ App.app/Contents/MacOS/Zoo\ Modeling\ App
# Linux
./Zoo Design Studio-{version}-{arch}-linux.AppImage
```
#### 4. Publish the release
Head over to https://github.com/KittyCAD/modeling-app/releases/new, pick the newly created tag and type it in the _Release title_ field as well.
Hit _Generate release notes_ as a starting point to discuss the changelog in the issue. Once done, make sure _Set as the latest release_ is checked, and hit _Publish release_.
A new `publish-apps-release` will kick in and you should be able to find it [here](https://github.com/KittyCAD/modeling-app/actions?query=event%3Arelease). On success, the files will be uploaded to the public bucket as well as to the GitHub release, and the announcement on Discord will be sent.
#### 5. Close the issue
If everything is well and the release is out to the public, the issue tracking the release shall be closed.
## Fuzzing the parser
Make sure you install cargo fuzz:
```bash
$ cargo install cargo-fuzz
```
```bash
$ cd rust/kcl-lib
# list the fuzz targets
$ cargo fuzz list
# run the parser fuzzer
$ cargo +nightly fuzz run parser
```
For more information on fuzzing you can check out
[this guide](https://rust-fuzz.github.io/book/cargo-fuzz.html).
## Tests
### Playwright tests
You will need a `./e2e/playwright/playwright-secrets.env` file:
```bash
$ touch ./e2e/playwright/playwright-secrets.env
$ cat ./e2e/playwright/playwright-secrets.env
token=<dev.zoo.dev/account/api-tokens>
snapshottoken=<zoo.dev/account/api-tokens>
```
or use `export` to set the environment variables `token` and `snapshottoken`.
#### Snapshot tests (Google Chrome on Ubuntu only)
Only Ubuntu and Google Chrome are supported for the set of tests evaluating screenshot snapshots.
If you don't run Ubuntu locally or in a VM, you may use a GitHub Codespace.
```
npm run playwright -- install chrome
npm run test:snapshots
```
You may use `-- --update-snapshots` as needed.
#### Electron flow tests (Chromium on Ubuntu, macOS, Windows)
```
npm run playwright -- install chromium
npm run test:playwright:electron:local
```
You may use `-- -g "my test"` to match specific test titles, or `-- path/to/file.spec.ts` for a test file.
#### Debugger
If you want a debugger, I recommend using VSCode and the `playwright` extension; the command-line debugger is cruder, stepping into every function call, which is annoying.
With the extension you can set a breakpoint after `waitForDefaultPlanesVisibilityChange` in order to skip app loading, then the vscode debugger's "step over" is much better for being able to stay at the right level of abstraction as you debug the code.
If you want to limit the run to a single browser, use `--project="webkit"`, `--project="firefox"`, or `--project="Google Chrome"`, or comment out browsers in `playwright.config.ts`.
Note that Chromium has encoder compatibility issues, which is why we test against the branded 'Google Chrome'.
You may consider using the VSCode extension; it's useful for running individual tests, but for some reason the "record a test" feature is locked to Chromium, which we can't use. A workaround is to use the CLI: `npm run playwright codegen -b wk --load-storage ./store localhost:3000`
<details>
<summary>
Where `./store` should look like this
</summary>
```JSON
{
"cookies": [],
"origins": [
{
"origin": "http://localhost:3000",
"localStorage": [
{
"name": "store",
"value": "{\"state\":{\"openPanes\":[\"code\"]},\"version\":0}"
},
{
"name": "persistCode",
"value": ""
},
{
"name": "TOKEN_PERSIST_KEY",
"value": "your-token"
}
]
}
]
}
```
</details>
However, because many of our tests involve clicking in the stream at specific locations, its code-gen produces `await page.locator('video').click();` when really we need to use a pixel coordinate, so I think it's of limited use.
### Unit and component tests
If you haven't already, run the following:
```
npm install
npm run build:wasm
npm start
```
and finally:
```
npm run test:unit
```
For individual testing:
```
npm run test abstractSyntaxTree -t "unexpected closed curly brace" --silent=false
```
Which will run our suite of [Vitest unit](https://vitest.dev/) and [React Testing Library E2E](https://testing-library.com/docs/react-testing-library/intro) tests, in interactive mode by default.
### Rust tests
**Dependencies**
- `KITTYCAD_API_TOKEN`
- `cargo-nextest`
- `just`
#### Setting KITTYCAD_API_TOKEN
Use a production zoo.dev token and set this environment variable before running the tests.
#### Installing cargo-nextest
```
cd rust
cargo search cargo-nextest
cargo install cargo-nextest
```
#### just
install [`just`](https://github.com/casey/just?tab=readme-ov-file#pre-built-binaries)
#### Running the tests
```bash
# With just
# Make sure KITTYCAD_API_TOKEN=<prod zoo.dev token> is set
# Make sure you installed cargo-nextest
# Make sure you installed just
cd rust
just test
```
```bash
# Without just
# Make sure KITTYCAD_API_TOKEN=<prod zoo.dev token> is set
# Make sure you installed cargo-nextest
cd rust
export RUST_BACKTRACE="full" && cargo nextest run --workspace --test-threads=1
```
We recommend using [nextest](https://nexte.st/) to run the Rust tests (it's faster and is used in CI). Once installed, run the tests using the following, where `XXX` is an API token from the production engine (NOT the dev environment):
```
cd rust
KITTYCAD_API_TOKEN=XXX cargo nextest run
```
### Mapping CI CD jobs to local commands
When you see CI CD jobs fail, you may wonder three things:
- Do I have a bug in my code?
- Is the test flaky?
- Is there a bug in `main`?
To answer these questions the following commands will give you confidence to locate the issue.
#### Static Analysis
Part of the CI CD pipeline performs static analysis on the code. Use the following commands to mimic the CI CD jobs.
The following set of commands should get us closer to one and done commands to instantly retest issues.
```
npm run test-setup
```
> Gotcha, are packages up to date and is the wasm built?
```
npm run tsc
npm run fmt:check
npm run lint
npm run test:unit:local
```
> Gotcha: Our unit tests have integration tests in them. You need to run a localhost server to run the unit tests.
#### E2E Tests
**Playwright Electron**
These E2E tests run in Electron. Some tests are skipped if they are run in a Windows, Linux, or macOS environment; we use Playwright tags to implement this test skipping.
```
npm run test:playwright:electron:local
npm run test:playwright:electron:windows:local
npm run test:playwright:electron:macos:local
npm run test:playwright:electron:ubuntu:local
```
> Why does it say local? The CI CD commands that run in the pipeline cannot be run locally; a single command will not properly set up the testing environment locally.
#### Some notes on CI
The tests are broken into snapshot tests and non-snapshot tests, and they run in that order. They automatically commit new snapshots, so if you see an image commit, check that it was an intended change. If we have non-determinism in the snapshots such that they are always committing new images, hopefully this annoyance makes us fix them ASAP; if you notice this happening, let Kurt know. For the odd occasion, `git reset --hard HEAD~ && git push -f` is your friend.
How should you interpret failing Playwright tests?
If your tests fail, click through to the action and check whether they failed on a line that includes `await page.getByTestId('loading').waitFor({ state: 'detached' })`; this means the test failed because the stream never started. It's your choice whether to re-run the test or ignore the failure.
We run on Ubuntu and macOS because Safari doesn't work on Linux due to the dreaded "no RTCPeerConnection variable" error. Linux runs first and then macOS, for the same reason that we limit the number of parallel tests to 1: we limit stream connections per user, so tests would start failing if we let them run together.
If something fails on CI you can download the artifact, unzip it and then open `playwright-report/data/<UUID>.zip` with https://trace.playwright.dev/ to see what happened.
#### Getting started writing a playwright test in our app
Besides following the instructions above and using the Playwright docs, our app is weird because of the whole stream thing, which means our testing is weird. Because we've only just figured this stuff out, and the docs might therefore go stale quickly, here's a 15-minute video tutorial:
https://github.com/KittyCAD/modeling-app/assets/29681384/6f5e8e85-1003-4fd9-be7f-f36ce833942d
<details>
<summary>
PS: for the debug panel, the following JSON is useful for snapping the camera
</summary>
```JSON
{"type":"modeling_cmd_req","cmd_id":"054e5472-e5e9-4071-92d7-1ce3bac61956","cmd":{"type":"default_camera_look_at","center":{"x":15,"y":0,"z":0},"up":{"x":0,"y":0,"z":1},"vantage":{"x":30,"y":30,"z":30}}}
```
</details>
Finally, if you'd like to run a development build or contribute to the project, please visit our [contributor guide](CONTRIBUTING.md) to get started.
## KCL
For how to contribute to KCL, [see our KCL README](https://github.com/KittyCAD/modeling-app/tree/main/rust/kcl-lib).
### Logging
To display logging (to the terminal or console) set `ZOO_LOG=1`. This will log some warnings and simple performance metrics. To view these in test runs, use `-- --nocapture`.
To enable memory metrics, build with `--features dhat-heap`.
To contribute to the KittyCAD Language, see the [README](https://github.com/KittyCAD/modeling-app/tree/main/rust/kcl-lib) for KCL.

File diff suppressed because one or more lines are too long


@ -109,3 +109,98 @@ Coordinate systems:
- `zoo` (the default), forward: -Y, up: +Z, handedness: right
- `opengl`, forward: +Z, up: +Y, handedness: right
- `vulkan`, forward: +Z, up: -Y, handedness: left
### Performance
Parallelized foreign-file imports now let you overlap file reads, initialization,
and rendering. To maximize throughput, you need to understand the three distinct
stages—reading, initializing (background render start), and invocation (blocking)
—and structure your code to defer blocking operations until the end.
#### Foreign Import Execution Stages
1. **Import (Read) Stage**
```norun
import "tests/inputs/cube.step" as cube
```
- Reads the file from disk and makes its API available.
- **Does _not_** start Engine rendering or block your script.
2. **Initialization (Background Render) Stage**
```norun
import "tests/inputs/cube.step" as cube
myCube = cube // <- This line starts background rendering
```
- Invoking the imported symbol (assignment or plain call) triggers Engine rendering _in the background_.
- This kickstarts the render pipeline but doesn't block; you can continue other work while the Engine processes the model.
3. **Invocation (Blocking) Stage**
```norun
import "tests/inputs/cube.step" as cube
myCube = cube
myCube
|> translate(z=10) // <- This line blocks
```
- Any method call (e.g., `translate`, `scale`, `rotate`) waits for the background render to finish before applying transformations.
- This is the only point where your script will block.
> **Nuance:** Foreign imports differ from pure KCL modules—calling the same import symbol multiple times (e.g., `screw` twice) starts background rendering twice.
#### Best Practices
##### 1. Defer Blocking Calls
Initialize early but delay all transformations until after your heavy computation:
```norun
import "tests/inputs/cube.step" as cube // 1) Read
myCube = cube // 2) Background render starts
// --- perform other operations and calculations or setup here ---
myCube
|> translate(z=10) // 3) Blocks only here
```
##### 2. Encapsulate Imports in Modules
Keep `main.kcl` free of reads and initialization; wrap them:
```norun
// imports.kcl
import "tests/inputs/cube.step" as cube // Read only
export myCube = cube // Kick off rendering
```
```norun
// main.kcl
import myCube from "imports.kcl" // Import the initialized object
// ... computations ...
myCube
|> translate(z=10) // Blocking call at the end
```
##### 3. Avoid Immediate Method Calls
```norun
import "tests/inputs/cube.step" as cube
cube
|> translate(z=10) // Blocks immediately, negating parallelism
```
Calling methods directly on `cube` right away, or leaving an implicit import without an assignment, both introduce blocking.
#### Future Improvements
Upcoming releases will automatically analyze dependencies and only block when truly necessary. Until then, explicit deferral and modular wrapping give you the best performance.


@ -53,7 +53,7 @@ rotate(
### Returns
[`SolidOrSketchOrImportedGeometry`](/docs/kcl/types/SolidOrSketchOrImportedGeometry) - Data for a solid or an imported geometry.
[`SolidOrSketchOrImportedGeometry`](/docs/kcl/types/SolidOrSketchOrImportedGeometry) - Data for a solid, sketch, or an imported geometry.
### Examples


@ -37,7 +37,7 @@ scale(
### Returns
[`SolidOrSketchOrImportedGeometry`](/docs/kcl/types/SolidOrSketchOrImportedGeometry) - Data for a solid or an imported geometry.
[`SolidOrSketchOrImportedGeometry`](/docs/kcl/types/SolidOrSketchOrImportedGeometry) - Data for a solid, sketch, or an imported geometry.
### Examples

File diff suppressed because one or more lines are too long


@ -28101,14 +28101,62 @@
"args": [
{
"name": "solids",
"type": "[Solid]",
"type": "SolidOrImportedGeometry",
"schema": {
"$schema": "https://spec.openapis.org/oas/3.0/schema/2019-04-02#/definitions/Schema",
"title": "Array_of_Solid",
"type": "array",
"items": {
"$ref": "#/components/schemas/Solid"
},
"title": "SolidOrImportedGeometry",
"description": "Data for a solid or an imported geometry.",
"oneOf": [
{
"description": "Data for an imported geometry.",
"type": "object",
"required": [
"id",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"importedGeometry"
]
},
"id": {
"description": "The ID of the imported geometry.",
"type": "string",
"format": "uuid"
},
"value": {
"description": "The original file paths.",
"type": "array",
"items": {
"type": "string"
}
}
}
},
{
"type": [
"object",
"array"
],
"items": {
"$ref": "#/components/schemas/Solid"
},
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"solidSet"
]
}
}
}
],
"definitions": {
"Solid": {
"type": "object",
@ -34571,14 +34619,62 @@
],
"returnValue": {
"name": "",
"type": "[Solid]",
"type": "SolidOrImportedGeometry",
"schema": {
"$schema": "https://spec.openapis.org/oas/3.0/schema/2019-04-02#/definitions/Schema",
"title": "Array_of_Solid",
"type": "array",
"items": {
"$ref": "#/components/schemas/Solid"
},
"title": "SolidOrImportedGeometry",
"description": "Data for a solid or an imported geometry.",
"oneOf": [
{
"description": "Data for an imported geometry.",
"type": "object",
"required": [
"id",
"type",
"value"
],
"properties": {
"type": {
"type": "string",
"enum": [
"importedGeometry"
]
},
"id": {
"description": "The ID of the imported geometry.",
"type": "string",
"format": "uuid"
},
"value": {
"description": "The original file paths.",
"type": "array",
"items": {
"type": "string"
}
}
}
},
{
"type": [
"object",
"array"
],
"items": {
"$ref": "#/components/schemas/Solid"
},
"required": [
"type"
],
"properties": {
"type": {
"type": "string",
"enum": [
"solidSet"
]
}
}
}
],
"definitions": {
"Solid": {
"type": "object",
@ -36198,7 +36294,8 @@
"// Setting the appearance of a 3D pattern can be done _before_ or _after_ the pattern.\n// This example shows _before_ the pattern.\nexampleSketch = startSketchOn(XZ)\n |> startProfileAt([0, 0], %)\n |> line(end = [0, 2])\n |> line(end = [3, 1])\n |> line(end = [0, -4])\n |> close()\n\nexample = extrude(exampleSketch, length = 1)\n |> appearance(color = '#ff0000', metalness = 90, roughness = 90)\n |> patternLinear3d(axis = [1, 0, 1], instances = 7, distance = 6)",
"// Setting the appearance of a 3D pattern can be done _before_ or _after_ the pattern.\n// This example shows _after_ the pattern.\nexampleSketch = startSketchOn(XZ)\n |> startProfileAt([0, 0], %)\n |> line(end = [0, 2])\n |> line(end = [3, 1])\n |> line(end = [0, -4])\n |> close()\n\nexample = extrude(exampleSketch, length = 1)\n |> patternLinear3d(axis = [1, 0, 1], instances = 7, distance = 6)\n |> appearance(color = '#ff0000', metalness = 90, roughness = 90)",
"// Color the result of a 2D pattern that was extruded.\nexampleSketch = startSketchOn(XZ)\n |> startProfileAt([.5, 25], %)\n |> line(end = [0, 5])\n |> line(end = [-1, 0])\n |> line(end = [0, -5])\n |> close()\n |> patternCircular2d(\n center = [0, 0],\n instances = 13,\n arcDegrees = 360,\n rotateDuplicates = true,\n )\n\nexample = extrude(exampleSketch, length = 1)\n |> appearance(color = '#ff0000', metalness = 90, roughness = 90)",
"// Color the result of a sweep.\n\n// Create a path for the sweep.\nsweepPath = startSketchOn(XZ)\n |> startProfileAt([0.05, 0.05], %)\n |> line(end = [0, 7])\n |> tangentialArc(angle = 90, radius = 5)\n |> line(end = [-3, 0])\n |> tangentialArc(angle = -90, radius = 5)\n |> line(end = [0, 7])\n\npipeHole = startSketchOn(XY)\n |> circle(center = [0, 0], radius = 1.5)\n\nsweepSketch = startSketchOn(XY)\n |> circle(center = [0, 0], radius = 2)\n |> hole(pipeHole, %)\n |> sweep(path = sweepPath)\n |> appearance(color = \"#ff0000\", metalness = 50, roughness = 50)"
"// Color the result of a sweep.\n\n// Create a path for the sweep.\nsweepPath = startSketchOn(XZ)\n |> startProfileAt([0.05, 0.05], %)\n |> line(end = [0, 7])\n |> tangentialArc(angle = 90, radius = 5)\n |> line(end = [-3, 0])\n |> tangentialArc(angle = -90, radius = 5)\n |> line(end = [0, 7])\n\npipeHole = startSketchOn(XY)\n |> circle(center = [0, 0], radius = 1.5)\n\nsweepSketch = startSketchOn(XY)\n |> circle(center = [0, 0], radius = 2)\n |> hole(pipeHole, %)\n |> sweep(path = sweepPath)\n |> appearance(color = \"#ff0000\", metalness = 50, roughness = 50)",
"// Change the appearance of an imported model.\n\n\nimport \"tests/inputs/cube.sldprt\" as cube\n\ncube\n// |> appearance(\n// color = \"#ff0000\",\n// metalness = 50,\n// roughness = 50\n// )"
]
},
{
@ -249578,7 +249675,7 @@
"schema": {
"$schema": "https://spec.openapis.org/oas/3.0/schema/2019-04-02#/definitions/Schema",
"title": "SolidOrSketchOrImportedGeometry",
"description": "Data for a solid or an imported geometry.",
"description": "Data for a solid, sketch, or an imported geometry.",
"oneOf": [
{
"description": "Data for an imported geometry.",
@ -260975,7 +261072,7 @@
"schema": {
"$schema": "https://spec.openapis.org/oas/3.0/schema/2019-04-02#/definitions/Schema",
"title": "SolidOrSketchOrImportedGeometry",
"description": "Data for a solid or an imported geometry.",
"description": "Data for a solid, sketch, or an imported geometry.",
"oneOf": [
{
"description": "Data for an imported geometry.",
@ -262721,7 +262818,7 @@
"schema": {
"$schema": "https://spec.openapis.org/oas/3.0/schema/2019-04-02#/definitions/Schema",
"title": "SolidOrSketchOrImportedGeometry",
"description": "Data for a solid or an imported geometry.",
"description": "Data for a solid, sketch, or an imported geometry.",
"oneOf": [
{
"description": "Data for an imported geometry.",
@ -270879,7 +270976,7 @@
"schema": {
"$schema": "https://spec.openapis.org/oas/3.0/schema/2019-04-02#/definitions/Schema",
"title": "SolidOrSketchOrImportedGeometry",
"description": "Data for a solid or an imported geometry.",
"description": "Data for a solid, sketch, or an imported geometry.",
"oneOf": [
{
"description": "Data for an imported geometry.",
@ -319774,7 +319871,7 @@
"schema": {
"$schema": "https://spec.openapis.org/oas/3.0/schema/2019-04-02#/definitions/Schema",
"title": "SolidOrSketchOrImportedGeometry",
"description": "Data for a solid or an imported geometry.",
"description": "Data for a solid, sketch, or an imported geometry.",
"oneOf": [
{
"description": "Data for an imported geometry.",
@ -327932,7 +328029,7 @@
"schema": {
"$schema": "https://spec.openapis.org/oas/3.0/schema/2019-04-02#/definitions/Schema",
"title": "SolidOrSketchOrImportedGeometry",
"description": "Data for a solid or an imported geometry.",
"description": "Data for a solid, sketch, or an imported geometry.",
"oneOf": [
{
"description": "Data for an imported geometry.",

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -33,7 +33,7 @@ translate(
### Returns
[`SolidOrSketchOrImportedGeometry`](/docs/kcl/types/SolidOrSketchOrImportedGeometry) - Data for a solid or an imported geometry.
[`SolidOrSketchOrImportedGeometry`](/docs/kcl/types/SolidOrSketchOrImportedGeometry) - Data for a solid, sketch, or an imported geometry.
### Examples
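A minimal illustrative sketch (not taken from this page) of piping a solid into `translate`, assuming the `x`, `y`, and `z` keyword arguments exercised by the e2e tests in this change; the geometry and values are placeholders:
```norun
// Extrude a small profile, then shift the resulting solid along the X axis.
exampleSketch = startSketchOn(XZ)
  |> startProfileAt([0, 0], %)
  |> line(end = [0, 2])
  |> line(end = [3, 1])
  |> line(end = [0, -4])
  |> close()

example = extrude(exampleSketch, length = 1)
  |> translate(x = 5, y = 0, z = 0)
```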

View File

@ -145,7 +145,7 @@ This helps keep your code neat and avoid unnecessary declarations.
Say you have a long pipeline of sketch functions, like this:
```norun
startSketchOn('XZ')
startSketchOn(XZ)
|> line(%, end = [3, 4])
|> line(%, end = [10, 10])
|> line(%, end = [-13, -14])
@ -160,7 +160,7 @@ means that `|> line(%, end = [3, 4])` and `|> line(end = [3, 4])` are equivalent
could be rewritten as
```norun
startSketchOn('XZ')
startSketchOn(XZ)
|> line(end = [3, 4])
|> line(end = [10, 10])
|> line(end = [-13, -14])
@ -182,7 +182,7 @@ The syntax for declaring a tag is `$myTag` you would use it in the following
way:
```norun
startSketchOn('XZ')
startSketchOn(XZ)
|> startProfileAt(origin, %)
|> angledLine(angle = 0, length = 191.26, tag = $rectangleSegmentA001)
|> angledLine(
@ -217,7 +217,7 @@ However if the code was written like this:
```norun
fn rect(origin) {
return startSketchOn('XZ')
return startSketchOn(XZ)
|> startProfileAt(origin, %)
|> angledLine(angle = 0, length = 191.26, tag = $rectangleSegmentA001)
|> angledLine(
@ -227,7 +227,7 @@ fn rect(origin) {
)
|> angledLine(
angle = segAng(rectangleSegmentA001),
length = -segLen(rectangleSegmentA001)
length = -segLen(rectangleSegmentA001),
tag = $rectangleSegmentC001,
)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
@ -247,7 +247,7 @@ For example the following code works.
```norun
fn rect(origin) {
return startSketchOn('XZ')
return startSketchOn(XZ)
|> startProfileAt(origin, %)
|> angledLine(angle = 0, length = 191.26, tag = $rectangleSegmentA001)
|> angledLine(
@ -257,7 +257,7 @@ fn rect(origin) {
)
|> angledLine(
angle = segAng(rectangleSegmentA001),
length = -segLen(rectangleSegmentA001)
length = -segLen(rectangleSegmentA001),
tag = $rectangleSegmentC001,
)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
@ -268,11 +268,8 @@ rect([0, 0])
myRect = rect([20, 0])
myRect
|> extrude(10, %)
|> fillet(
radius = 0.5,
tags = [myRect.tags.rectangleSegmentA001]
)
|> extrude(length = 10)
|> fillet(radius = 0.5, tags = [myRect.tags.rectangleSegmentA001])
```
See how we use the tag `rectangleSegmentA001` in the `fillet` function outside

View File

@ -1,10 +1,10 @@
---
title: "SolidOrSketchOrImportedGeometry"
excerpt: "Data for a solid or an imported geometry."
excerpt: "Data for a solid, sketch, or an imported geometry."
layout: manual
---
Data for a solid or an imported geometry.
Data for a solid, sketch, or an imported geometry.
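As a hedged illustration (not part of the original page), a value of this type is what a transform such as `translate` returns, whether it is applied to a solid or to an imported model:
```norun
// Importing a part and transforming it produces a SolidOrSketchOrImportedGeometry value.
import "bracket.kcl" as bracket

bracket
  |> translate(x = 5, y = 0.1, z = 0.2)
```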

View File

@ -94,11 +94,8 @@ rect([0, 0])
myRect = rect([20, 0])
myRect
|> extrude(10, %)
|> fillet(
radius = 0.5,
tags = [myRect.tags.rectangleSegmentA001]
)
|> extrude(length = 10)
|> fillet(radius = 0.5, tags = [myRect.tags.rectangleSegmentA001])
```
See how we use the tag `rectangleSegmentA001` in the `fillet` function outside

View File

@ -178,6 +178,13 @@ export class CmdBarFixture {
return this.page.getByRole('option', options)
}
/**
* Clicks the Create new variable button for kcl input
*/
createNewVariable = async () => {
await this.page.getByRole('button', { name: 'Create new variable' }).click()
}
/**
* Captures a snapshot of the request sent to the text-to-cad API endpoint
* and saves it to a file named after the current test.

View File

@ -257,6 +257,14 @@ export class SceneFixture {
await expectPixelColor(this.page, colour, coords, diff)
}
expectPixelColorNotToBe = async (
colour: [number, number, number] | [number, number, number][],
coords: { x: number; y: number },
diff: number
) => {
await expectPixelColorNotToBe(this.page, colour, coords, diff)
}
get gizmo() {
return this.page.locator('[aria-label*=gizmo]')
}
@ -278,37 +286,69 @@ function isColourArray(
return isArray(colour[0])
}
export async function expectPixelColor(
type PixelColorMatchMode = 'matches' | 'differs'
export async function checkPixelColor(
page: Page,
colour: [number, number, number] | [number, number, number][],
coords: { x: number; y: number },
diff: number
diff: number,
mode: PixelColorMatchMode
) {
let finalValue = colour
const isMatchMode = mode === 'matches'
const actionText = isMatchMode ? 'expecting' : 'not expecting'
const functionName = isMatchMode
? 'ExpectPixelColor'
: 'ExpectPixelColourNotToBe'
await expect
.poll(
async () => {
const pixel = (await getPixelRGBs(page)(coords, 1))[0]
if (!pixel) return null
finalValue = pixel
let matches
if (!isColourArray(colour)) {
return pixel.every(
matches = pixel.every(
(channel, index) => Math.abs(channel - colour[index]) < diff
)
} else {
matches = colour.some((c) =>
c.every((channel, index) => Math.abs(pixel[index] - channel) < diff)
)
}
return colour.some((c) =>
c.every((channel, index) => Math.abs(pixel[index] - channel) < diff)
)
return isMatchMode ? matches : !matches
},
{ timeout: 10_000 }
)
.toBeTruthy()
.catch((cause) => {
throw new Error(
`ExpectPixelColor: point ${JSON.stringify(
`${functionName}: point ${JSON.stringify(
coords
)} was expecting ${colour} but got ${finalValue}`,
)} was ${actionText} ${colour} but got ${finalValue}`,
{ cause }
)
})
}
export async function expectPixelColor(
page: Page,
colour: [number, number, number] | [number, number, number][],
coords: { x: number; y: number },
diff: number
) {
await checkPixelColor(page, colour, coords, diff, 'matches')
}
export async function expectPixelColorNotToBe(
page: Page,
colour: [number, number, number] | [number, number, number][],
coords: { x: number; y: number },
diff: number
) {
await checkPixelColor(page, colour, coords, diff, 'differs')
}

View File

@ -169,6 +169,180 @@ test.describe('Point-and-click assemblies tests', () => {
}
)
test(
`Insert the bracket part into an assembly and transform it`,
{ tag: ['@electron'] },
async ({
context,
page,
homePage,
scene,
editor,
toolbar,
cmdBar,
tronApp,
}) => {
if (!tronApp) {
fail()
}
const midPoint = { x: 500, y: 250 }
const moreToTheRightPoint = { x: 900, y: 250 }
const bgColor: [number, number, number] = [30, 30, 30]
const partColor: [number, number, number] = [100, 100, 100]
const tolerance = 30
const u = await getUtils(page)
const gizmo = page.locator('[aria-label*=gizmo]')
const resetCameraButton = page.getByRole('button', { name: 'Reset view' })
await test.step('Setup parts and expect empty assembly scene', async () => {
const projectName = 'assembly'
await context.folderSetupFn(async (dir) => {
const bracketDir = path.join(dir, projectName)
await fsp.mkdir(bracketDir, { recursive: true })
await Promise.all([
fsp.copyFile(
path.join('public', 'kcl-samples', 'bracket', 'main.kcl'),
path.join(bracketDir, 'bracket.kcl')
),
fsp.writeFile(path.join(bracketDir, 'main.kcl'), ''),
])
})
await page.setBodyDimensions({ width: 1000, height: 500 })
await homePage.openProject(projectName)
await scene.settled(cmdBar)
await toolbar.closePane('code')
})
await test.step('Insert kcl as module', async () => {
await insertPartIntoAssembly(
'bracket.kcl',
'bracket',
toolbar,
cmdBar,
page
)
await toolbar.openPane('code')
await editor.expectEditor.toContain(
`
import "bracket.kcl" as bracket
bracket
`,
{ shouldNormalise: true }
)
await scene.settled(cmdBar)
// Check scene for changes
await toolbar.closePane('code')
await u.doAndWaitForCmd(async () => {
await gizmo.click({ button: 'right' })
await resetCameraButton.click()
}, 'zoom_to_fit')
await toolbar.closePane('debug')
await scene.expectPixelColor(partColor, midPoint, tolerance)
await scene.expectPixelColor(bgColor, moreToTheRightPoint, tolerance)
})
await test.step('Set translate on module', async () => {
await toolbar.openPane('feature-tree')
const op = await toolbar.getFeatureTreeOperation('bracket', 0)
await op.click({ button: 'right' })
await page.getByTestId('context-menu-set-translate').click()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'x',
currentArgValue: '0',
headerArguments: {
X: '',
Y: '',
Z: '',
},
highlightedHeaderArg: 'x',
commandName: 'Translate',
})
await page.keyboard.insertText('5')
await cmdBar.progressCmdBar()
await page.keyboard.insertText('0.1')
await cmdBar.progressCmdBar()
await page.keyboard.insertText('0.2')
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'review',
headerArguments: {
X: '5',
Y: '0.1',
Z: '0.2',
},
commandName: 'Translate',
})
await cmdBar.progressCmdBar()
await toolbar.closePane('feature-tree')
await toolbar.openPane('code')
await editor.expectEditor.toContain(
`
bracket
|> translate(x = 5, y = 0.1, z = 0.2)
`,
{ shouldNormalise: true }
)
// Expect translated part in the scene
await scene.expectPixelColor(bgColor, midPoint, tolerance)
await scene.expectPixelColor(partColor, moreToTheRightPoint, tolerance)
})
await test.step('Set rotate on module', async () => {
await toolbar.closePane('code')
await toolbar.openPane('feature-tree')
const op = await toolbar.getFeatureTreeOperation('bracket', 0)
await op.click({ button: 'right' })
await page.getByTestId('context-menu-set-rotate').click()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'roll',
currentArgValue: '0',
headerArguments: {
Roll: '',
Pitch: '',
Yaw: '',
},
highlightedHeaderArg: 'roll',
commandName: 'Rotate',
})
await page.keyboard.insertText('0.1')
await cmdBar.progressCmdBar()
await page.keyboard.insertText('0.2')
await cmdBar.progressCmdBar()
await page.keyboard.insertText('0.3')
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'review',
headerArguments: {
Roll: '0.1',
Pitch: '0.2',
Yaw: '0.3',
},
commandName: 'Rotate',
})
await cmdBar.progressCmdBar()
await toolbar.closePane('feature-tree')
await toolbar.openPane('code')
await editor.expectEditor.toContain(
`
bracket
|> translate(x = 5, y = 0.1, z = 0.2)
|> rotate(roll = 0.1, pitch = 0.2, yaw = 0.3)
`,
{ shouldNormalise: true }
)
// Expect no change in the scene as the rotations are tiny
await scene.expectPixelColor(bgColor, midPoint, tolerance)
await scene.expectPixelColor(partColor, moreToTheRightPoint, tolerance)
})
}
)
test(
`Insert foreign parts into assembly as whole module import`,
{ tag: ['@electron'] },
@ -231,10 +405,6 @@ test.describe('Point-and-click assemblies tests', () => {
)
await scene.settled(cmdBar)
// TODO: remove this once #5780 is fixed
await page.reload()
await scene.settled(cmdBar)
await expect(page.locator('.cm-lint-marker-error')).not.toBeVisible()
await toolbar.closePane('code')
await scene.expectPixelColor(partColor, partPoint, tolerance)
@ -279,10 +449,6 @@ test.describe('Point-and-click assemblies tests', () => {
)
await scene.settled(cmdBar)
// TODO: remove this once #5780 is fixed
await page.reload()
await scene.settled(cmdBar)
await expect(page.locator('.cm-lint-marker-error')).not.toBeVisible()
await toolbar.closePane('code')
await scene.expectPixelColor(partColor, partPoint, tolerance)

View File

@ -1046,7 +1046,7 @@ openSketch = startSketchOn(XY)
}) => {
// One dumb hardcoded screen pixel value
const testPoint = { x: 620, y: 257 }
const expectedOutput = `helix001 = helix( axis = X, radius = 5, length = 5, revolutions = 1, angleStart = 360, ccw = false,)`
const expectedOutput = `helix001 = helix( axis = X, radius = 5, length = 5, revolutions = 1, angleStart = 270, ccw = false,)`
const expectedLine = `axis=X,`
await homePage.goToModelingScene()
@ -1072,6 +1072,23 @@ openSketch = startSketchOn(XY)
await expect.poll(() => page.getByText('Axis').count()).toBe(6)
await cmdBar.progressCmdBar()
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'angleStart',
highlightedHeaderArg: 'angleStart',
currentArgValue: '360',
headerArguments: {
Mode: 'Axis',
Axis: 'X',
Revolutions: '1',
AngleStart: '',
Length: '',
Radius: '',
CounterClockWise: '',
},
commandName: 'Helix',
})
await cmdBar.currentArgumentInput.locator('.cm-content').fill('270')
await cmdBar.progressCmdBar()
await cmdBar.progressCmdBar()
await cmdBar.progressCmdBar()
@ -1080,7 +1097,7 @@ openSketch = startSketchOn(XY)
headerArguments: {
Mode: 'Axis',
Axis: 'X',
AngleStart: '360',
AngleStart: '270',
Revolutions: '1',
Length: '5',
Radius: '5',
@ -1115,7 +1132,7 @@ openSketch = startSketchOn(XY)
currentArgValue: '',
headerArguments: {
Axis: 'X',
AngleStart: '360',
AngleStart: '270',
Revolutions: '1',
Radius: '5',
Length: initialInput,
@ -1131,7 +1148,7 @@ openSketch = startSketchOn(XY)
stage: 'review',
headerArguments: {
Axis: 'X',
AngleStart: '360',
AngleStart: '270',
Revolutions: '1',
Radius: '5',
Length: newInput,
@ -3818,4 +3835,469 @@ extrude001 = extrude(profile001, length = 100)
)
})
})
const translateExtrudeCases: { variables: boolean }[] = [
{
variables: false,
},
{
variables: true,
},
]
translateExtrudeCases.map(({ variables }) => {
test(`Set translate on extrude through right-click menu (variables: ${variables})`, async ({
context,
page,
homePage,
scene,
editor,
toolbar,
cmdBar,
}) => {
const initialCode = `sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
`
await context.addInitScript((initialCode) => {
localStorage.setItem('persistCode', initialCode)
}, initialCode)
await page.setBodyDimensions({ width: 1000, height: 500 })
await homePage.goToModelingScene()
await scene.settled(cmdBar)
// One dumb hardcoded screen pixel value
const midPoint = { x: 500, y: 250 }
const moreToTheRightPoint = { x: 800, y: 250 }
const bgColor: [number, number, number] = [50, 50, 50]
const partColor: [number, number, number] = [150, 150, 150]
const tolerance = 50
await test.step('Confirm extrude exists with default appearance', async () => {
await toolbar.closePane('code')
await scene.expectPixelColor(partColor, midPoint, tolerance)
await scene.expectPixelColor(bgColor, moreToTheRightPoint, tolerance)
})
await test.step('Set translate through command bar flow', async () => {
await toolbar.openPane('feature-tree')
const op = await toolbar.getFeatureTreeOperation('Extrude', 0)
await op.click({ button: 'right' })
await page.getByTestId('context-menu-set-translate').click()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'x',
currentArgValue: '0',
headerArguments: {
X: '',
Y: '',
Z: '',
},
highlightedHeaderArg: 'x',
commandName: 'Translate',
})
await page.keyboard.insertText('3')
if (variables) {
await cmdBar.createNewVariable()
}
await cmdBar.progressCmdBar()
await page.keyboard.insertText('0.1')
if (variables) {
await cmdBar.createNewVariable()
}
await cmdBar.progressCmdBar()
await page.keyboard.insertText('0.2')
if (variables) {
await cmdBar.createNewVariable()
}
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'review',
headerArguments: {
X: '3',
Y: '0.1',
Z: '0.2',
},
commandName: 'Translate',
})
await cmdBar.progressCmdBar()
await toolbar.closePane('feature-tree')
})
await test.step('Confirm code and scene have changed', async () => {
await toolbar.openPane('code')
if (variables) {
await editor.expectEditor.toContain(
`
z001 = 0.2
y001 = 0.1
x001 = 3
sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
|> translate(x = x001, y = y001, z = z001)
`,
{ shouldNormalise: true }
)
} else {
await editor.expectEditor.toContain(
`
sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
|> translate(x = 3, y = 0.1, z = 0.2)
`,
{ shouldNormalise: true }
)
}
await scene.expectPixelColor(bgColor, midPoint, tolerance)
await scene.expectPixelColor(partColor, moreToTheRightPoint, tolerance)
})
await test.step('Edit translate', async () => {
await toolbar.openPane('feature-tree')
const op = await toolbar.getFeatureTreeOperation('Extrude', 0)
await op.click({ button: 'right' })
await page.getByTestId('context-menu-set-translate').click()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'z',
currentArgValue: variables ? 'z001' : '0.2',
headerArguments: {
X: '3',
Y: '0.1',
Z: '0.2',
},
highlightedHeaderArg: 'z',
commandName: 'Translate',
})
await page.keyboard.insertText('0.3')
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'review',
headerArguments: {
X: '3',
Y: '0.1',
Z: '0.3',
},
commandName: 'Translate',
})
await cmdBar.progressCmdBar()
await toolbar.closePane('feature-tree')
await toolbar.openPane('code')
await editor.expectEditor.toContain(`z = 0.3`)
// Expect almost no change in scene
await scene.expectPixelColor(bgColor, midPoint, tolerance)
await scene.expectPixelColor(partColor, moreToTheRightPoint, tolerance)
})
})
})
const rotateExtrudeCases: { variables: boolean }[] = [
{
variables: false,
},
{
variables: true,
},
]
rotateExtrudeCases.map(({ variables }) => {
test(`Set rotate on extrude through right-click menu (variables: ${variables})`, async ({
context,
page,
homePage,
scene,
editor,
toolbar,
cmdBar,
}) => {
const initialCode = `sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
`
await context.addInitScript((initialCode) => {
localStorage.setItem('persistCode', initialCode)
}, initialCode)
await page.setBodyDimensions({ width: 1000, height: 500 })
await homePage.goToModelingScene()
await scene.settled(cmdBar)
await test.step('Set rotate through command bar flow', async () => {
await toolbar.openPane('feature-tree')
const op = await toolbar.getFeatureTreeOperation('Extrude', 0)
await op.click({ button: 'right' })
await page.getByTestId('context-menu-set-rotate').click()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'roll',
currentArgValue: '0',
headerArguments: {
Roll: '',
Pitch: '',
Yaw: '',
},
highlightedHeaderArg: 'roll',
commandName: 'Rotate',
})
await page.keyboard.insertText('1.1')
if (variables) {
await cmdBar.createNewVariable()
}
await cmdBar.progressCmdBar()
await page.keyboard.insertText('1.2')
if (variables) {
await cmdBar.createNewVariable()
}
await cmdBar.progressCmdBar()
await page.keyboard.insertText('1.3')
if (variables) {
await cmdBar.createNewVariable()
}
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'review',
headerArguments: {
Roll: '1.1',
Pitch: '1.2',
Yaw: '1.3',
},
commandName: 'Rotate',
})
await cmdBar.progressCmdBar()
await toolbar.closePane('feature-tree')
})
await test.step('Confirm code and scene have changed', async () => {
await toolbar.openPane('code')
if (variables) {
await editor.expectEditor.toContain(
`
yaw001 = 1.3
pitch001 = 1.2
roll001 = 1.1
sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
|> rotate(roll = roll001, pitch = pitch001, yaw = yaw001)
`,
{ shouldNormalise: true }
)
} else {
await editor.expectEditor.toContain(
`
sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
|> rotate(roll = 1.1, pitch = 1.2, yaw = 1.3)
`,
{ shouldNormalise: true }
)
}
})
await test.step('Edit rotate', async () => {
await toolbar.openPane('feature-tree')
const op = await toolbar.getFeatureTreeOperation('Extrude', 0)
await op.click({ button: 'right' })
await page.getByTestId('context-menu-set-rotate').click()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'yaw',
currentArgValue: variables ? 'yaw001' : '1.3',
headerArguments: {
Roll: '1.1',
Pitch: '1.2',
Yaw: '1.3',
},
highlightedHeaderArg: 'yaw',
commandName: 'Rotate',
})
await page.keyboard.insertText('13')
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'review',
headerArguments: {
Roll: '1.1',
Pitch: '1.2',
Yaw: '13',
},
commandName: 'Rotate',
})
await cmdBar.progressCmdBar()
await toolbar.closePane('feature-tree')
await toolbar.openPane('code')
await editor.expectEditor.toContain(`yaw = 13`)
})
})
})
test(`Set translate and rotate on extrude through selection`, async ({
context,
page,
homePage,
scene,
editor,
toolbar,
cmdBar,
}) => {
const initialCode = `sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
`
await context.addInitScript((initialCode) => {
localStorage.setItem('persistCode', initialCode)
}, initialCode)
await page.setBodyDimensions({ width: 1000, height: 500 })
await homePage.goToModelingScene()
await scene.settled(cmdBar)
// One dumb hardcoded screen pixel value
const midPoint = { x: 500, y: 250 }
const moreToTheRightPoint = { x: 800, y: 250 }
const bgColor: [number, number, number] = [50, 50, 50]
const partColor: [number, number, number] = [150, 150, 150]
const tolerance = 50
const [clickMidPoint] = scene.makeMouseHelpers(midPoint.x, midPoint.y)
const [clickMoreToTheRightPoint] = scene.makeMouseHelpers(
moreToTheRightPoint.x,
moreToTheRightPoint.y
)
await test.step('Confirm extrude exists with default appearance', async () => {
await toolbar.closePane('code')
await scene.expectPixelColor(partColor, midPoint, tolerance)
await scene.expectPixelColor(bgColor, moreToTheRightPoint, tolerance)
})
await test.step('Set translate through command bar flow', async () => {
await cmdBar.openCmdBar()
await cmdBar.chooseCommand('Translate')
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'selection',
currentArgValue: '',
headerArguments: {
Selection: '',
X: '',
Y: '',
Z: '',
},
highlightedHeaderArg: 'selection',
commandName: 'Translate',
})
await clickMidPoint()
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'x',
currentArgValue: '0',
headerArguments: {
Selection: '1 path',
X: '',
Y: '',
Z: '',
},
highlightedHeaderArg: 'x',
commandName: 'Translate',
})
await page.keyboard.insertText('2')
await cmdBar.progressCmdBar()
await cmdBar.progressCmdBar()
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'review',
headerArguments: {
Selection: '1 path',
X: '2',
Y: '0',
Z: '0',
},
commandName: 'Translate',
})
await cmdBar.progressCmdBar()
})
await test.step('Confirm code and scene have changed', async () => {
await toolbar.openPane('code')
await editor.expectEditor.toContain(
`
sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
|> translate(x = 2, y = 0, z = 0)
`,
{ shouldNormalise: true }
)
await scene.expectPixelColor(bgColor, midPoint, tolerance)
await scene.expectPixelColor(partColor, moreToTheRightPoint, tolerance)
})
await test.step('Set rotate through command bar flow', async () => {
// clear selection
await clickMidPoint()
await cmdBar.openCmdBar()
await cmdBar.chooseCommand('Rotate')
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'selection',
currentArgValue: '',
headerArguments: {
Selection: '',
Roll: '',
Pitch: '',
Yaw: '',
},
highlightedHeaderArg: 'selection',
commandName: 'Rotate',
})
await clickMoreToTheRightPoint()
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'arguments',
currentArgKey: 'roll',
currentArgValue: '0',
headerArguments: {
Selection: '1 path',
Roll: '',
Pitch: '',
Yaw: '',
},
highlightedHeaderArg: 'roll',
commandName: 'Rotate',
})
await page.keyboard.insertText('0.1')
await cmdBar.progressCmdBar()
await page.keyboard.insertText('0.2')
await cmdBar.progressCmdBar()
await page.keyboard.insertText('0.3')
await cmdBar.progressCmdBar()
await cmdBar.expectState({
stage: 'review',
headerArguments: {
Selection: '1 path',
Roll: '0.1',
Pitch: '0.2',
Yaw: '0.3',
},
commandName: 'Rotate',
})
await cmdBar.progressCmdBar()
})
await test.step('Confirm code has changed', async () => {
await toolbar.openPane('code')
await editor.expectEditor.toContain(
`
sketch001 = startSketchOn(XZ)
profile001 = circle(sketch001, center = [0, 0], radius = 1)
extrude001 = extrude(profile001, length = 1)
|> translate(x = 2, y = 0, z = 0)
|> rotate(roll = 0.1, pitch = 0.2, yaw = 0.3)
`,
{ shouldNormalise: true }
)
// No change here since the angles are super small
await scene.expectPixelColor(bgColor, midPoint, tolerance)
await scene.expectPixelColor(partColor, moreToTheRightPoint, tolerance)
})
})
})

View File

@ -3058,7 +3058,7 @@ test.describe('manual edits during sketch mode', () => {
}) => {
const initialCode = `myVar1 = 5
myVar2 = 6
sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([106.68, 89.77], sketch001)
|> line(end = [132.34, 157.8])
@ -3069,10 +3069,8 @@ test.describe('manual edits during sketch mode', () => {
sketch002 = startSketchOn(extrude001, face = seg01)
profile002 = startProfileAt([83.39, 329.15], sketch002)
|> angledLine(angle = 0, length = 119.61, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 156.54, angle = -28)
|> angledLine(length = 156.54, angle = -28)
|> angledLine(
angle = segAng(rectangleSegmentA001),
length = -segLen(rectangleSegmentA001),
angle = -151,
length = 116.27,
)
@ -3089,38 +3087,43 @@ test.describe('manual edits during sketch mode', () => {
await homePage.goToModelingScene()
await scene.connectionEstablished()
await scene.settled(cmdBar)
const expectSketchOriginToBeDrawn = async () => {
await scene.expectPixelColor(TEST_COLORS.WHITE, { x: 672, y: 193 }, 15)
}
await test.step('Open feature tree and edit second sketch', async () => {
await toolbar.openFeatureTreePane()
const sketchButton = await toolbar.getFeatureTreeOperation('Sketch', 1)
await sketchButton.dblclick()
await page.waitForTimeout(700) // Wait for engine animation
await expectSketchOriginToBeDrawn()
})
await test.step('Add new variable and wait for re-execution', async () => {
await page.waitForTimeout(500) // wait for deferred execution
await editor.replaceCode('myVar2 = 6', 'myVar2 = 6\nmyVar3 = 7')
await page.waitForTimeout(2000) // wait for deferred execution
await expectSketchOriginToBeDrawn()
})
const handle1Location = { x: 843, y: 235 }
await test.step('Edit sketch by dragging handle', async () => {
await page.waitForTimeout(500)
await editor.expectEditor.toContain('length = 156.54, angle = -28')
await page.mouse.move(handle1Location.x, handle1Location.y)
await page.mouse.down()
await page.mouse.move(handle1Location.x + 50, handle1Location.y + 50, {
steps: 5,
})
await page.mouse.up()
await editor.expectEditor.toContain('length = 231.59, angle = -34')
// await page.waitForTimeout(1000) // Wait for update
await expect
.poll(
async () => {
await editor.expectEditor.toContain('length = 156.54, angle = -28')
await page.mouse.move(handle1Location.x, handle1Location.y)
await page.mouse.down()
await page.mouse.move(
handle1Location.x + 50,
handle1Location.y + 50,
{
steps: 5,
}
)
await page.mouse.up()
await editor.expectEditor.toContain('length = 231.59, angle = -34')
return true
},
{ timeout: 10_000 }
)
.toBeTruthy()
})
await test.step('Delete variables and wait for re-execution', async () => {
@ -3129,20 +3132,27 @@ test.describe('manual edits during sketch mode', () => {
await page.waitForTimeout(50)
await editor.replaceCode('myVar2 = 6', '')
await page.waitForTimeout(2000) // Wait for deferred execution
await expectSketchOriginToBeDrawn()
})
const handle2Location = { x: 872, y: 273 }
await test.step('Edit sketch again', async () => {
await editor.expectEditor.toContain('length = 231.59, angle = -34')
await page.waitForTimeout(500)
await page.mouse.move(handle2Location.x, handle2Location.y)
await page.mouse.down()
await page.mouse.move(handle2Location.x, handle2Location.y - 50, {
steps: 5,
})
await page.mouse.up()
await editor.expectEditor.toContain('length = 167.36, angle = -14')
await expect
.poll(
async () => {
await page.mouse.move(handle2Location.x, handle2Location.y)
await page.mouse.down()
await page.mouse.move(handle2Location.x, handle2Location.y - 50, {
steps: 5,
})
await page.mouse.up()
await editor.expectEditor.toContain('length = 167.36, angle = -14')
return true
},
{ timeout: 10_000 }
)
.toBeTruthy()
})
await test.step('add whole other sketch before current sketch', async () => {
@ -3154,19 +3164,27 @@ test.describe('manual edits during sketch mode', () => {
profile004 = circle(sketch003, center = [143.91, 136.89], radius = 71.63)`
)
await page.waitForTimeout(2000) // Wait for deferred execution
await expectSketchOriginToBeDrawn()
})
const handle3Location = { x: 844, y: 212 }
await test.step('edit sketch again', async () => {
await editor.expectEditor.toContain('length = 167.36, angle = -14')
await page.mouse.move(handle3Location.x, handle3Location.y)
await page.mouse.down()
await page.mouse.move(handle3Location.x, handle3Location.y + 110, {
steps: 5,
})
await page.mouse.up()
await editor.expectEditor.toContain('length = 219.2, angle = -56')
await page.waitForTimeout(500) // Wait for deferred execution
await expect
.poll(
async () => {
await editor.expectEditor.toContain('length = 167.36, angle = -14')
await page.mouse.move(handle3Location.x, handle3Location.y)
await page.mouse.down()
await page.mouse.move(handle3Location.x, handle3Location.y + 110, {
steps: 5,
})
await page.mouse.up()
await editor.expectEditor.toContain('length = 219.2, angle = -56')
return true
},
{ timeout: 10_000 }
)
.toBeTruthy()
})
// exit sketch and assert whole code
@ -3174,32 +3192,27 @@ test.describe('manual edits during sketch mode', () => {
await toolbar.exitSketch()
await editor.expectEditor.toContain(
`myVar1 = 5
sketch003 = startSketchOn(XY)
profile004 = circle(sketch003, center = [143.91, 136.89], radius = 71.63)
sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([106.68, 89.77], sketch001)
|> line(end = [132.34, 157.8])
|> line(end = [67.65, -460.55], tag = $seg01)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
extrude001 = extrude(profile001, length = 500)
sketch002 = startSketchOn(extrude001, face = seg01)
profile002 = startProfileAt([83.39, 329.15], sketch002)
|> angledLine(angle = 0, length = 119.61, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 219.2, angle = -56)
|> angledLine(
angle = segAng(rectangleSegmentA001),
length = -segLen(rectangleSegmentA001),
angle = -151,
length = 116.27,
)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
profile003 = startProfileAt([-201.08, 254.17], sketch002)
|> line(end = [103.55, 33.32])
|> line(end = [48.8, -153.54])
`,
sketch003 = startSketchOn(XY)
profile004 = circle(sketch003, center = [143.91, 136.89], radius = 71.63)
sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([106.68, 89.77], sketch001)
|> line(end = [132.34, 157.8])
|> line(end = [67.65, -460.55], tag = $seg01)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
extrude001 = extrude(profile001, length = 500)
sketch002 = startSketchOn(extrude001, face = seg01)
profile002 = startProfileAt([83.39, 329.15], sketch002)
|> angledLine(angle = 0, length = 119.61, tag = $rectangleSegmentA001)
|> angledLine(length = 219.2, angle = -56)
|> angledLine(angle = -151, length = 116.27)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
profile003 = startProfileAt([-201.08, 254.17], sketch002)
|> line(end = [103.55, 33.32])
|> line(end = [48.8, -153.54])
`,
{ shouldNormalise: true }
)
await editor.expectState({
@ -3220,7 +3233,7 @@ test.describe('manual edits during sketch mode', () => {
}) => {
const initialCode = `myVar1 = 5
myVar2 = 6
sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([106.68, 89.77], sketch001)
|> line(end = [132.34, 157.8])
@ -3231,10 +3244,8 @@ test.describe('manual edits during sketch mode', () => {
sketch002 = startSketchOn(extrude001, face = seg01)
profile002 = startProfileAt([83.39, 329.15], sketch002)
|> angledLine(angle = 0, length = 119.61, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 156.54, angle = -28)
|> angledLine(length = 156.54, angle = -28)
|> angledLine(
angle = segAng(rectangleSegmentA001),
length = -segLen(rectangleSegmentA001),
angle = -151,
length = 116.27,
)
@ -3350,7 +3361,21 @@ test.describe('manual edits during sketch mode', () => {
// this checks sketch segments have been drawn
await verifyArrowHeadColor(arrowHeadWhite)
})
await page.waitForTimeout(100)
await test.step('make a change to the code and expect pixel color to change', async () => {
// defends against a regression where sketch would duplicate in the scene
// https://github.com/KittyCAD/modeling-app/issues/6345
await editor.replaceCode(
'startProfileAt([75.8, 317.2',
'startProfileAt([75.8, 217.2'
)
// expect not white anymore
await scene.expectPixelColorNotToBe(
TEST_COLORS.WHITE,
arrowHeadLocation,
15
)
})
}
)
})

View File

@ -44,7 +44,9 @@
packages =
(with pkgs; [
rustToolchain
cargo-criterion
cargo-nextest
cargo-sort
just
postgresql.lib
openssl
@ -67,6 +69,7 @@
PLAYWRIGHT_CHROMIUM_EXECUTABLE_PATH = "${pkgs.playwright-driver.browsers}/chromium-1091/chrome-linux/chrome";
PLAYWRIGHT_BROWSERS_PATH = "${pkgs.playwright-driver.browsers}";
NODE_ENV = "development";
RUSTFMT = "${pkgs.rust-bin.stable.latest.rustfmt}/bin/rustfmt";
};
});

View File

@ -20,7 +20,7 @@ if (process.env.E2E_WORKERS) {
case 'darwin':
case 'win32':
default:
workers = '25%' // Lower concurrency for heavier Electron processes
workers = '40%' // Lower concurrency for heavier Electron processes
break
}
}

View File

@ -25,10 +25,14 @@ When you submit a PR to add or modify KCL samples, images and STEP files will be
---
#### [80-20-rail](80-20-rail/main.kcl) ([screenshot](screenshots/80-20-rail.png))
[![80-20-rail](screenshots/80-20-rail.png)](80-20-rail/main.kcl)
#### [axial-fan](axial-fan/main.kcl) ([screenshot](screenshots/axial-fan.png))
[![axial-fan](screenshots/axial-fan.png)](axial-fan/main.kcl)
#### [ball-bearing](ball-bearing/main.kcl) ([screenshot](screenshots/ball-bearing.png))
[![ball-bearing](screenshots/ball-bearing.png)](ball-bearing/main.kcl)
#### [bench](bench/main.kcl) ([screenshot](screenshots/bench.png))
[![bench](screenshots/bench.png)](bench/main.kcl)
#### [bottle](bottle/main.kcl) ([screenshot](screenshots/bottle.png))
[![bottle](screenshots/bottle.png)](bottle/main.kcl)
#### [bracket](bracket/main.kcl) ([screenshot](screenshots/bracket.png))
[![bracket](screenshots/bracket.png)](bracket/main.kcl)
#### [car-wheel-assembly](car-wheel-assembly/main.kcl) ([screenshot](screenshots/car-wheel-assembly.png))

View File

@ -0,0 +1,153 @@
// Fan Housing
// The plastic housing that contains the fan and the motor
// Set units
@settings(defaultLengthUnit = mm)
// Import parameters
import * from "parameters.kcl"
// Model the housing which holds the motor, the fan, and the mounting provisions
// Bottom mounting face
bottomFaceSketch = startSketchOn(XY)
|> startProfileAt([-fanSize / 2, -fanSize / 2], %)
|> angledLine(angle = 0, length = fanSize, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) + 90, length = fanSize, tag = $rectangleSegmentB001)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001), tag = $rectangleSegmentC001)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)], tag = $rectangleSegmentD001)
|> close()
|> hole(circle(center = [0, 0], radius = 4), %)
|> hole(circle(
center = [
mountingHoleSpacing / 2,
mountingHoleSpacing / 2
],
radius = mountingHoleSize / 2,
), %)
|> hole(circle(
center = [
-mountingHoleSpacing / 2,
mountingHoleSpacing / 2
],
radius = mountingHoleSize / 2,
), %)
|> hole(circle(
center = [
mountingHoleSpacing / 2,
-mountingHoleSpacing / 2
],
radius = mountingHoleSize / 2,
), %)
|> hole(circle(
center = [
-mountingHoleSpacing / 2,
-mountingHoleSpacing / 2
],
radius = mountingHoleSize / 2,
), %)
|> extrude(length = 4)
// Add large openings to the bottom face to allow airflow through the fan
airflowPattern = startSketchOn(bottomFaceSketch, face = END)
|> startProfileAt([fanSize * 7 / 25, -fanSize * 9 / 25], %)
|> angledLine(angle = 140, length = fanSize * 12 / 25, tag = $seg01)
|> tangentialArc(radius = fanSize * 1 / 50, angle = 90)
|> angledLine(angle = -130, length = fanSize * 8 / 25)
|> tangentialArc(radius = fanSize * 1 / 50, angle = 90)
|> angledLine(angle = segAng(seg01) + 180, length = fanSize * 2 / 25)
|> tangentialArc(radius = fanSize * 8 / 25, angle = 40)
|> xLine(length = fanSize * 3 / 25)
|> tangentialArc(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
|> patternCircular2d(
instances = 4,
center = [0, 0],
arcDegrees = 360,
rotateDuplicates = true,
)
|> extrude(length = -4)
// Create the middle segment of the fan housing body
housingMiddleLength = fanSize / 3
housingMiddleRadius = fanSize / 3 - 1
bodyMiddle = startSketchOn(bottomFaceSketch, face = END)
|> startProfileAt([
housingMiddleLength / 2,
-housingMiddleLength / 2 - housingMiddleRadius
], %)
|> tangentialArc(radius = housingMiddleRadius, angle = 90)
|> yLine(length = housingMiddleLength)
|> tangentialArc(radius = housingMiddleRadius, angle = 90)
|> xLine(length = -housingMiddleLength)
|> tangentialArc(radius = housingMiddleRadius, angle = 90)
|> yLine(length = -housingMiddleLength)
|> tangentialArc(radius = housingMiddleRadius, angle = 90)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> extrude(length = fanHeight - 4 - 4)
// Cut a hole in the body to accommodate the fan
bodyFanHole = startSketchOn(bodyMiddle, face = END)
|> circle(center = [0, 0], radius = fanSize * 23 / 50)
|> extrude(length = -(fanHeight - 4 - 4))
// Top mounting face. Cut a hole in the face to accommodate the fan
topFaceSketch = startSketchOn(bodyMiddle, face = END)
topHoles = startProfileAt([-fanSize / 2, -fanSize / 2], topFaceSketch)
|> angledLine(angle = 0, length = fanSize, tag = $rectangleSegmentA002)
|> angledLine(angle = segAng(rectangleSegmentA002) + 90, length = fanSize, tag = $rectangleSegmentB002)
|> angledLine(angle = segAng(rectangleSegmentA002), length = -segLen(rectangleSegmentA002), tag = $rectangleSegmentC002)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)], tag = $rectangleSegmentD002)
|> close()
|> hole(circle(center = [0, 0], radius = fanSize * 23 / 50), %)
|> hole(circle(
center = [
mountingHoleSpacing / 2,
mountingHoleSpacing / 2
],
radius = mountingHoleSize / 2,
), %)
|> hole(circle(
center = [
-mountingHoleSpacing / 2,
mountingHoleSpacing / 2
],
radius = mountingHoleSize / 2,
), %)
|> hole(circle(
center = [
mountingHoleSpacing / 2,
-mountingHoleSpacing / 2
],
radius = mountingHoleSize / 2,
), %)
|> hole(circle(
center = [
-mountingHoleSpacing / 2,
-mountingHoleSpacing / 2
],
radius = mountingHoleSize / 2,
), %)
|> extrude(length = 4)
// Create a housing for the electric motor to sit in
motorHousing = startSketchOn(bottomFaceSketch, face = END)
|> circle(center = [0, 0], radius = 11.2)
|> extrude(length = 16)
startSketchOn(motorHousing, face = END)
|> circle(center = [0, 0], radius = 10)
|> extrude(length = -16)
|> appearance(color = "#a55e2c")
|> fillet(
radius = abs(fanSize - mountingHoleSpacing) / 2,
tags = [
getNextAdjacentEdge(rectangleSegmentA001),
getNextAdjacentEdge(rectangleSegmentB001),
getNextAdjacentEdge(rectangleSegmentC001),
getNextAdjacentEdge(rectangleSegmentD001),
getNextAdjacentEdge(rectangleSegmentA002),
getNextAdjacentEdge(rectangleSegmentB002),
getNextAdjacentEdge(rectangleSegmentC002),
getNextAdjacentEdge(rectangleSegmentD002)
],
)

View File

@ -0,0 +1,92 @@
// Fan
// Spinning axial fan that moves airflow
// Set units
@settings(defaultLengthUnit = mm)
// Import parameters
import * from "parameters.kcl"
// Model the center of the fan
fanCenter = startSketchOn(XZ)
|> startProfileAt([-0.0001, fanHeight], %)
|> xLine(endAbsolute = -15 + 1.5)
|> tangentialArc(radius = 1.5, angle = 90)
|> yLine(endAbsolute = 4.5)
|> xLine(endAbsolute = -13)
|> yLine(endAbsolute = profileStartY(%) - 5)
|> tangentialArc(radius = 1, angle = -90)
|> xLine(endAbsolute = -1)
|> yLine(length = 2)
|> xLine(length = -0.15)
|> line(endAbsolute = [
profileStartX(%) - 1,
profileStartY(%) - 1.4
])
|> xLine(endAbsolute = profileStartX(%))
|> yLine(endAbsolute = profileStartY(%))
|> close()
|> revolve(axis = {
direction = [0.0, 1.0],
origin = [0.0, 0.0]
})
|> appearance(color = "#f3e2d8")
// Create a function for a lofted fan blade cross section that rotates about the center hub of the fan
fn fanBlade(offsetHeight, startAngle) {
fanBlade = startSketchOn(offsetPlane(XY, offset = offsetHeight))
|> startProfileAt([
15 * cos(toRadians(startAngle)),
15 * sin(toRadians(startAngle))
], %)
|> arc({
angleStart = startAngle,
angleEnd = startAngle + 14,
radius = 15
}, %)
|> arcTo({
end = [
fanSize * 22 / 50 * cos(toRadians(startAngle - 20)),
fanSize * 22 / 50 * sin(toRadians(startAngle - 20))
],
interior = [
fanSize * 11 / 50 * cos(toRadians(startAngle + 3)),
fanSize * 11 / 50 * sin(toRadians(startAngle + 3))
]
}, %)
|> arcTo({
end = [
fanSize * 22 / 50 * cos(toRadians(startAngle - 24)),
fanSize * 22 / 50 * sin(toRadians(startAngle - 24))
],
interior = [
fanSize * 22 / 50 * cos(toRadians(startAngle - 22)),
fanSize * 22 / 50 * sin(toRadians(startAngle - 22))
]
}, %)
|> arcTo({
end = [profileStartX(%), profileStartY(%)],
interior = [
fanSize * 11 / 50 * cos(toRadians(startAngle - 5)),
fanSize * 11 / 50 * sin(toRadians(startAngle - 5))
]
}, %)
|> close()
return fanBlade
}
// Loft the fan blade cross sections into a single blade, then pattern them about the fan center
loft([
fanBlade(4.5, 50),
fanBlade((fanHeight - 2 - 4) / 2, 30),
fanBlade(fanHeight - 2, 0)
])
|> appearance(color = "#f3e2d8")
|> patternCircular3d(
%,
instances = 9,
axis = [0, 0, 1],
center = [0, 0, 0],
arcDegrees = 360,
rotateDuplicates = true,
)

View File

@ -0,0 +1,15 @@
// PC Fan
// A small axial fan, used to push or draw airflow over components to remove excess heat
// Set units
@settings(defaultLengthUnit = mm)
// Import all parts into assembly file
import "fan-housing.kcl" as fanHousing
import "motor.kcl" as motor
import "fan.kcl" as fan
// Produce the model for each imported part
fanHousing
motor
fan

View File

@ -0,0 +1,22 @@
// Motor
// A small electric motor to power the fan
// Set Units
@settings(defaultLengthUnit = mm)
// Import Parameters
import * from "parameters.kcl"
// Model the motor body and stem
topFacePlane = offsetPlane(XY, offset = 4)
motorBody = startSketchOn(topFacePlane)
|> circle(center = [0, 0], radius = 10, tag = $seg04)
|> extrude(length = 17)
|> appearance(color = "#021b55")
|> fillet(radius = 2, tags = [getOppositeEdge(seg04), seg04])
startSketchOn(offsetPlane(XY, offset = 21))
|> circle(center = [0, 0], radius = 1)
|> extrude(length = 3.8)
|> appearance(color = "#dbc89e")

View File

@ -0,0 +1,10 @@
// Global parameters for the axial fan
// Set units
@settings(defaultLengthUnit = mm)
// Define Parameters
export fanSize = 120
export fanHeight = 25
export mountingHoleSpacing = 105
export mountingHoleSize = 4.5

View File

@ -0,0 +1,35 @@
// Bottle
// A simple bottle with a hollow, watertight interior
// Set Units
@settings(defaultLengthUnit = mm)
// Input dimensions to define the bottle
bottleWidth = 80
bottleLength = 125
bottleHeight = 220
neckDepth = 18
neckDiameter = 45
wallThickness = 4
// Create a rounded body for the bottle
bottleBody = startSketchOn(XY)
|> startProfileAt([-bottleLength / 2, 0], %)
|> yLine(length = bottleWidth / 3)
|> arcTo({
end = [bottleLength / 2, bottleWidth / 3],
interior = [0, bottleWidth / 2]
}, %)
|> yLine(endAbsolute = 0)
|> mirror2d(axis = X)
|> close()
|> extrude(length = bottleHeight - neckDepth)
// Create a neck centered at the top of the bottle
bottleNeck = startSketchOn(bottleBody, face = END)
|> circle(center = [0, 0], radius = neckDiameter / 2)
|> extrude(length = neckDepth)
// Define a shell operation so that the entire body and neck are hollow, with only the top face opened
bottleShell = shell(bottleNeck, faces = [END], thickness = wallThickness)
|> appearance(%, color = "#0078c2")

View File

@ -6,6 +6,13 @@
"title": "80/20 Rail",
"description": "An 80/20 extruded aluminum linear rail. T-slot profile adjustable by profile height, rail length, and origin position"
},
{
"file": "main.kcl",
"pathFromProjectDirectoryToFirstFile": "axial-fan/main.kcl",
"multipleFiles": true,
"title": "PC Fan",
"description": "A small axial fan, used to push or draw airflow over components to remove excess heat"
},
{
"file": "main.kcl",
"pathFromProjectDirectoryToFirstFile": "ball-bearing/main.kcl",
@ -20,6 +27,13 @@
"title": "Bench",
"description": "This is a slight remix of Depep1's original 3D Boaty (https://www.printables.com/model/1141963-3d-boaty). This is a tool used for benchmarking 3D FDM printers for bed adhesion, overhangs, bridging and top surface quality. The name of this file is a bit of misnomer, the shape of the object is a typical park bench."
},
{
"file": "main.kcl",
"pathFromProjectDirectoryToFirstFile": "bottle/main.kcl",
"multipleFiles": false,
"title": "Bottle",
"description": "A simple bottle with a hollow, watertight interior"
},
{
"file": "main.kcl",
"pathFromProjectDirectoryToFirstFile": "bracket/main.kcl",

Binary file not shown (added, 105 KiB)

Binary file not shown (added, 70 KiB)

rust/Cargo.lock (generated, 95 changed lines)
View File

@ -758,7 +758,7 @@ dependencies = [
"hashbrown 0.14.5",
"lock_api",
"once_cell",
"parking_lot_core",
"parking_lot_core 0.9.10",
]
[[package]]
@ -772,7 +772,7 @@ dependencies = [
"hashbrown 0.14.5",
"lock_api",
"once_cell",
"parking_lot_core",
"parking_lot_core 0.9.10",
]
[[package]]
@ -847,7 +847,7 @@ dependencies = [
"backtrace",
"lazy_static",
"mintex",
"parking_lot",
"parking_lot 0.12.3",
"rustc-hash 1.1.0",
"serde",
"serde_json",
@ -1697,6 +1697,18 @@ dependencies = [
"similar",
]
[[package]]
name = "instant"
version = "0.1.13"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e0242819d153cba4b4b05a5a8f2a7e9bbf97b6055b2a002b395c96b5ff3c0222"
dependencies = [
"cfg-if",
"js-sys",
"wasm-bindgen",
"web-sys",
]
[[package]]
name = "ipnet"
version = "2.11.0"
@ -1885,6 +1897,7 @@ dependencies = [
"image",
"indexmap 2.8.0",
"insta",
"instant",
"itertools 0.13.0",
"js-sys",
"kcl-derive-docs",
@ -1907,6 +1920,7 @@ dependencies = [
"serde_json",
"sha2",
"tabled",
"tempdir",
"thiserror 2.0.12",
"tokio",
"tokio-tungstenite",
@ -1920,6 +1934,7 @@ dependencies = [
"validator",
"wasm-bindgen",
"wasm-bindgen-futures",
"wasm-timer",
"web-sys",
"web-time",
"winnow 0.6.24",
@ -2459,6 +2474,17 @@ dependencies = [
"unicode-width 0.2.0",
]
[[package]]
name = "parking_lot"
version = "0.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7d17b78036a60663b797adeaee46f5c9dfebb86948d1255007a1d6be0271ff99"
dependencies = [
"instant",
"lock_api",
"parking_lot_core 0.8.6",
]
[[package]]
name = "parking_lot"
version = "0.12.3"
@ -2466,7 +2492,21 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1bf18183cf54e8d6059647fc3063646a1801cf30896933ec2311622cc4b9a27"
dependencies = [
"lock_api",
"parking_lot_core",
"parking_lot_core 0.9.10",
]
[[package]]
name = "parking_lot_core"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "60a2cfe6f0ad2bfc16aefa463b497d5c7a5ecd44a23efa72aa342d90177356dc"
dependencies = [
"cfg-if",
"instant",
"libc",
"redox_syscall 0.2.16",
"smallvec",
"winapi",
]
[[package]]
@ -2477,7 +2517,7 @@ checksum = "1e401f977ab385c9e4e3ab30627d6f26d00e2c73eef317493c4ec6d468726cf8"
dependencies = [
"cfg-if",
"libc",
"redox_syscall",
"redox_syscall 0.5.10",
"smallvec",
"windows-targets 0.52.6",
]
@ -3017,6 +3057,15 @@ dependencies = [
"rand_core 0.3.1",
]
[[package]]
name = "redox_syscall"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fb5a58c1855b4b6819d59012155603f0b22ad30cad752600aadfcb695265519a"
dependencies = [
"bitflags 1.3.2",
]
[[package]]
name = "redox_syscall"
version = "0.5.10"
@ -3084,6 +3133,15 @@ version = "0.8.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "remove_dir_all"
version = "0.5.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3acd125665422973a33ac9d3dd2df85edad0f4ae9b00dafb1a05e43a9f5ef8e7"
dependencies = [
"winapi",
]
[[package]]
name = "reqwest"
version = "0.12.15"
@ -3779,6 +3837,16 @@ version = "0.13.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e502f78cdbb8ba4718f566c418c52bc729126ffd16baee5baa718cf25dd5a69a"
[[package]]
name = "tempdir"
version = "0.3.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "15f2b5fb00ccdf689e0149d1b1b3c03fead81c2b37735d812fa8bddbbf41b6d8"
dependencies = [
"rand 0.4.6",
"remove_dir_all",
]
[[package]]
name = "tempfile"
version = "3.19.0"
@ -3964,7 +4032,7 @@ dependencies = [
"bytes",
"libc",
"mio",
"parking_lot",
"parking_lot 0.12.3",
"pin-project-lite",
"signal-hook-registry",
"socket2",
@ -4561,6 +4629,21 @@ dependencies = [
"web-sys",
]
[[package]]
name = "wasm-timer"
version = "0.2.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be0ecb0db480561e9a7642b5d3e4187c128914e58aa84330b9493e3eb68c5e7f"
dependencies = [
"futures",
"js-sys",
"parking_lot 0.11.2",
"pin-utils",
"wasm-bindgen",
"wasm-bindgen-futures",
"web-sys",
]
[[package]]
name = "web-sys"
version = "0.3.77"

View File

@ -22,7 +22,6 @@ debug = 0
[profile.dev.package]
insta = { opt-level = 3 }
similar = { opt-level = 3 }
[profile.test]
debug = "line-tables-only"

View File

@ -33,10 +33,16 @@ new-sim-test test_name render_to_png="true":
# Run a KCL deterministic simulation test case and accept output.
overwrite-sim-test-sample test_name:
EXPECTORATE=overwrite TWENTY_TWENTY=overwrite {{cita}} -p kcl-lib --no-quiet -- simulation_tests::kcl_samples::parse_{{test_name}}
EXPECTORATE=overwrite TWENTY_TWENTY=overwrite {{cita}} -p kcl-lib --no-quiet -- simulation_tests::kcl_samples::unparse_{{test_name}}
EXPECTORATE=overwrite TWENTY_TWENTY=overwrite {{cita}} -p kcl-lib --no-quiet -- simulation_tests::kcl_samples::kcl_test_execute_{{test_name}}
EXPECTORATE=overwrite TWENTY_TWENTY=overwrite {{cita}} -p kcl-lib --no-quiet -- simulation_tests::kcl_samples::test_after_engine_generate_manifest
overwrite-sim-test test_name:
EXPECTORATE=overwrite TWENTY_TWENTY=overwrite {{cita}} -p kcl-lib --no-quiet -- simulation_tests::{{test_name}}::parse
EXPECTORATE=overwrite TWENTY_TWENTY=overwrite {{cita}} -p kcl-lib --no-quiet -- simulation_tests::{{test_name}}::unparse
EXPECTORATE=overwrite TWENTY_TWENTY=overwrite {{cita}} -p kcl-lib --no-quiet -- simulation_tests::{{test_name}}::kcl_test_execute
[ {{test_name}} != "kcl_samples" ] || EXPECTORATE=overwrite TWENTY_TWENTY=overwrite {{cita}} -p kcl-lib --no-quiet -- simulation_tests::{{test_name}}::test_after_engine_generate_manifest
# Regenerate all the simulation test output.
redo-sim-tests:

View File

@ -69,6 +69,7 @@ serde = { workspace = true }
serde_json = { workspace = true }
sha2 = "0.10.8"
tabled = { version = "0.18.0", optional = true }
tempdir = "0.3.7"
thiserror = "2.0.0"
toml = "0.8.19"
ts-rs = { version = "10.1.0", features = [
@ -88,14 +89,17 @@ winnow = "=0.6.24"
zip = { workspace = true }
[target.'cfg(target_arch = "wasm32")'.dependencies]
instant = { version = "0.1.13", features = ["wasm-bindgen", "inaccurate"] }
js-sys = { version = "0.3.72" }
tokio = { workspace = true, features = ["sync", "time"] }
tower-lsp = { workspace = true, features = ["runtime-agnostic"] }
wasm-bindgen = "0.2.99"
wasm-bindgen-futures = "0.4.49"
wasm-timer = "0.2.5"
web-sys = { version = "0.3.76", features = ["console"] }
[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
instant = "0.1.13"
tokio = { workspace = true, features = ["full"] }
tokio-tungstenite = { version = "0.24.0", features = [
"rustls-tls-native-roots",
@ -104,6 +108,7 @@ tower-lsp = { workspace = true, features = ["proposed", "default"] }
[features]
default = ["cli", "engine"]
benchmark-execution = []
cli = ["dep:clap", "kittycad/clap"]
dhat-heap = ["dep:dhat"]
# For the lsp server, when run with stdout for rpc we want to disable println.
@ -122,7 +127,7 @@ criterion = { version = "0.5.1", features = ["async_tokio"] }
expectorate = "1.1.0"
handlebars = "6.3.2"
image = { version = "0.25.6", default-features = false, features = ["png"] }
insta = { version = "1.41.1", features = ["json", "filters", "redactions"] }
insta = { version = "1.42.2", features = ["json", "filters", "redactions"] }
kcl-directory-test-macro = { version = "0.1", path = "../kcl-directory-test-macro" }
miette = { version = "7.5.0", features = ["fancy"] }
pretty_assertions = "1.4.1"

View File

@ -45,6 +45,7 @@ fn run_benchmarks(c: &mut Criterion) {
let benchmark_dirs = discover_benchmark_dirs(&base_dir);
#[cfg(feature = "benchmark-execution")]
let rt = tokio::runtime::Runtime::new().unwrap();
for dir in benchmark_dirs {
@ -67,12 +68,14 @@ fn run_benchmarks(c: &mut Criterion) {
.sample_size(10)
.measurement_time(std::time::Duration::from_secs(1)); // Short measurement time to keep it from running in parallel
#[cfg(feature = "benchmark-execution")]
let program = kcl_lib::Program::parse_no_errs(&input_content).unwrap();
group.bench_function(format!("parse_{}", dir_name), |b| {
b.iter(|| kcl_lib::Program::parse_no_errs(black_box(&input_content)).unwrap())
});
#[cfg(feature = "benchmark-execution")]
group.bench_function(format!("execute_{}", dir_name), |b| {
b.iter(|| {
if let Err(err) = rt.block_on(async {

View File

@ -7,6 +7,7 @@ use kittycad_modeling_cmds as kcmc;
#[derive(Debug)]
struct Variation<'a> {
code: &'a str,
other_files: Vec<(std::path::PathBuf, std::string::String)>,
settings: &'a kcl_lib::ExecutorSettings,
}
@ -31,7 +32,31 @@ async fn cache_test(
// set the new settings.
ctx.settings = variation.settings.clone();
let outcome = ctx.run_with_caching(program).await.unwrap();
if !variation.other_files.is_empty() {
let tmp_dir = std::env::temp_dir();
let tmp_dir = tmp_dir
.join(format!("kcl_test_{}", test_name))
.join(uuid::Uuid::new_v4().to_string());
// Create a temporary file for each of the other files.
for (variant_path, variant_code) in &variation.other_files {
let tmp_file = tmp_dir.join(variant_path);
std::fs::create_dir_all(tmp_file.parent().unwrap()).unwrap();
std::fs::write(tmp_file, variant_code).unwrap();
}
ctx.settings.project_directory = Some(tmp_dir.clone());
}
let outcome = match ctx.run_with_caching(program).await {
Ok(outcome) => outcome,
Err(error) => {
let report = error.clone().into_miette_report_with_outputs(variation.code).unwrap();
let report = miette::Report::new(report);
panic!("{:?}", report);
}
};
let snapshot_png_bytes = ctx.prepare_snapshot().await.unwrap().contents.0;
// Decode the snapshot, return it.
@ -68,6 +93,7 @@ async fn kcl_test_cache_change_grid_visualizes_grid_off_to_on() {
vec![
Variation {
code,
other_files: vec![],
settings: &kcl_lib::ExecutorSettings {
show_grid: false,
..Default::default()
@ -75,6 +101,7 @@ async fn kcl_test_cache_change_grid_visualizes_grid_off_to_on() {
},
Variation {
code,
other_files: vec![],
settings: &kcl_lib::ExecutorSettings {
show_grid: true,
..Default::default()
@ -107,6 +134,7 @@ async fn kcl_test_cache_change_grid_visualizes_grid_on_to_off() {
vec![
Variation {
code,
other_files: vec![],
settings: &kcl_lib::ExecutorSettings {
show_grid: true,
..Default::default()
@ -114,6 +142,7 @@ async fn kcl_test_cache_change_grid_visualizes_grid_on_to_off() {
},
Variation {
code,
other_files: vec![],
settings: &kcl_lib::ExecutorSettings {
show_grid: false,
..Default::default()
@ -146,6 +175,7 @@ async fn kcl_test_cache_change_highlight_edges_changes_visual() {
vec![
Variation {
code,
other_files: vec![],
settings: &kcl_lib::ExecutorSettings {
highlight_edges: true,
..Default::default()
@ -153,6 +183,7 @@ async fn kcl_test_cache_change_highlight_edges_changes_visual() {
},
Variation {
code,
other_files: vec![],
settings: &kcl_lib::ExecutorSettings {
highlight_edges: false,
..Default::default()
@ -168,6 +199,58 @@ async fn kcl_test_cache_change_highlight_edges_changes_visual() {
assert!(first.1 != second.1);
}
#[tokio::test(flavor = "multi_thread")]
async fn kcl_test_cache_multi_file_same_code_dont_reexecute() {
let code = r#"import "toBeImported.kcl" as importedCube
importedCube
sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([-134.53, -56.17], sketch001)
|> angledLine(angle = 0, length = 79.05, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 76.28)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001), tag = $seg01)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)], tag = $seg02)
|> close()
extrude001 = extrude(profile001, length = 100)
sketch003 = startSketchOn(extrude001, face = seg02)
sketch002 = startSketchOn(extrude001, face = seg01)
"#;
let other_file = (
std::path::PathBuf::from("toBeImported.kcl"),
r#"sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([281.54, 305.81], sketch001)
|> angledLine(angle = 0, length = 123.43, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 85.99)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001))
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
extrude(profile001, length = 100)"#
.to_string(),
);
let result = cache_test(
"multi_file_same_code_dont_reexecute",
vec![
Variation {
code,
other_files: vec![other_file.clone()],
settings: &Default::default(),
},
Variation {
code,
other_files: vec![other_file],
settings: &Default::default(),
},
],
)
.await;
result.first().unwrap();
result.last().unwrap();
}
#[tokio::test(flavor = "multi_thread")]
async fn kcl_test_cache_add_line_preserves_artifact_commands() {
let code = r#"sketch001 = startSketchOn('XY')
@ -190,10 +273,12 @@ extrude(sketch001, length = 4)
vec![
Variation {
code,
other_files: vec![],
settings: &Default::default(),
},
Variation {
code: code_with_extrude.as_str(),
other_files: vec![],
settings: &Default::default(),
},
],
@ -282,3 +367,109 @@ async fn kcl_test_cache_empty_file_pop_cache_empty_file_planes_work() {
ctx.close().await;
}
#[tokio::test(flavor = "multi_thread")]
async fn kcl_test_cache_multi_file_after_empty_with_export() {
let code = r#"import importedCube from "toBeImported.kcl"
importedCube
sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([-134.53, -56.17], sketch001)
|> angledLine(angle = 0, length = 79.05, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 76.28)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001), tag = $seg01)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)], tag = $seg02)
|> close()
extrude001 = extrude(profile001, length = 100)
sketch003 = startSketchOn(extrude001, face = seg02)
sketch002 = startSketchOn(extrude001, face = seg01)
"#;
let other_file = (
std::path::PathBuf::from("toBeImported.kcl"),
r#"sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([281.54, 305.81], sketch001)
|> angledLine(angle = 0, length = 123.43, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 85.99)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001))
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
export importedCube = extrude(profile001, length = 100)
"#
.to_string(),
);
let result = cache_test(
"multi_file_after_empty",
vec![
Variation {
code: "",
other_files: vec![],
settings: &Default::default(),
},
Variation {
code,
other_files: vec![other_file],
settings: &Default::default(),
},
],
)
.await;
result.first().unwrap();
result.last().unwrap();
}
#[tokio::test(flavor = "multi_thread")]
async fn kcl_test_cache_multi_file_after_empty_with_woo() {
let code = r#"import "toBeImported.kcl" as importedCube
importedCube
sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([-134.53, -56.17], sketch001)
|> angledLine(angle = 0, length = 79.05, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 76.28)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001), tag = $seg01)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)], tag = $seg02)
|> close()
extrude001 = extrude(profile001, length = 100)
sketch003 = startSketchOn(extrude001, face = seg02)
sketch002 = startSketchOn(extrude001, face = seg01)
"#;
let other_file = (
std::path::PathBuf::from("toBeImported.kcl"),
r#"sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([281.54, 305.81], sketch001)
|> angledLine(angle = 0, length = 123.43, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 85.99)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001))
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
extrude(profile001, length = 100)
"#
.to_string(),
);
let result = cache_test(
"multi_file_after_empty",
vec![
Variation {
code: "",
other_files: vec![],
settings: &Default::default(),
},
Variation {
code,
other_files: vec![other_file],
settings: &Default::default(),
},
],
)
.await;
result.first().unwrap();
result.last().unwrap();
}

View File

@ -2109,7 +2109,7 @@ async fn kcl_test_better_type_names() {
},
None => todo!(),
};
assert_eq!(err, "This function expected the input argument to be one or more Solids but it's actually of type Sketch. You can convert a sketch (2D) into a Solid (3D) by calling a function like `extrude` or `revolve`");
assert_eq!(err, "This function expected the input argument to be one or more Solids or imported geometry but it's actually of type Sketch. You can convert a sketch (2D) into a Solid (3D) by calling a function like `extrude` or `revolve`");
}
#[tokio::test(flavor = "multi_thread")]

Binary files not shown: four snapshot images added (19 KiB, 35 KiB, 35 KiB, 35 KiB) and three snapshot images updated (136 KiB → 133 KiB, 148 KiB → 141 KiB, 96 KiB → 105 KiB).

View File

@ -133,6 +133,7 @@ impl StdLibFnArg {
|| self.type_ == "[Solid]"
|| self.type_ == "SketchSurface"
|| self.type_ == "SketchOrSurface"
|| self.type_ == "SolidOrImportedGeometry"
|| self.type_ == "SolidOrSketchOrImportedGeometry")
&& (self.required || self.include_in_snippet)
{

View File

@ -18,7 +18,7 @@ use tokio::sync::{mpsc, oneshot, RwLock};
use tokio_tungstenite::tungstenite::Message as WsMsg;
use uuid::Uuid;
use super::{EngineStats, ExecutionKind};
use super::EngineStats;
use crate::{
engine::EngineManager,
errors::{KclError, KclErrorDetails},
@ -45,13 +45,13 @@ pub struct EngineConnection {
batch: Arc<RwLock<Vec<(WebSocketRequest, SourceRange)>>>,
batch_end: Arc<RwLock<IndexMap<uuid::Uuid, (WebSocketRequest, SourceRange)>>>,
artifact_commands: Arc<RwLock<Vec<ArtifactCommand>>>,
ids_of_async_commands: Arc<RwLock<IndexMap<Uuid, SourceRange>>>,
/// The default planes for the scene.
default_planes: Arc<RwLock<Option<DefaultPlanes>>>,
/// If the server sends session data, it'll be copied to here.
session_data: Arc<RwLock<Option<ModelingSessionData>>>,
execution_kind: Arc<RwLock<ExecutionKind>>,
stats: EngineStats,
}
@ -116,6 +116,17 @@ impl Drop for TcpReadHandle {
}
}
struct ResponsesInformation {
/// The responses from the engine.
responses: Arc<RwLock<IndexMap<uuid::Uuid, WebSocketResponse>>>,
}
impl ResponsesInformation {
pub async fn add(&self, id: Uuid, response: WebSocketResponse) {
self.responses.write().await.insert(id, response);
}
}
/// Requests to send to the engine, and a way to await a response.
struct ToEngineReq {
/// The request to send
@ -228,10 +239,13 @@ impl EngineConnection {
let session_data: Arc<RwLock<Option<ModelingSessionData>>> = Arc::new(RwLock::new(None));
let session_data2 = session_data.clone();
let responses: Arc<RwLock<IndexMap<uuid::Uuid, WebSocketResponse>>> = Arc::new(RwLock::new(IndexMap::new()));
let responses_clone = responses.clone();
let ids_of_async_commands: Arc<RwLock<IndexMap<Uuid, SourceRange>>> = Arc::new(RwLock::new(IndexMap::new()));
let socket_health = Arc::new(RwLock::new(SocketHealth::Active));
let pending_errors = Arc::new(RwLock::new(Vec::new()));
let pending_errors_clone = pending_errors.clone();
let responses_information = ResponsesInformation {
responses: responses.clone(),
};
let socket_health_tcp_read = socket_health.clone();
let tcp_read_handle = tokio::spawn(async move {
@ -245,8 +259,7 @@ impl EngineConnection {
WebSocketResponse::Success(SuccessWebSocketResponse {
resp: OkWebSocketResponseData::ModelingBatch { responses },
..
}) =>
{
}) => {
#[expect(
clippy::iter_over_hash_type,
reason = "modeling command uses a HashMap and keys are random, so we don't really have a choice"
@ -255,26 +268,32 @@ impl EngineConnection {
let id: uuid::Uuid = (*resp_id).into();
match batch_response {
BatchResponse::Success { response } => {
responses_clone.write().await.insert(
id,
WebSocketResponse::Success(SuccessWebSocketResponse {
success: true,
request_id: Some(id),
resp: OkWebSocketResponseData::Modeling {
modeling_response: response.clone(),
},
}),
);
// If the id is in our ids of async commands, remove
// it.
responses_information
.add(
id,
WebSocketResponse::Success(SuccessWebSocketResponse {
success: true,
request_id: Some(id),
resp: OkWebSocketResponseData::Modeling {
modeling_response: response.clone(),
},
}),
)
.await;
}
BatchResponse::Failure { errors } => {
responses_clone.write().await.insert(
id,
WebSocketResponse::Failure(FailureWebSocketResponse {
success: false,
request_id: Some(id),
errors: errors.clone(),
}),
);
responses_information
.add(
id,
WebSocketResponse::Failure(FailureWebSocketResponse {
success: false,
request_id: Some(id),
errors: errors.clone(),
}),
)
.await;
}
}
}
@ -292,14 +311,16 @@ impl EngineConnection {
errors,
}) => {
if let Some(id) = request_id {
responses_clone.write().await.insert(
*id,
WebSocketResponse::Failure(FailureWebSocketResponse {
success: false,
request_id: *request_id,
errors: errors.clone(),
}),
);
responses_information
.add(
*id,
WebSocketResponse::Failure(FailureWebSocketResponse {
success: false,
request_id: *request_id,
errors: errors.clone(),
}),
)
.await;
} else {
// Add it to our pending errors.
let mut pe = pending_errors_clone.write().await;
@ -315,7 +336,7 @@ impl EngineConnection {
}
if let Some(id) = id {
responses_clone.write().await.insert(id, ws_resp.clone());
responses_information.add(id, ws_resp.clone()).await;
}
}
Err(e) => {
@ -342,9 +363,9 @@ impl EngineConnection {
batch: Arc::new(RwLock::new(Vec::new())),
batch_end: Arc::new(RwLock::new(IndexMap::new())),
artifact_commands: Arc::new(RwLock::new(Vec::new())),
ids_of_async_commands,
default_planes: Default::default(),
session_data,
execution_kind: Default::default(),
stats: Default::default(),
})
}
@ -368,16 +389,8 @@ impl EngineManager for EngineConnection {
self.artifact_commands.clone()
}
async fn execution_kind(&self) -> ExecutionKind {
let guard = self.execution_kind.read().await;
*guard
}
async fn replace_execution_kind(&self, execution_kind: ExecutionKind) -> ExecutionKind {
let mut guard = self.execution_kind.write().await;
let original = *guard;
*guard = execution_kind;
original
fn ids_of_async_commands(&self) -> Arc<RwLock<IndexMap<Uuid, SourceRange>>> {
self.ids_of_async_commands.clone()
}
fn stats(&self) -> &EngineStats {
@ -400,13 +413,13 @@ impl EngineManager for EngineConnection {
Ok(())
}
async fn inner_send_modeling_cmd(
async fn inner_fire_modeling_cmd(
&self,
id: uuid::Uuid,
_id: uuid::Uuid,
source_range: SourceRange,
cmd: WebSocketRequest,
_id_to_source_range: HashMap<Uuid, SourceRange>,
) -> Result<WebSocketResponse, KclError> {
) -> Result<(), KclError> {
let (tx, rx) = oneshot::channel();
// Send the request to the engine, via the actor.
@ -438,6 +451,19 @@ impl EngineManager for EngineConnection {
})
})?;
Ok(())
}
async fn inner_send_modeling_cmd(
&self,
id: uuid::Uuid,
source_range: SourceRange,
cmd: WebSocketRequest,
id_to_source_range: HashMap<Uuid, SourceRange>,
) -> Result<WebSocketResponse, KclError> {
self.inner_fire_modeling_cmd(id, source_range, cmd, id_to_source_range)
.await?;
// Wait for the response.
let current_time = std::time::Instant::now();
while current_time.elapsed().as_secs() < 60 {

View File

@ -12,11 +12,11 @@ use kcmc::{
WebSocketResponse,
},
};
use kittycad_modeling_cmds::{self as kcmc};
use kittycad_modeling_cmds::{self as kcmc, websocket::ModelingCmdReq, ImportFiles, ModelingCmd};
use tokio::sync::RwLock;
use uuid::Uuid;
use super::{EngineStats, ExecutionKind};
use super::EngineStats;
use crate::{
errors::KclError,
exec::DefaultPlanes,
@ -29,7 +29,8 @@ pub struct EngineConnection {
batch: Arc<RwLock<Vec<(WebSocketRequest, SourceRange)>>>,
batch_end: Arc<RwLock<IndexMap<uuid::Uuid, (WebSocketRequest, SourceRange)>>>,
artifact_commands: Arc<RwLock<Vec<ArtifactCommand>>>,
execution_kind: Arc<RwLock<ExecutionKind>>,
ids_of_async_commands: Arc<RwLock<IndexMap<Uuid, SourceRange>>>,
responses: Arc<RwLock<IndexMap<Uuid, WebSocketResponse>>>,
/// The default planes for the scene.
default_planes: Arc<RwLock<Option<DefaultPlanes>>>,
stats: EngineStats,
@ -41,7 +42,8 @@ impl EngineConnection {
batch: Arc::new(RwLock::new(Vec::new())),
batch_end: Arc::new(RwLock::new(IndexMap::new())),
artifact_commands: Arc::new(RwLock::new(Vec::new())),
execution_kind: Default::default(),
ids_of_async_commands: Arc::new(RwLock::new(IndexMap::new())),
responses: Arc::new(RwLock::new(IndexMap::new())),
default_planes: Default::default(),
stats: Default::default(),
})
@ -59,7 +61,7 @@ impl crate::engine::EngineManager for EngineConnection {
}
fn responses(&self) -> Arc<RwLock<IndexMap<Uuid, WebSocketResponse>>> {
Arc::new(RwLock::new(IndexMap::new()))
self.responses.clone()
}
fn stats(&self) -> &EngineStats {
@ -70,16 +72,8 @@ impl crate::engine::EngineManager for EngineConnection {
self.artifact_commands.clone()
}
async fn execution_kind(&self) -> ExecutionKind {
let guard = self.execution_kind.read().await;
*guard
}
async fn replace_execution_kind(&self, execution_kind: ExecutionKind) -> ExecutionKind {
let mut guard = self.execution_kind.write().await;
let original = *guard;
*guard = execution_kind;
original
fn ids_of_async_commands(&self) -> Arc<RwLock<IndexMap<Uuid, SourceRange>>> {
self.ids_of_async_commands.clone()
}
fn get_default_planes(&self) -> Arc<RwLock<Option<DefaultPlanes>>> {
@ -94,6 +88,25 @@ impl crate::engine::EngineManager for EngineConnection {
Ok(())
}
async fn inner_fire_modeling_cmd(
&self,
id: uuid::Uuid,
source_range: SourceRange,
cmd: WebSocketRequest,
id_to_source_range: HashMap<Uuid, SourceRange>,
) -> Result<(), KclError> {
// Pop off the id we care about.
self.ids_of_async_commands.write().await.swap_remove(&id);
// Add the response to our responses.
let response = self
.inner_send_modeling_cmd(id, source_range, cmd, id_to_source_range)
.await?;
self.responses().write().await.insert(id, response);
Ok(())
}
async fn inner_send_modeling_cmd(
&self,
id: uuid::Uuid,
@ -123,6 +136,20 @@ impl crate::engine::EngineManager for EngineConnection {
success: true,
}))
}
WebSocketRequest::ModelingCmdReq(ModelingCmdReq {
cmd: ModelingCmd::ImportFiles(ImportFiles { .. }),
cmd_id,
}) => Ok(WebSocketResponse::Success(SuccessWebSocketResponse {
request_id: Some(id),
resp: OkWebSocketResponseData::Modeling {
modeling_response: OkModelingCmdResponse::ImportFiles(
kittycad_modeling_cmds::output::ImportFiles {
object_id: cmd_id.into(),
},
),
},
success: true,
})),
WebSocketRequest::ModelingCmdReq(_) => Ok(WebSocketResponse::Success(SuccessWebSocketResponse {
request_id: Some(id),
resp: OkWebSocketResponseData::Modeling {

View File

@ -11,7 +11,7 @@ use uuid::Uuid;
use wasm_bindgen::prelude::*;
use crate::{
engine::{EngineStats, ExecutionKind},
engine::EngineStats,
errors::{KclError, KclErrorDetails},
execution::{ArtifactCommand, DefaultPlanes, IdGenerator},
SourceRange,
@ -22,6 +22,15 @@ extern "C" {
#[derive(Debug, Clone)]
pub type EngineCommandManager;
#[wasm_bindgen(method, js_name = fireModelingCommandFromWasm, catch)]
fn fire_modeling_cmd_from_wasm(
this: &EngineCommandManager,
id: String,
rangeStr: String,
cmdStr: String,
idToRangeStr: String,
) -> Result<(), js_sys::Error>;
#[wasm_bindgen(method, js_name = sendModelingCommandFromWasm, catch)]
fn send_modeling_cmd_from_wasm(
this: &EngineCommandManager,
@ -38,35 +47,128 @@ extern "C" {
#[derive(Debug, Clone)]
pub struct EngineConnection {
manager: Arc<EngineCommandManager>,
response_context: Arc<ResponseContext>,
batch: Arc<RwLock<Vec<(WebSocketRequest, SourceRange)>>>,
batch_end: Arc<RwLock<IndexMap<uuid::Uuid, (WebSocketRequest, SourceRange)>>>,
responses: Arc<RwLock<IndexMap<Uuid, WebSocketResponse>>>,
artifact_commands: Arc<RwLock<Vec<ArtifactCommand>>>,
execution_kind: Arc<RwLock<ExecutionKind>>,
ids_of_async_commands: Arc<RwLock<IndexMap<Uuid, SourceRange>>>,
/// The default planes for the scene.
default_planes: Arc<RwLock<Option<DefaultPlanes>>>,
stats: EngineStats,
}
#[wasm_bindgen]
#[derive(Debug, Clone)]
pub struct ResponseContext {
responses: Arc<RwLock<IndexMap<Uuid, WebSocketResponse>>>,
}
#[wasm_bindgen]
impl ResponseContext {
#[wasm_bindgen(constructor)]
pub fn new() -> Self {
Self {
responses: Arc::new(RwLock::new(IndexMap::new())),
}
}
// Add a response to the context.
pub async fn send_response(&self, data: js_sys::Uint8Array) -> Result<(), JsValue> {
let ws_result: WebSocketResponse = match bson::from_slice(&data.to_vec()) {
Ok(res) => res,
Err(_) => {
// We don't care about the error if we can't parse it.
return Ok(());
}
};
let id = match &ws_result {
WebSocketResponse::Success(res) => res.request_id,
WebSocketResponse::Failure(res) => res.request_id,
};
let Some(id) = id else {
// We only care if we have an id.
return Ok(());
};
// Add this response to our responses.
self.add(id, ws_result.clone()).await;
Ok(())
}
}
impl ResponseContext {
pub async fn add(&self, id: Uuid, response: WebSocketResponse) {
self.responses.write().await.insert(id, response);
}
pub fn responses(&self) -> Arc<RwLock<IndexMap<Uuid, WebSocketResponse>>> {
self.responses.clone()
}
}
// Safety: WebAssembly will only ever run in a single-threaded context.
unsafe impl Send for EngineConnection {}
unsafe impl Sync for EngineConnection {}
impl EngineConnection {
pub async fn new(manager: EngineCommandManager) -> Result<EngineConnection, JsValue> {
pub async fn new(
manager: EngineCommandManager,
response_context: Arc<ResponseContext>,
) -> Result<EngineConnection, JsValue> {
#[allow(clippy::arc_with_non_send_sync)]
Ok(EngineConnection {
manager: Arc::new(manager),
batch: Arc::new(RwLock::new(Vec::new())),
batch_end: Arc::new(RwLock::new(IndexMap::new())),
responses: Arc::new(RwLock::new(IndexMap::new())),
response_context,
artifact_commands: Arc::new(RwLock::new(Vec::new())),
execution_kind: Default::default(),
ids_of_async_commands: Arc::new(RwLock::new(IndexMap::new())),
default_planes: Default::default(),
stats: Default::default(),
})
}
async fn do_fire_modeling_cmd(
&self,
id: uuid::Uuid,
source_range: SourceRange,
cmd: WebSocketRequest,
id_to_source_range: HashMap<uuid::Uuid, SourceRange>,
) -> Result<(), KclError> {
let source_range_str = serde_json::to_string(&source_range).map_err(|e| {
KclError::Engine(KclErrorDetails {
message: format!("Failed to serialize source range: {:?}", e),
source_ranges: vec![source_range],
})
})?;
let cmd_str = serde_json::to_string(&cmd).map_err(|e| {
KclError::Engine(KclErrorDetails {
message: format!("Failed to serialize modeling command: {:?}", e),
source_ranges: vec![source_range],
})
})?;
let id_to_source_range_str = serde_json::to_string(&id_to_source_range).map_err(|e| {
KclError::Engine(KclErrorDetails {
message: format!("Failed to serialize id to source range: {:?}", e),
source_ranges: vec![source_range],
})
})?;
self.manager
.fire_modeling_cmd_from_wasm(id.to_string(), source_range_str, cmd_str, id_to_source_range_str)
.map_err(|e| {
KclError::Engine(KclErrorDetails {
message: e.to_string().into(),
source_ranges: vec![source_range],
})
})?;
Ok(())
}
async fn do_send_modeling_cmd(
&self,
id: uuid::Uuid,
@ -153,7 +255,7 @@ impl crate::engine::EngineManager for EngineConnection {
}
fn responses(&self) -> Arc<RwLock<IndexMap<Uuid, WebSocketResponse>>> {
self.responses.clone()
self.response_context.responses.clone()
}
fn stats(&self) -> &EngineStats {
@ -164,16 +266,8 @@ impl crate::engine::EngineManager for EngineConnection {
self.artifact_commands.clone()
}
async fn execution_kind(&self) -> ExecutionKind {
let guard = self.execution_kind.read().await;
*guard
}
async fn replace_execution_kind(&self, execution_kind: ExecutionKind) -> ExecutionKind {
let mut guard = self.execution_kind.write().await;
let original = *guard;
*guard = execution_kind;
original
fn ids_of_async_commands(&self) -> Arc<RwLock<IndexMap<Uuid, SourceRange>>> {
self.ids_of_async_commands.clone()
}
fn get_default_planes(&self) -> Arc<RwLock<Option<DefaultPlanes>>> {
@ -207,6 +301,19 @@ impl crate::engine::EngineManager for EngineConnection {
Ok(())
}
async fn inner_fire_modeling_cmd(
&self,
id: uuid::Uuid,
source_range: SourceRange,
cmd: WebSocketRequest,
id_to_source_range: HashMap<Uuid, SourceRange>,
) -> Result<(), KclError> {
self.do_fire_modeling_cmd(id, source_range, cmd, id_to_source_range)
.await?;
Ok(())
}
async fn inner_send_modeling_cmd(
&self,
id: uuid::Uuid,
@ -218,14 +325,7 @@ impl crate::engine::EngineManager for EngineConnection {
.do_send_modeling_cmd(id, source_range, cmd, id_to_source_range)
.await?;
// In isolated mode, we don't save the response.
if self.execution_kind().await.is_isolated() {
return Ok(ws_result);
}
let mut responses = self.responses.write().await;
responses.insert(id, ws_result.clone());
drop(responses);
self.response_context.add(id, ws_result.clone()).await;
Ok(ws_result)
}

View File

@ -47,23 +47,6 @@ lazy_static::lazy_static! {
pub static ref GRID_SCALE_TEXT_OBJECT_ID: uuid::Uuid = uuid::Uuid::parse_str("10782f33-f588-4668-8bcd-040502d26590").unwrap();
}
/// The mode of execution. When isolated, like during an import, attempting to
/// send a command results in an error.
#[derive(Debug, Default, Clone, Copy, Deserialize, Serialize, PartialEq, Eq, ts_rs::TS, JsonSchema)]
#[ts(export)]
#[serde(rename_all = "camelCase")]
pub enum ExecutionKind {
#[default]
Normal,
Isolated,
}
impl ExecutionKind {
pub fn is_isolated(&self) -> bool {
matches!(self, ExecutionKind::Isolated)
}
}
#[derive(Default, Debug)]
pub struct EngineStats {
pub commands_batched: AtomicUsize,
@ -93,6 +76,9 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
/// Get the artifact commands that have accumulated so far.
fn artifact_commands(&self) -> Arc<RwLock<Vec<ArtifactCommand>>>;
/// Get the ids of the async commands we are waiting for.
fn ids_of_async_commands(&self) -> Arc<RwLock<IndexMap<Uuid, SourceRange>>>;
/// Take the batch of commands that have accumulated so far and clear them.
async fn take_batch(&self) -> Vec<(WebSocketRequest, SourceRange)> {
std::mem::take(&mut *self.batch().write().await)
@ -113,18 +99,16 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
std::mem::take(&mut *self.artifact_commands().write().await)
}
/// Take the ids of async commands that have accumulated so far and clear them.
async fn take_ids_of_async_commands(&self) -> IndexMap<Uuid, SourceRange> {
std::mem::take(&mut *self.ids_of_async_commands().write().await)
}
/// Take the responses that have accumulated so far and clear them.
async fn take_responses(&self) -> IndexMap<Uuid, WebSocketResponse> {
std::mem::take(&mut *self.responses().write().await)
}
/// Get the current execution kind.
async fn execution_kind(&self) -> ExecutionKind;
/// Replace the current execution kind with a new value and return the
/// existing value.
async fn replace_execution_kind(&self, execution_kind: ExecutionKind) -> ExecutionKind;
/// Get the default planes.
fn get_default_planes(&self) -> Arc<RwLock<Option<DefaultPlanes>>>;
@ -160,8 +144,18 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
async fn clear_queues(&self) {
self.batch().write().await.clear();
self.batch_end().write().await.clear();
self.ids_of_async_commands().write().await.clear();
}
/// Send a modeling command and do not wait for the response message.
async fn inner_fire_modeling_cmd(
&self,
id: uuid::Uuid,
source_range: SourceRange,
cmd: WebSocketRequest,
id_to_source_range: HashMap<Uuid, SourceRange>,
) -> Result<(), crate::errors::KclError>;
/// Send a modeling command and wait for the response message.
async fn inner_send_modeling_cmd(
&self,
@ -204,6 +198,68 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
Ok(())
}
/// Ensure a specific async command has been completed.
async fn ensure_async_command_completed(
&self,
id: uuid::Uuid,
source_range: Option<SourceRange>,
) -> Result<OkWebSocketResponseData, KclError> {
let source_range = if let Some(source_range) = source_range {
source_range
} else {
// Look it up if we don't have it.
self.ids_of_async_commands()
.read()
.await
.get(&id)
.cloned()
.unwrap_or_default()
};
let current_time = instant::Instant::now();
while current_time.elapsed().as_secs() < 60 {
let responses = self.responses().read().await.clone();
let Some(resp) = responses.get(&id) else {
// Sleep for a little so we don't hog the CPU.
// No seriously WE DO NOT WANT TO PAUSE THE WHOLE APP ON THE JS SIDE.
let duration = instant::Duration::from_millis(100);
#[cfg(target_arch = "wasm32")]
wasm_timer::Delay::new(duration).await.map_err(|err| {
KclError::Internal(KclErrorDetails {
message: format!("Failed to sleep: {:?}", err),
source_ranges: vec![source_range],
})
})?;
#[cfg(not(target_arch = "wasm32"))]
tokio::time::sleep(duration).await;
continue;
};
// Parse the response: if it is an error, `?` propagates it here;
// otherwise return the parsed response data.
let response = self.parse_websocket_response(resp.clone(), source_range)?;
return Ok(response);
}
Err(KclError::Engine(KclErrorDetails {
message: "async command timed out".to_string(),
source_ranges: vec![source_range],
}))
}
/// Ensure ALL async commands have been completed.
async fn ensure_async_commands_completed(&self) -> Result<(), KclError> {
// Check if all async commands have been completed.
let ids = self.take_ids_of_async_commands().await;
// Try to get them from the responses.
for (id, source_range) in ids {
self.ensure_async_command_completed(id, Some(source_range)).await?;
}
Ok(())
}
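The polling loop in ensure_async_command_completed sleeps differently per target: wasm_timer::Delay on wasm32 (so the browser's event loop is never blocked) and tokio::time::sleep natively, matching the wasm-timer and tokio dependencies in the Cargo.toml hunks earlier in this diff. A minimal sketch of that cfg-gated sleep factored into a helper; the helper name is illustrative and not part of the crate:

    use std::time::Duration;

    // Hypothetical helper showing the cross-platform sleep pattern used in the
    // polling loop above.
    async fn platform_sleep(duration: Duration) {
        #[cfg(target_arch = "wasm32")]
        {
            // Resolves via the JS event loop; never blocks the main thread.
            let _ = wasm_timer::Delay::new(duration).await;
        }
        #[cfg(not(target_arch = "wasm32"))]
        tokio::time::sleep(duration).await;
    }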
/// Set the visibility of edges.
async fn set_edge_visibility(
&self,
@ -289,11 +345,6 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
source_range: SourceRange,
cmd: &ModelingCmd,
) -> Result<(), crate::errors::KclError> {
// In isolated mode, we don't send the command to the engine.
if self.execution_kind().await.is_isolated() {
return Ok(());
}
let req = WebSocketRequest::ModelingCmdReq(ModelingCmdReq {
cmd: cmd.clone(),
cmd_id: id.into(),
@ -315,11 +366,6 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
source_range: SourceRange,
cmds: &[ModelingCmdReq],
) -> Result<(), crate::errors::KclError> {
// In isolated mode, we don't send the command to the engine.
if self.execution_kind().await.is_isolated() {
return Ok(());
}
// Add cmds to the batch.
let mut extended_cmds = Vec::with_capacity(cmds.len());
for cmd in cmds {
@ -342,11 +388,6 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
source_range: SourceRange,
cmd: &ModelingCmd,
) -> Result<(), crate::errors::KclError> {
// In isolated mode, we don't send the command to the engine.
if self.execution_kind().await.is_isolated() {
return Ok(());
}
let req = WebSocketRequest::ModelingCmdReq(ModelingCmdReq {
cmd: cmd.clone(),
cmd_id: id.into(),
@ -365,36 +406,66 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
source_range: SourceRange,
cmd: &ModelingCmd,
) -> Result<OkWebSocketResponseData, crate::errors::KclError> {
self.batch_modeling_cmd(id, source_range, cmd).await?;
let mut requests = self.take_batch().await.clone();
// Add the command to the batch.
requests.push((
WebSocketRequest::ModelingCmdReq(ModelingCmdReq {
cmd: cmd.clone(),
cmd_id: id.into(),
}),
source_range,
));
self.stats().commands_batched.fetch_add(1, Ordering::Relaxed);
// Flush the batch queue.
self.flush_batch(false, source_range).await
self.run_batch(requests, source_range).await
}
/// Force flush the batch queue.
async fn flush_batch(
/// Send the modeling cmd async and don't wait for the response.
/// Add it to our list of async commands.
async fn async_modeling_cmd(
&self,
// Whether or not to flush the end commands as well.
// We only do this at the very end of the file.
batch_end: bool,
id: uuid::Uuid,
source_range: SourceRange,
cmd: &ModelingCmd,
) -> Result<(), crate::errors::KclError> {
// Add the command ID to the list of async commands.
self.ids_of_async_commands().write().await.insert(id, source_range);
// Add to artifact commands.
self.handle_artifact_command(cmd, id.into(), &HashMap::from([(id, source_range)]))
.await?;
// Fire off the command now, but don't wait for the response; we don't care about it.
self.inner_fire_modeling_cmd(
id,
source_range,
WebSocketRequest::ModelingCmdReq(ModelingCmdReq {
cmd: cmd.clone(),
cmd_id: id.into(),
}),
HashMap::from([(id, source_range)]),
)
.await?;
Ok(())
}
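Taken together, async_modeling_cmd and ensure_async_command_completed give callers a fire-then-await pattern: queue the command, keep executing, and block only when the response is actually needed. A minimal usage sketch, generic over EngineManager; the function and its ids are illustrative, not part of this PR:

    // The types (ModelingCmd, SourceRange, KclError, OkWebSocketResponseData)
    // are the same ones the trait above uses; the function itself is hypothetical.
    async fn fire_then_await<E: EngineManager>(
        engine: &E,
        cmd: &ModelingCmd,
    ) -> Result<OkWebSocketResponseData, KclError> {
        let id = uuid::Uuid::new_v4();
        let source_range = SourceRange::default();

        // Record the id in ids_of_async_commands and send the command without
        // waiting for the engine's reply.
        engine.async_modeling_cmd(id, source_range, cmd).await?;

        // ...other work can proceed while the engine processes the command...

        // Block (subject to the 60 s timeout above) until the response for `id`
        // appears in the shared responses map.
        engine
            .ensure_async_command_completed(id, Some(source_range))
            .await
    }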
/// Run the batch for the specific commands.
async fn run_batch(
&self,
orig_requests: Vec<(WebSocketRequest, SourceRange)>,
source_range: SourceRange,
) -> Result<OkWebSocketResponseData, crate::errors::KclError> {
let all_requests = if batch_end {
let mut requests = self.take_batch().await.clone();
requests.extend(self.take_batch_end().await.values().cloned());
requests
} else {
self.take_batch().await.clone()
};
// Return early if we have no commands to send.
if all_requests.is_empty() {
if orig_requests.is_empty() {
return Ok(OkWebSocketResponseData::Modeling {
modeling_response: OkModelingCmdResponse::Empty {},
});
}
let requests: Vec<ModelingCmdReq> = all_requests
let requests: Vec<ModelingCmdReq> = orig_requests
.iter()
.filter_map(|(val, _)| match val {
WebSocketRequest::ModelingCmdReq(ModelingCmdReq { cmd, cmd_id }) => Some(ModelingCmdReq {
@ -411,9 +482,9 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
responses: true,
});
let final_req = if all_requests.len() == 1 {
let final_req = if orig_requests.len() == 1 {
// We can unwrap here because we know the batch has only one element.
all_requests.first().unwrap().0.clone()
orig_requests.first().unwrap().0.clone()
} else {
batched_requests
};
@ -421,7 +492,7 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
// Create the map of original command IDs to source range.
// This is for the wasm side; Kurt needs it for selections.
let mut id_to_source_range = HashMap::new();
for (req, range) in all_requests.iter() {
for (req, range) in orig_requests.iter() {
match req {
WebSocketRequest::ModelingCmdReq(ModelingCmdReq { cmd: _, cmd_id }) => {
id_to_source_range.insert(Uuid::from(*cmd_id), *range);
@ -436,7 +507,7 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
}
// Do the artifact commands.
for (req, _) in all_requests.iter() {
for (req, _) in orig_requests.iter() {
match &req {
WebSocketRequest::ModelingCmdBatchReq(ModelingBatch { requests, .. }) => {
for request in requests {
@ -506,6 +577,25 @@ pub trait EngineManager: std::fmt::Debug + Send + Sync + 'static {
}
}
/// Force flush the batch queue.
async fn flush_batch(
&self,
// Whether or not to flush the end commands as well.
// We only do this at the very end of the file.
batch_end: bool,
source_range: SourceRange,
) -> Result<OkWebSocketResponseData, crate::errors::KclError> {
let all_requests = if batch_end {
let mut requests = self.take_batch().await.clone();
requests.extend(self.take_batch_end().await.values().cloned());
requests
} else {
self.take_batch().await.clone()
};
self.run_batch(all_requests, source_range).await
}
async fn make_default_plane(
&self,
plane_id: uuid::Uuid,

View File

@ -217,10 +217,13 @@ impl IntoDiagnostic for KclErrorWithOutputs {
fn to_lsp_diagnostics(&self, code: &str) -> Vec<Diagnostic> {
let message = self.error.get_message();
let source_ranges = self.error.source_ranges();
println!("self: {:?}", self);
source_ranges
.into_iter()
.map(|source_range| {
println!("source_range: {:?}", source_range);
println!("filenames: {:?}", self.filenames);
let source = self
.source_files
.get(&source_range.module_id())

View File

@ -243,7 +243,7 @@ fn generate_changed_program(old_ast: Node<Program>, mut new_ast: Node<Program>,
#[cfg(test)]
mod tests {
use super::*;
use crate::execution::{parse_execute, ExecTestResults};
use crate::execution::{parse_execute, parse_execute_with_project_dir, ExecTestResults};
#[tokio::test(flavor = "multi_thread")]
async fn test_get_changed_program_same_code() {
@ -600,4 +600,64 @@ startSketchOn('XY')
}
);
}
#[tokio::test(flavor = "multi_thread")]
async fn test_multi_file_no_changes_does_not_reexecute() {
let code = r#"import "toBeImported.kcl" as importedCube
importedCube
sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([-134.53, -56.17], sketch001)
|> angledLine(angle = 0, length = 79.05, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 76.28)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001), tag = $seg01)
|> line(endAbsolute = [profileStartX(%), profileStartY(%)], tag = $seg02)
|> close()
extrude001 = extrude(profile001, length = 100)
sketch003 = startSketchOn(extrude001, face = seg02)
sketch002 = startSketchOn(extrude001, face = seg01)
"#;
let other_file = (
std::path::PathBuf::from("toBeImported.kcl"),
r#"sketch001 = startSketchOn(XZ)
profile001 = startProfileAt([281.54, 305.81], sketch001)
|> angledLine(angle = 0, length = 123.43, tag = $rectangleSegmentA001)
|> angledLine(angle = segAng(rectangleSegmentA001) - 90, length = 85.99)
|> angledLine(angle = segAng(rectangleSegmentA001), length = -segLen(rectangleSegmentA001))
|> line(endAbsolute = [profileStartX(%), profileStartY(%)])
|> close()
extrude(profile001, length = 100)"#
.to_string(),
);
let tmp_dir = std::env::temp_dir();
let tmp_dir = tmp_dir.join(uuid::Uuid::new_v4().to_string());
// Create a temporary file for each of the other files.
let tmp_file = tmp_dir.join(other_file.0);
std::fs::create_dir_all(tmp_file.parent().unwrap()).unwrap();
std::fs::write(tmp_file, other_file.1).unwrap();
let ExecTestResults { program, exec_ctxt, .. } =
parse_execute_with_project_dir(code, Some(tmp_dir)).await.unwrap();
let mut new_program = crate::Program::parse_no_errs(code).unwrap();
new_program.compute_digest();
let result = get_changed_program(
CacheInformation {
ast: &program.ast,
settings: &exec_ctxt.settings,
},
CacheInformation {
ast: &new_program.ast,
settings: &exec_ctxt.settings,
},
)
.await;
assert_eq!(result, CacheResult::NoAction(false));
}
}
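The new test leans on parse_execute_with_project_dir, which is imported above but whose body is not shown in this diff. The essential thing it must provide is an ExecutorContext whose settings.project_directory points at the temp dir, so that the import of "toBeImported.kcl" resolves. A hedged sketch of building such a context with the mock engine, mirroring the construction used in the load_all_modules test further down in this diff; the helper name is hypothetical:

    use std::sync::Arc;

    // Sketch only: how such a helper presumably wires up a mock-engine context.
    // ExecutorContext, ExecutorSettings, and ContextType are crate-internal types.
    async fn mock_ctx_with_project_dir(project_dir: std::path::PathBuf) -> ExecutorContext {
        ExecutorContext {
            engine: Arc::new(Box::new(
                crate::engine::conn_mock::EngineConnection::new().await.unwrap(),
            )),
            fs: Arc::new(crate::fs::FileManager::new()),
            settings: ExecutorSettings {
                project_directory: Some(project_dir),
                ..Default::default()
            },
            stdlib: Arc::new(crate::std::StdLib::new()),
            context_type: ContextType::Mock,
        }
    }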

View File

@ -9,7 +9,6 @@ use super::{
types::{PrimitiveType, CHECK_NUMERIC_TYPES},
};
use crate::{
engine::ExecutionKind,
errors::{KclError, KclErrorDetails},
execution::{
annotations,
@ -66,15 +65,13 @@ impl ExecutorContext {
exec_state.mod_local.explicit_length_units = true;
}
let new_units = exec_state.length_unit();
if !self.engine.execution_kind().await.is_isolated() {
self.engine
.set_units(
new_units.into(),
annotation.as_source_range(),
exec_state.id_generator(),
)
.await?;
}
self.engine
.set_units(
new_units.into(),
annotation.as_source_range(),
exec_state.id_generator(),
)
.await?;
} else {
exec_state.err(CompilationError::err(
annotation.as_source_range(),
@ -104,15 +101,11 @@ impl ExecutorContext {
&self,
program: &Node<Program>,
exec_state: &mut ExecState,
exec_kind: ExecutionKind,
preserve_mem: bool,
module_id: ModuleId,
path: &ModulePath,
) -> Result<(Option<KclValue>, EnvironmentRef, Vec<String>), KclError> {
crate::log::log(format!("enter module {path} {} {exec_kind:?}", exec_state.stack()));
let old_units = exec_state.length_unit();
let original_execution = self.engine.replace_execution_kind(exec_kind).await;
crate::log::log(format!("enter module {path} {}", exec_state.stack()));
let mut local_state = ModuleState::new(path.std_path(), exec_state.stack().memory.clone(), Some(module_id));
if !preserve_mem {
@ -131,7 +124,6 @@ impl ExecutorContext {
.exec_block(program, exec_state, crate::execution::BodyType::Root)
.await;
let new_units = exec_state.length_unit();
let env_ref = if preserve_mem {
exec_state.mut_stack().pop_and_preserve_env()
} else {
@ -141,17 +133,6 @@ impl ExecutorContext {
std::mem::swap(&mut exec_state.mod_local, &mut local_state);
}
// We only need to reset the units if we are not on the Main path.
// If we reset at the end of the main path, then we just add on an extra
// command and we'd need to flush the batch again.
// This avoids that.
if !exec_kind.is_isolated() && new_units != old_units && *path != ModulePath::Main {
self.engine
.set_units(old_units.into(), Default::default(), exec_state.id_generator())
.await?;
}
self.engine.replace_execution_kind(original_execution).await;
crate::log::log(format!("leave {path}"));
result.map(|result| (result, env_ref, local_state.module_exports))
@ -161,7 +142,7 @@ impl ExecutorContext {
#[async_recursion]
pub(super) async fn exec_block<'a>(
&'a self,
program: NodeRef<'a, crate::parsing::ast::types::Program>,
program: NodeRef<'a, Program>,
exec_state: &mut ExecState,
body_type: BodyType,
) -> Result<Option<KclValue>, KclError> {
@ -185,9 +166,8 @@ impl ExecutorContext {
match &import_stmt.selector {
ImportSelector::List { items } => {
let (env_ref, module_exports) = self
.exec_module_for_items(module_id, exec_state, ExecutionKind::Isolated, source_range)
.await?;
let (env_ref, module_exports) =
self.exec_module_for_items(module_id, exec_state, source_range).await?;
for import_item in items {
// Extract the item from the module.
let item = exec_state
@ -228,9 +208,8 @@ impl ExecutorContext {
}
}
ImportSelector::Glob(_) => {
let (env_ref, module_exports) = self
.exec_module_for_items(module_id, exec_state, ExecutionKind::Isolated, source_range)
.await?;
let (env_ref, module_exports) =
self.exec_module_for_items(module_id, exec_state, source_range).await?;
for name in module_exports.iter() {
let item = exec_state
.stack()
@ -421,7 +400,7 @@ impl ExecutorContext {
Ok(last_expr)
}
pub(super) async fn open_module(
pub async fn open_module(
&self,
path: &ImportPath,
attrs: &[Node<Annotation>],
@ -429,6 +408,7 @@ impl ExecutorContext {
source_range: SourceRange,
) -> Result<ModuleId, KclError> {
let resolved_path = ModulePath::from_import_path(path, &self.settings.project_directory);
match path {
ImportPath::Kcl { .. } => {
exec_state.global.mod_loader.cycle_check(&resolved_path, source_range)?;
@ -485,7 +465,6 @@ impl ExecutorContext {
&self,
module_id: ModuleId,
exec_state: &mut ExecState,
exec_kind: ExecutionKind,
source_range: SourceRange,
) -> Result<(EnvironmentRef, Vec<String>), KclError> {
let path = exec_state.global.module_infos[&module_id].path.clone();
@ -494,12 +473,12 @@ impl ExecutorContext {
let result = match &mut repr {
ModuleRepr::Root => Err(exec_state.circular_import_error(&path, source_range)),
ModuleRepr::Kcl(_, Some((env_ref, items))) => Ok((*env_ref, items.clone())),
ModuleRepr::Kcl(_, Some((_, env_ref, items))) => Ok((*env_ref, items.clone())),
ModuleRepr::Kcl(program, cache) => self
.exec_module_from_ast(program, module_id, &path, exec_state, exec_kind, source_range)
.exec_module_from_ast(program, module_id, &path, exec_state, source_range, false)
.await
.map(|(_, er, items)| {
*cache = Some((er, items.clone()));
.map(|(val, er, items)| {
*cache = Some((val, er, items.clone()));
(er, items)
}),
ModuleRepr::Foreign(geom) => Err(KclError::Semantic(KclErrorDetails {
@ -518,7 +497,6 @@ impl ExecutorContext {
module_id: ModuleId,
module_name: &BoxNode<Name>,
exec_state: &mut ExecState,
exec_kind: ExecutionKind,
source_range: SourceRange,
) -> Result<Option<KclValue>, KclError> {
exec_state.global.operations.push(Operation::GroupBegin {
@ -535,13 +513,14 @@ impl ExecutorContext {
let result = match &mut repr {
ModuleRepr::Root => Err(exec_state.circular_import_error(&path, source_range)),
ModuleRepr::Kcl(_, Some((val, _, _))) => Ok(val.clone()),
ModuleRepr::Kcl(program, cached_items) => {
let result = self
.exec_module_from_ast(program, module_id, &path, exec_state, exec_kind, source_range)
.exec_module_from_ast(program, module_id, &path, exec_state, source_range, false)
.await;
match result {
Ok((val, env, items)) => {
*cached_items = Some((env, items));
*cached_items = Some((val.clone(), env, items));
Ok(val)
}
Err(e) => Err(e),
@ -560,18 +539,18 @@ impl ExecutorContext {
result
}
async fn exec_module_from_ast(
pub async fn exec_module_from_ast(
&self,
program: &Node<Program>,
module_id: ModuleId,
path: &ModulePath,
exec_state: &mut ExecState,
exec_kind: ExecutionKind,
source_range: SourceRange,
preserve_mem: bool,
) -> Result<(Option<KclValue>, EnvironmentRef, Vec<String>), KclError> {
exec_state.global.mod_loader.enter_module(path);
let result = self
.exec_module_body(program, exec_state, exec_kind, false, module_id, path)
.exec_module_body(program, exec_state, preserve_mem, module_id, path)
.await;
exec_state.global.mod_loader.leave_module(path);
@ -608,7 +587,7 @@ impl ExecutorContext {
Expr::Name(name) => {
let value = name.get_result(exec_state, self).await?.clone();
if let KclValue::Module { value: module_id, meta } = value {
self.exec_module_for_result(module_id, name, exec_state, ExecutionKind::Normal, metadata.source_range)
self.exec_module_for_result(module_id, name, exec_state, metadata.source_range)
.await?
.unwrap_or_else(|| {
exec_state.warn(CompilationError::err(
@ -808,7 +787,7 @@ impl Node<Name> {
};
mem_spec = Some(
ctx.exec_module_for_items(*module_id, exec_state, ExecutionKind::Normal, p.as_source_range())
ctx.exec_module_for_items(*module_id, exec_state, p.as_source_range())
.await?,
);
}
@ -1320,7 +1299,7 @@ impl Node<CallExpressionKw> {
));
}
let op = if func.feature_tree_operation() && !ctx.is_isolated_execution().await {
let op = if func.feature_tree_operation() {
let op_labeled_args = args
.kw_args
.labeled
@ -1406,7 +1385,7 @@ impl Node<CallExpressionKw> {
e.add_source_ranges(vec![callsite])
})?;
if matches!(fn_src, FunctionSource::User { .. }) && !ctx.is_isolated_execution().await {
if matches!(fn_src, FunctionSource::User { .. }) {
// Track return operation.
exec_state.global.operations.push(Operation::GroupEnd);
}
@ -1458,7 +1437,7 @@ impl Node<CallExpression> {
));
}
let op = if func.feature_tree_operation() && !ctx.is_isolated_execution().await {
let op = if func.feature_tree_operation() {
let op_labeled_args = func
.args(false)
.iter()
@ -1516,19 +1495,17 @@ impl Node<CallExpression> {
// exec_state.
let func = fn_name.get_result(exec_state, ctx).await?.clone();
if !ctx.is_isolated_execution().await {
// Track call operation.
exec_state.global.operations.push(Operation::GroupBegin {
group: Group::FunctionCall {
name: Some(fn_name.to_string()),
function_source_range: func.function_def_source_range().unwrap_or_default(),
unlabeled_arg: None,
// TODO: Add the arguments for legacy positional parameters.
labeled_args: Default::default(),
},
source_range: callsite,
});
}
// Track call operation.
exec_state.global.operations.push(Operation::GroupBegin {
group: Group::FunctionCall {
name: Some(fn_name.to_string()),
function_source_range: func.function_def_source_range().unwrap_or_default(),
unlabeled_arg: None,
// TODO: Add the arguments for legacy positional parameters.
labeled_args: Default::default(),
},
source_range: callsite,
});
let Some(fn_src) = func.as_fn() else {
return Err(KclError::Semantic(KclErrorDetails {
@ -1557,10 +1534,8 @@ impl Node<CallExpression> {
})
})?;
if !ctx.is_isolated_execution().await {
// Track return operation.
exec_state.global.operations.push(Operation::GroupEnd);
}
// Track return operation.
exec_state.global.operations.push(Operation::GroupEnd);
Ok(result)
}
@ -2313,7 +2288,7 @@ impl FunctionSource {
}
}
let op = if props.include_in_feature_tree && !ctx.is_isolated_execution().await {
let op = if props.include_in_feature_tree {
let op_labeled_args = args
.kw_args
.labeled
@ -2357,28 +2332,26 @@ impl FunctionSource {
Ok(Some(result))
}
FunctionSource::User { ast, memory, .. } => {
if !ctx.is_isolated_execution().await {
// Track call operation.
let op_labeled_args = args
.kw_args
.labeled
.iter()
.map(|(k, arg)| (k.clone(), OpArg::new(OpKclValue::from(&arg.value), arg.source_range)))
.collect();
exec_state.global.operations.push(Operation::GroupBegin {
group: Group::FunctionCall {
name: fn_name.clone(),
function_source_range: ast.as_source_range(),
unlabeled_arg: args
.kw_args
.unlabeled
.as_ref()
.map(|arg| OpArg::new(OpKclValue::from(&arg.value), arg.source_range)),
labeled_args: op_labeled_args,
},
source_range: callsite,
});
}
// Track call operation.
let op_labeled_args = args
.kw_args
.labeled
.iter()
.map(|(k, arg)| (k.clone(), OpArg::new(OpKclValue::from(&arg.value), arg.source_range)))
.collect();
exec_state.global.operations.push(Operation::GroupBegin {
group: Group::FunctionCall {
name: fn_name.clone(),
function_source_range: ast.as_source_range(),
unlabeled_arg: args
.kw_args
.unlabeled
.as_ref()
.map(|arg| OpArg::new(OpKclValue::from(&arg.value), arg.source_range)),
labeled_args: op_labeled_args,
},
source_range: callsite,
});
call_user_defined_function_kw(fn_name.as_deref(), args.kw_args, *memory, ast, exec_state, ctx).await
}
@ -2391,10 +2364,13 @@ impl FunctionSource {
mod test {
use std::sync::Arc;
use tokio::io::AsyncWriteExt;
use super::*;
use crate::{
execution::{memory::Stack, parse_execute, ContextType},
parsing::ast::types::{DefaultParamVal, Identifier, Parameter},
ExecutorSettings,
};
#[tokio::test(flavor = "multi_thread")]
@ -2504,7 +2480,7 @@ mod test {
// Run each test.
let func_expr = &Node::no_src(FunctionExpression {
params,
body: crate::parsing::ast::types::Program::empty(),
body: Program::empty(),
return_type: None,
digest: None,
});
@ -2607,6 +2583,102 @@ a = foo()
assert!(result.unwrap_err().to_string().contains("return"));
}
#[tokio::test(flavor = "multi_thread")]
async fn load_all_modules() {
// program a.kcl
let programa_kcl = r#"
export a = 1
"#;
// program b.kcl
let programb_kcl = r#"
import a from 'a.kcl'
export b = a + 1
"#;
// program c.kcl
let programc_kcl = r#"
import a from 'a.kcl'
export c = a + 2
"#;
// program main.kcl
let main_kcl = r#"
import b from 'b.kcl'
import c from 'c.kcl'
d = b + c
"#;
let main = crate::parsing::parse_str(main_kcl, ModuleId::default())
.parse_errs_as_err()
.unwrap();
let tmpdir = tempdir::TempDir::new("zma_kcl_load_all_modules").unwrap();
tokio::fs::File::create(tmpdir.path().join("main.kcl"))
.await
.unwrap()
.write_all(main_kcl.as_bytes())
.await
.unwrap();
tokio::fs::File::create(tmpdir.path().join("a.kcl"))
.await
.unwrap()
.write_all(programa_kcl.as_bytes())
.await
.unwrap();
tokio::fs::File::create(tmpdir.path().join("b.kcl"))
.await
.unwrap()
.write_all(programb_kcl.as_bytes())
.await
.unwrap();
tokio::fs::File::create(tmpdir.path().join("c.kcl"))
.await
.unwrap()
.write_all(programc_kcl.as_bytes())
.await
.unwrap();
let exec_ctxt = ExecutorContext {
engine: Arc::new(Box::new(
crate::engine::conn_mock::EngineConnection::new()
.await
.map_err(|err| {
KclError::Internal(crate::errors::KclErrorDetails {
message: format!("Failed to create mock engine connection: {}", err),
source_ranges: vec![SourceRange::default()],
})
})
.unwrap(),
)),
fs: Arc::new(crate::fs::FileManager::new()),
settings: ExecutorSettings {
project_directory: Some(tmpdir.path().into()),
..Default::default()
},
stdlib: Arc::new(crate::std::StdLib::new()),
context_type: ContextType::Mock,
};
let mut exec_state = ExecState::new(&exec_ctxt);
exec_ctxt
.run_concurrent(
&crate::Program {
ast: main.clone(),
original_file_contents: "".to_owned(),
},
&mut exec_state,
false,
)
.await
.unwrap();
}
#[tokio::test(flavor = "multi_thread")]
async fn user_coercion() {
let program = r#"fn foo(x: Axis2d) {

View File

@ -15,6 +15,8 @@ use crate::{
std::{args::TyF64, sketch::PlaneData},
};
use super::ExecutorContext;
type Point2D = kcmc::shared::Point2d<f64>;
type Point3D = kcmc::shared::Point3d<f64>;
@ -76,9 +78,45 @@ pub struct ImportedGeometry {
pub value: Vec<String>,
#[serde(skip)]
pub meta: Vec<Metadata>,
/// Whether the imported geometry has finished importing.
#[serde(skip)]
completed: bool,
}
/// Data for a solid or an imported geometry.
impl ImportedGeometry {
pub fn new(id: uuid::Uuid, value: Vec<String>, meta: Vec<Metadata>) -> Self {
Self {
id,
value,
meta,
completed: false,
}
}
async fn wait_for_finish(&mut self, ctx: &ExecutorContext) -> Result<(), KclError> {
if self.completed {
return Ok(());
}
ctx.engine
.ensure_async_command_completed(self.id, self.meta.first().map(|m| m.source_range))
.await?;
self.completed = true;
Ok(())
}
pub async fn id(&mut self, ctx: &ExecutorContext) -> Result<uuid::Uuid, KclError> {
if !self.completed {
self.wait_for_finish(ctx).await?;
}
Ok(self.id)
}
}
/// Data for a solid, sketch, or an imported geometry.
#[derive(Debug, Clone, Serialize, PartialEq, ts_rs::TS, JsonSchema)]
#[ts(export)]
#[serde(tag = "type", rename_all = "camelCase")]
@ -128,11 +166,61 @@ impl From<SolidOrSketchOrImportedGeometry> for crate::execution::KclValue {
}
impl SolidOrSketchOrImportedGeometry {
pub(crate) fn ids(&self) -> Vec<uuid::Uuid> {
pub(crate) async fn ids(&mut self, ctx: &ExecutorContext) -> Result<Vec<uuid::Uuid>, KclError> {
match self {
SolidOrSketchOrImportedGeometry::ImportedGeometry(s) => vec![s.id],
SolidOrSketchOrImportedGeometry::SolidSet(s) => s.iter().map(|s| s.id).collect(),
SolidOrSketchOrImportedGeometry::SketchSet(s) => s.iter().map(|s| s.id).collect(),
SolidOrSketchOrImportedGeometry::ImportedGeometry(s) => {
let id = s.id(ctx).await?;
Ok(vec![id])
}
SolidOrSketchOrImportedGeometry::SolidSet(s) => Ok(s.iter().map(|s| s.id).collect()),
SolidOrSketchOrImportedGeometry::SketchSet(s) => Ok(s.iter().map(|s| s.id).collect()),
}
}
}
/// Data for a solid or an imported geometry.
#[derive(Debug, Clone, Serialize, PartialEq, ts_rs::TS, JsonSchema)]
#[ts(export)]
#[serde(tag = "type", rename_all = "camelCase")]
#[allow(clippy::vec_box)]
pub enum SolidOrImportedGeometry {
ImportedGeometry(Box<ImportedGeometry>),
SolidSet(Vec<Solid>),
}
impl From<SolidOrImportedGeometry> for crate::execution::KclValue {
fn from(value: SolidOrImportedGeometry) -> Self {
match value {
SolidOrImportedGeometry::ImportedGeometry(s) => crate::execution::KclValue::ImportedGeometry(*s),
SolidOrImportedGeometry::SolidSet(mut s) => {
if s.len() == 1 {
crate::execution::KclValue::Solid {
value: Box::new(s.pop().unwrap()),
}
} else {
crate::execution::KclValue::HomArray {
value: s
.into_iter()
.map(|s| crate::execution::KclValue::Solid { value: Box::new(s) })
.collect(),
ty: crate::execution::types::RuntimeType::solid(),
}
}
}
}
}
}
impl SolidOrImportedGeometry {
pub(crate) async fn ids(&mut self, ctx: &ExecutorContext) -> Result<Vec<uuid::Uuid>, KclError> {
match self {
SolidOrImportedGeometry::ImportedGeometry(s) => {
let id = s.id(ctx).await?;
Ok(vec![id])
}
SolidOrImportedGeometry::SolidSet(s) => Ok(s.iter().map(|s| s.id).collect()),
}
}
}
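With the completed flag, imported geometry is now resolved lazily: callers that need the engine-side object id go through ids(), which awaits the pending async ImportFiles command only the first time. A hedged sketch of how a std-lib style function might consume a SolidOrImportedGeometry; the function, its arguments, and the stand-in ObjectVisible command are illustrative, not code from this PR:

    // Illustrative only: `ctx` is the ExecutorContext, `objects` is whatever the
    // std function was handed (solids or imported geometry).
    async fn hide_objects(
        ctx: &ExecutorContext,
        objects: &mut SolidOrImportedGeometry,
        source_range: SourceRange,
    ) -> Result<(), KclError> {
        // For solids the ids come back immediately; for imported geometry this
        // first awaits the async ImportFiles response (at most once, thanks to
        // `completed`) and then yields the engine object id.
        for object_id in objects.ids(ctx).await? {
            ctx.engine
                .batch_modeling_cmd(
                    uuid::Uuid::new_v4(),
                    source_range,
                    &ModelingCmd::from(mcmd::ObjectVisible {
                        object_id,
                        hidden: true,
                    }),
                )
                .await?;
        }
        Ok(())
    }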

View File

@ -5,10 +5,8 @@ use kcmc::{
coord::{System, KITTYCAD},
each_cmd as mcmd,
format::InputFormat3d,
ok_response::OkModelingCmdResponse,
shared::FileImportFormat,
units::UnitLength,
websocket::OkWebSocketResponseData,
ImportFile, ModelingCmd,
};
use kittycad_modeling_cmds as kcmc;
@ -289,34 +287,17 @@ pub struct PreImportedGeometry {
}
pub async fn send_to_engine(pre: PreImportedGeometry, ctxt: &ExecutorContext) -> Result<ImportedGeometry, KclError> {
if ctxt.no_engine_commands().await {
return Ok(ImportedGeometry {
id: pre.id,
value: pre.command.files.iter().map(|f| f.path.to_string()).collect(),
meta: vec![pre.source_range.into()],
});
}
let imported_geometry = ImportedGeometry::new(
pre.id,
pre.command.files.iter().map(|f| f.path.to_string()).collect(),
vec![pre.source_range.into()],
);
let resp = ctxt
.engine
.send_modeling_cmd(pre.id, pre.source_range, &ModelingCmd::from(pre.command.clone()))
ctxt.engine
.async_modeling_cmd(pre.id, pre.source_range, &ModelingCmd::from(pre.command.clone()))
.await?;
let OkWebSocketResponseData::Modeling {
modeling_response: OkModelingCmdResponse::ImportFiles(imported_files),
} = &resp
else {
return Err(KclError::Engine(KclErrorDetails {
message: format!("ImportFiles response was not as expected: {:?}", resp),
source_ranges: vec![pre.source_range],
}));
};
Ok(ImportedGeometry {
id: imported_files.object_id,
value: pre.command.files.iter().map(|f| f.path.to_string()).collect(),
meta: vec![pre.source_range.into()],
})
Ok(imported_geometry)
}
/// Get the source format from the extension.

View File

@ -4,13 +4,13 @@ use anyhow::Result;
use schemars::JsonSchema;
use serde::Serialize;
use super::{types::UnitLen, EnvironmentRef, ExecState, MetaSettings};
use crate::{
errors::KclErrorDetails,
execution::{
annotations::{SETTINGS, SETTINGS_UNIT_LENGTH},
types::{NumericType, PrimitiveType, RuntimeType},
Face, Helix, ImportedGeometry, Metadata, Plane, Sketch, Solid, TagIdentifier,
types::{NumericType, PrimitiveType, RuntimeType, UnitLen},
EnvironmentRef, ExecState, Face, Helix, ImportedGeometry, MetaSettings, Metadata, Plane, Sketch, Solid,
TagIdentifier,
},
parsing::ast::types::{
DefaultParamVal, FunctionExpression, KclNone, Literal, LiteralValue, Node, TagDeclarator, TagNode,

View File

@ -28,18 +28,18 @@ pub use state::{ExecState, MetaSettings};
use crate::{
engine::EngineManager,
errors::KclError,
errors::{KclError, KclErrorDetails},
execution::{
artifact::build_artifact_graph,
cache::{CacheInformation, CacheResult},
types::{UnitAngle, UnitLen},
},
fs::FileManager,
modules::{ModuleId, ModulePath},
modules::{ModuleId, ModulePath, ModuleRepr},
parsing::ast::types::{Expr, ImportPath, NodeRef},
source_range::SourceRange,
std::StdLib,
CompilationError, ExecError, ExecutionKind, KclErrorWithOutputs,
CompilationError, ExecError, KclErrorWithOutputs,
};
pub(crate) mod annotations;
@ -495,13 +495,9 @@ impl ExecutorContext {
self.context_type == ContextType::Mock || self.context_type == ContextType::MockCustomForwarded
}
pub async fn is_isolated_execution(&self) -> bool {
self.engine.execution_kind().await.is_isolated()
}
/// Returns true if we should not send engine commands for any reason.
pub async fn no_engine_commands(&self) -> bool {
self.is_mock() || self.is_isolated_execution().await
self.is_mock()
}
pub async fn send_clear_scene(
@ -672,7 +668,7 @@ impl ExecutorContext {
(program, exec_state, false)
};
let result = self.inner_run(&program, &mut exec_state, preserve_mem).await;
let result = self.run_concurrent(&program, &mut exec_state, preserve_mem).await;
if result.is_err() {
cache::bust_cache().await;
@ -705,9 +701,210 @@ impl ExecutorContext {
program: &crate::Program,
exec_state: &mut ExecState,
) -> Result<(EnvironmentRef, Option<ModelingSessionData>), KclErrorWithOutputs> {
self.run_concurrent(program, exec_state, false).await
}
/// Perform the execution of a program.
///
/// You can optionally pass in some initialization memory for partial
/// execution.
///
/// To access non-fatal errors and warnings, extract them from the `ExecState`.
pub async fn run_single_threaded(
&self,
program: &crate::Program,
exec_state: &mut ExecState,
) -> Result<(EnvironmentRef, Option<ModelingSessionData>), KclErrorWithOutputs> {
exec_state.add_root_module_contents(program);
self.eval_prelude(exec_state, SourceRange::synthetic())
.await
.map_err(KclErrorWithOutputs::no_outputs)?;
self.inner_run(program, exec_state, false).await
}
/// Perform the execution of a program using an (experimental!) concurrent
/// execution model. This is [Self::run] with an additional `preserve_mem` flag.
///
/// For now -- do not use this unless you're willing to accept some
/// breakage.
///
/// You can optionally pass in some initialization memory for partial
/// execution.
///
/// To access non-fatal errors and warnings, extract them from the `ExecState`.
pub async fn run_concurrent(
&self,
program: &crate::Program,
exec_state: &mut ExecState,
preserve_mem: bool,
) -> Result<(EnvironmentRef, Option<ModelingSessionData>), KclErrorWithOutputs> {
exec_state.add_root_module_contents(program);
self.eval_prelude(exec_state, SourceRange::synthetic())
.await
.map_err(KclErrorWithOutputs::no_outputs)?;
let mut universe = std::collections::HashMap::new();
let default_planes = self.engine.get_default_planes().read().await.clone();
crate::walk::import_universe(self, &program.ast, &mut universe, exec_state)
.await
.map_err(|err| {
let module_id_to_module_path: IndexMap<ModuleId, ModulePath> = exec_state
.global
.path_to_source_id
.iter()
.map(|(k, v)| ((*v), k.clone()))
.collect();
KclErrorWithOutputs::new(
err,
exec_state.global.operations.clone(),
exec_state.global.artifact_commands.clone(),
exec_state.global.artifact_graph.clone(),
module_id_to_module_path,
exec_state.global.id_to_source.clone(),
default_planes.clone(),
)
})?;
for modules in crate::walk::import_graph(&universe, self)
.map_err(|err| {
let module_id_to_module_path: IndexMap<ModuleId, ModulePath> = exec_state
.global
.path_to_source_id
.iter()
.map(|(k, v)| ((*v), k.clone()))
.collect();
KclErrorWithOutputs::new(
err,
exec_state.global.operations.clone(),
exec_state.global.artifact_commands.clone(),
exec_state.global.artifact_graph.clone(),
module_id_to_module_path,
exec_state.global.id_to_source.clone(),
default_planes.clone(),
)
})?
.into_iter()
{
#[cfg(not(target_arch = "wasm32"))]
let mut set = tokio::task::JoinSet::new();
#[allow(clippy::type_complexity)]
let (results_tx, mut results_rx): (
tokio::sync::mpsc::Sender<(
ModuleId,
ModulePath,
Result<(Option<KclValue>, EnvironmentRef, Vec<String>), KclError>,
)>,
tokio::sync::mpsc::Receiver<_>,
) = tokio::sync::mpsc::channel(1);
for module in modules {
let Some((import_stmt, module_id, module_path, program)) = universe.get(&module) else {
return Err(KclErrorWithOutputs::no_outputs(KclError::Internal(KclErrorDetails {
message: format!("Module {module} not found in universe"),
source_ranges: Default::default(),
})));
};
let module_id = *module_id;
let module_path = module_path.clone();
let program = program.clone();
let exec_state = exec_state.clone();
let exec_ctxt = self.clone();
let results_tx = results_tx.clone();
let source_range = SourceRange::from(import_stmt);
#[cfg(target_arch = "wasm32")]
{
wasm_bindgen_futures::spawn_local(async move {
//set.spawn(async move {
let mut exec_state = exec_state;
let exec_ctxt = exec_ctxt;
let result = exec_ctxt
.exec_module_from_ast(
&program,
module_id,
&module_path,
&mut exec_state,
source_range,
false,
)
.await;
results_tx
.send((module_id, module_path, result))
.await
.unwrap_or_default();
});
}
#[cfg(not(target_arch = "wasm32"))]
{
set.spawn(async move {
let mut exec_state = exec_state;
let exec_ctxt = exec_ctxt;
let result = exec_ctxt
.exec_module_from_ast(
&program,
module_id,
&module_path,
&mut exec_state,
source_range,
false,
)
.await;
results_tx
.send((module_id, module_path, result))
.await
.unwrap_or_default();
});
}
}
drop(results_tx);
while let Some((module_id, _, result)) = results_rx.recv().await {
match result {
Ok((val, session_data, variables)) => {
let mut repr = exec_state.global.module_infos[&module_id].take_repr();
let ModuleRepr::Kcl(_, cache) = &mut repr else {
continue;
};
*cache = Some((val, session_data, variables));
exec_state.global.module_infos[&module_id].restore_repr(repr);
}
Err(e) => {
let module_id_to_module_path: IndexMap<ModuleId, ModulePath> = exec_state
.global
.path_to_source_id
.iter()
.map(|(k, v)| ((*v), k.clone()))
.collect();
return Err(KclErrorWithOutputs::new(
e,
exec_state.global.operations.clone(),
exec_state.global.artifact_commands.clone(),
exec_state.global.artifact_graph.clone(),
module_id_to_module_path,
exec_state.global.id_to_source.clone(),
default_planes,
));
}
}
}
}
self.inner_run(program, exec_state, preserve_mem).await
}
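The new `run_concurrent` body above walks the import universe, stages it topologically, and then fans each stage out to per-module tasks (a `JoinSet` natively, `spawn_local` on wasm), collecting results over an mpsc channel before the root module runs. A stripped-down sketch of that fan-out/fan-in shape, with plain module names standing in for the real `ExecState`/`ModuleRepr` plumbing:

```rust
use tokio::sync::mpsc;
use tokio::task::JoinSet;

/// Stand-in for executing one module; the real code calls `exec_module_from_ast`.
async fn exec_module(module: String) -> Result<String, String> {
    Ok(format!("cache for {module}"))
}

#[tokio::main]
async fn main() -> Result<(), String> {
    // Each stage of the import graph is a batch of modules with no
    // unresolved dependencies, so its members can run concurrently.
    let stages = vec![
        vec!["a.kcl".to_string()],
        vec!["b.kcl".to_string(), "c.kcl".to_string()],
    ];

    for stage in stages {
        let mut set = JoinSet::new();
        let (tx, mut rx) = mpsc::channel(1);

        for module in stage {
            let tx = tx.clone();
            set.spawn(async move {
                let result = exec_module(module.clone()).await;
                // Ignore send errors, mirroring the `unwrap_or_default()` above.
                let _ = tx.send((module, result)).await;
            });
        }
        // Drop the original sender so `recv()` returns None once every task
        // in this stage has reported back.
        drop(tx);

        while let Some((module, result)) = rx.recv().await {
            match result {
                Ok(cache) => println!("{module}: {cache}"),
                // The first failure aborts the whole run, as in `run_concurrent`.
                Err(e) => return Err(e),
            }
        }
    }
    Ok(())
}
```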
/// Perform the execution of a program. Accept all possible parameters and
/// output everything.
async fn inner_run(
@ -716,8 +913,6 @@ impl ExecutorContext {
exec_state: &mut ExecState,
preserve_mem: bool,
) -> Result<(EnvironmentRef, Option<ModelingSessionData>), KclErrorWithOutputs> {
exec_state.add_root_module_contents(program);
let _stats = crate::log::LogPerfStats::new("Interpretation");
// Re-apply the settings, in case the cache was busted.
@ -752,7 +947,7 @@ impl ExecutorContext {
exec_state.global.artifact_graph.clone(),
module_id_to_module_path,
exec_state.global.id_to_source.clone(),
default_planes,
default_planes.clone(),
)
})?;
@ -762,6 +957,7 @@ impl ExecutorContext {
cache::write_old_memory((mem, exec_state.global.module_infos.clone())).await;
}
let session_data = self.engine.get_session_data().await;
Ok((env_ref, session_data))
}
@ -783,13 +979,15 @@ impl ExecutorContext {
.exec_module_body(
program,
exec_state,
ExecutionKind::Normal,
preserve_mem,
ModuleId::default(),
&ModulePath::Main,
)
.await;
// Ensure all the async commands completed.
self.engine.ensure_async_commands_completed().await?;
// If we errored out and early-returned, there might be commands which haven't been executed
// and should be dropped.
self.engine.clear_queues().await;
@ -837,9 +1035,7 @@ impl ExecutorContext {
source_range,
)
.await?;
let (module_memory, _) = self
.exec_module_for_items(id, exec_state, ExecutionKind::Isolated, source_range)
.await?;
let (module_memory, _) = self.exec_module_for_items(id, exec_state, source_range).await?;
exec_state.mut_stack().memory.set_std(module_memory);
}
@ -947,6 +1143,14 @@ impl ExecutorContext {
#[cfg(test)]
pub(crate) async fn parse_execute(code: &str) -> Result<ExecTestResults, KclError> {
parse_execute_with_project_dir(code, None).await
}
#[cfg(test)]
pub(crate) async fn parse_execute_with_project_dir(
code: &str,
project_directory: Option<std::path::PathBuf>,
) -> Result<ExecTestResults, KclError> {
let program = crate::Program::parse_no_errs(code)?;
let exec_ctxt = ExecutorContext {
@ -960,7 +1164,10 @@ pub(crate) async fn parse_execute(code: &str) -> Result<ExecTestResults, KclErro
)),
fs: Arc::new(crate::fs::FileManager::new()),
stdlib: Arc::new(crate::std::StdLib::new()),
settings: Default::default(),
settings: ExecutorSettings {
project_directory,
..Default::default()
},
context_type: ContextType::Mock,
};
let mut exec_state = ExecState::new(&exec_ctxt);

View File

@ -229,6 +229,10 @@ impl ExecState {
self.global.module_infos.insert(id, module_info);
}
pub fn get_module(&mut self, id: ModuleId) -> Option<&ModuleInfo> {
self.global.module_infos.get(&id)
}
pub fn current_default_units(&self) -> NumericType {
NumericType::Default {
len: self.length_unit(),

View File

@ -1338,11 +1338,11 @@ mod test {
value: Box::new(Plane::from_plane_data(crate::std::sketch::PlaneData::XY, exec_state)),
},
// No easy way to make a Face, Sketch, Solid, or Helix
KclValue::ImportedGeometry(crate::execution::ImportedGeometry {
id: uuid::Uuid::nil(),
value: Vec::new(),
meta: Vec::new(),
}),
KclValue::ImportedGeometry(crate::execution::ImportedGeometry::new(
uuid::Uuid::nil(),
Vec::new(),
Vec::new(),
)),
// Other values don't have types
]
}

View File

@ -76,12 +76,12 @@ pub mod std;
pub mod test_server;
mod thread;
mod unparser;
mod walk;
pub mod walk;
#[cfg(target_arch = "wasm32")]
mod wasm;
pub use coredump::CoreDump;
pub use engine::{EngineManager, EngineStats, ExecutionKind};
pub use engine::{EngineManager, EngineStats};
pub use errors::{
CompilationError, ConnectionError, ExecError, KclError, KclErrorWithOutputs, Report, ReportWithOutputs,
};
@ -109,7 +109,7 @@ pub mod exec {
pub mod wasm_engine {
pub use crate::{
coredump::wasm::{CoreDumpManager, CoreDumper},
engine::conn_wasm::{EngineCommandManager, EngineConnection},
engine::conn_wasm::{EngineCommandManager, EngineConnection, ResponseContext},
fs::wasm::{FileManager, FileSystemManager},
};
}

View File

@ -6,6 +6,7 @@ use serde::{Deserialize, Serialize};
use crate::{
errors::{KclError, KclErrorDetails},
exec::KclValue,
execution::{EnvironmentRef, PreImportedGeometry},
fs::{FileManager, FileSystem},
parsing::ast::types::{ImportPath, Node, Program},
@ -94,7 +95,7 @@ pub(crate) fn read_std(mod_name: &str) -> Option<&'static str> {
}
/// Info about a module.
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize)]
#[derive(Debug, Clone, PartialEq, Serialize)]
pub struct ModuleInfo {
/// The ID of the module.
pub(crate) id: ModuleId,
@ -117,11 +118,11 @@ impl ModuleInfo {
}
#[allow(clippy::large_enum_variant)]
#[derive(Debug, Clone, PartialEq, Deserialize, Serialize)]
#[derive(Debug, Clone, PartialEq, Serialize)]
pub enum ModuleRepr {
Root,
// AST, memory, exported names
Kcl(Node<Program>, Option<(EnvironmentRef, Vec<String>)>),
Kcl(Node<Program>, Option<(Option<KclValue>, EnvironmentRef, Vec<String>)>),
Foreign(PreImportedGeometry),
Dummy,
}

View File

@ -3,6 +3,7 @@
use std::{cell::RefCell, collections::BTreeMap};
use indexmap::IndexMap;
use winnow::{
combinator::{alt, delimited, opt, peek, preceded, repeat, repeat_till, separated, separated_pair, terminated},
dispatch,
@ -3138,6 +3139,21 @@ fn fn_call_kw(i: &mut TokenSlice) -> PResult<Node<CallExpressionKw>> {
opt(comma_sep).parse_next(i)?;
let end = close_paren.parse_next(i)?.end;
// Validate there aren't any duplicate labels.
let mut counted_labels = IndexMap::with_capacity(args.len());
for arg in &args {
*counted_labels.entry(&arg.label.inner.name).or_insert(0) += 1;
}
if let Some((duplicated, n)) = counted_labels.iter().find(|(_label, n)| n > &&1) {
let msg = format!(
"You've used the parameter labelled '{duplicated}' {n} times in a single function call. You can only set each parameter once! Remove all but one use."
);
ParseContext::err(CompilationError::err(
SourceRange::new(fn_name.start, end, fn_name.module_id),
msg,
));
}
let non_code_meta = NonCodeMeta {
non_code_nodes,
..Default::default()
@ -4497,8 +4513,8 @@ e
/// )
/// |> yLine(endAbsolute = 0)
/// |> close(%)
///
/// example = extrude(5, exampleSketch)
///
/// example = extrude(exampleSketch, length = 5)
/// ```
@(impl = std_rust)
export fn cos(num: number(rad)): number(_) {}"#;
@ -5075,6 +5091,18 @@ baz = 2
);
}
}
#[test]
fn test_sensible_error_duplicated_args() {
let program = r#"f(arg = 1, normal = 44, arg = 2)"#;
let (_, mut errs) = assert_no_fatal(program);
assert_eq!(errs.len(), 1);
let err = errs.pop().unwrap();
assert_eq!(
err.message,
"You've used the parameter labelled 'arg' 2 times in a single function call. You can only set each parameter once! Remove all but one use.",
);
}
}
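The duplicate-label check added to `fn_call_kw` above counts each keyword label with an `IndexMap` and reports the first label used more than once, which is what `test_sensible_error_duplicated_args` exercises. A self-contained sketch of that counting idiom (assuming the `indexmap` crate already used by the parser):

```rust
use indexmap::IndexMap;

/// Return the first label that appears more than once, with its count.
fn first_duplicate<'a>(labels: &[&'a str]) -> Option<(&'a str, usize)> {
    let mut counts: IndexMap<&'a str, usize> = IndexMap::with_capacity(labels.len());
    for &label in labels {
        *counts.entry(label).or_insert(0) += 1;
    }
    // IndexMap preserves insertion order, so the first offender is reported
    // deterministically, just like the parser check above.
    counts.into_iter().find(|(_, n)| *n > 1)
}

fn main() {
    assert_eq!(first_duplicate(&["arg", "normal", "arg"]), Some(("arg", 2)));
    assert_eq!(first_duplicate(&["x", "y"]), None);
}
```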
#[cfg(test)]

View File

@ -1,4 +1,5 @@
use std::{
collections::HashMap,
panic::{catch_unwind, AssertUnwindSafe},
path::{Path, PathBuf},
};
@ -251,6 +252,38 @@ fn assert_common_snapshots(
artifact_commands: Vec<ArtifactCommand>,
artifact_graph: ArtifactGraph,
) {
let artifact_commands = {
// Due to our newfound concurrency, we reorder the artifact_commands
// here: ordering is preserved only within a given module ID. The global
// order of artifact_commands is therefore no longer meaningful, but it
// is deterministic and should still catch real changes in behavior.
let mut artifact_commands_map = artifact_commands
.into_iter()
.map(|v| (v.range.module_id().as_usize(), v))
.fold(
HashMap::<usize, Vec<ArtifactCommand>>::new(),
|mut map, (module_id, el)| {
let mut v = map.remove(&module_id).unwrap_or_default();
v.push(el);
map.insert(module_id, v);
map
},
);
let mut artifact_commands_keys = artifact_commands_map.keys().cloned().collect::<Vec<_>>();
artifact_commands_keys.sort();
let artifact_commands: Vec<ArtifactCommand> = artifact_commands_keys
.iter()
.flat_map(|idx| artifact_commands_map.remove(idx).unwrap())
.collect();
assert_eq!(0, artifact_commands_map.len());
artifact_commands
};
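The block above makes the snapshot deterministic by keeping command order only within each module and then concatenating modules in ID order. A minimal sketch of the same normalization with placeholder command types; a BTreeMap gives the sorted-key iteration that the real code gets by collecting into a HashMap and sorting its keys:

```rust
use std::collections::BTreeMap;

/// Placeholder for an artifact command tagged with its module of origin.
struct Cmd {
    module_id: usize,
    name: &'static str,
}

/// Keep per-module order, but concatenate modules in ascending ID order.
fn normalize(cmds: Vec<Cmd>) -> Vec<Cmd> {
    let mut by_module: BTreeMap<usize, Vec<Cmd>> = BTreeMap::new();
    for cmd in cmds {
        by_module.entry(cmd.module_id).or_default().push(cmd);
    }
    by_module.into_values().flatten().collect()
}

fn main() {
    // Concurrent execution may interleave modules differently on every run...
    let interleaved = vec![
        Cmd { module_id: 1, name: "start_path" },
        Cmd { module_id: 0, name: "set_scene_units" },
        Cmd { module_id: 1, name: "close_path" },
    ];
    // ...but the normalized stream is always module 0 first, then module 1,
    // with each module's own order intact.
    let normalized = normalize(interleaved);
    assert_eq!(normalized[0].module_id, 0);
    assert_eq!(normalized[1].name, "start_path");
    assert_eq!(normalized[2].name, "close_path");
}
```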
let result1 = catch_unwind(AssertUnwindSafe(|| {
assert_snapshot(test, "Operations executed", || {
insta::assert_json_snapshot!("ops", operations, {
@ -2546,3 +2579,24 @@ mod tangent_to_3_point_arc {
super::execute(TEST_NAME, true).await
}
}
mod import_async {
const TEST_NAME: &str = "import_async";
/// Test parsing KCL.
#[test]
fn parse() {
super::parse(TEST_NAME)
}
/// Test that parsing and unparsing KCL produces the original KCL input.
#[tokio::test(flavor = "multi_thread")]
async fn unparse() {
super::unparse(TEST_NAME).await
}
/// Test that KCL is executed correctly.
#[tokio::test(flavor = "multi_thread")]
async fn kcl_test_execute() {
super::execute(TEST_NAME, true).await
}
}

View File

@ -13,7 +13,7 @@ use crate::{
errors::{KclError, KclErrorDetails},
execution::{
types::{NumericType, PrimitiveType, RuntimeType},
ExecState, KclValue, Solid,
ExecState, KclValue, SolidOrImportedGeometry,
},
std::Args,
};
@ -43,7 +43,11 @@ struct AppearanceData {
/// Set the appearance of a solid. This only works on solids, not sketches or individual paths.
pub async fn appearance(exec_state: &mut ExecState, args: Args) -> Result<KclValue, KclError> {
let solids = args.get_unlabeled_kw_arg_typed("solids", &RuntimeType::solids(), exec_state)?;
let solids = args.get_unlabeled_kw_arg_typed(
"solids",
&RuntimeType::Union(vec![RuntimeType::solids(), RuntimeType::imported()]),
exec_state,
)?;
let color: String = args.get_kw_arg("color")?;
let count_ty = RuntimeType::Primitive(PrimitiveType::Number(NumericType::count()));
@ -270,6 +274,19 @@ pub async fn appearance(exec_state: &mut ExecState, args: Args) -> Result<KclVal
/// roughness = 50
/// )
/// ```
///
/// ```no_run
/// // Change the appearance of an imported model.
///
/// import "tests/inputs/cube.sldprt" as cube
///
/// cube
/// // |> appearance(
/// // color = "#ff0000",
/// // metalness = 50,
/// // roughness = 50
/// // )
/// ```
#[stdlib {
name = "appearance",
keywords = true,
@ -282,14 +299,16 @@ pub async fn appearance(exec_state: &mut ExecState, args: Args) -> Result<KclVal
}
}]
async fn inner_appearance(
solids: Vec<Solid>,
solids: SolidOrImportedGeometry,
color: String,
metalness: Option<f64>,
roughness: Option<f64>,
exec_state: &mut ExecState,
args: Args,
) -> Result<Vec<Solid>, KclError> {
for solid in &solids {
) -> Result<SolidOrImportedGeometry, KclError> {
let mut solids = solids.clone();
for solid_id in solids.ids(&args.ctx).await? {
// Set the material properties.
let rgb = rgba_simple::RGB::<f32>::from_hex(&color).map_err(|err| {
KclError::Semantic(KclErrorDetails {
@ -308,7 +327,7 @@ async fn inner_appearance(
args.batch_modeling_cmd(
exec_state.next_uuid(),
ModelingCmd::from(mcmd::ObjectSetMaterialParamsPbr {
object_id: solid.id,
object_id: solid_id,
color,
metalness: metalness.unwrap_or_default() as f32 / 100.0,
roughness: roughness.unwrap_or_default() as f32 / 100.0,
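With `SolidOrImportedGeometry` in place, `inner_appearance` no longer iterates a `Vec<Solid>` directly; it asks the union type for its engine object ids (an async call in the real code, since it takes the executor context) and batches one PBR material command per id. A rough sketch of that shape with hypothetical stand-in types:

```rust
use uuid::Uuid;

/// Hypothetical stand-ins for the real Solid and ImportedGeometry types.
struct Solid {
    id: Uuid,
}
struct ImportedGeometry {
    id: Uuid,
}

/// Sketch of a union type that lets one appearance code path handle both.
enum SolidOrImported {
    SolidSet(Vec<Solid>),
    Imported(ImportedGeometry),
}

impl SolidOrImported {
    /// Unified list of engine object ids to apply the material to.
    /// (The real `ids` is async and takes the executor context.)
    fn ids(&self) -> Vec<Uuid> {
        match self {
            SolidOrImported::SolidSet(solids) => solids.iter().map(|s| s.id).collect(),
            SolidOrImported::Imported(geom) => vec![geom.id],
        }
    }
}

fn main() {
    let extruded = SolidOrImported::SolidSet(vec![Solid { id: Uuid::new_v4() }]);
    let cube = SolidOrImported::Imported(ImportedGeometry { id: Uuid::new_v4() });
    for geometry in [extruded, cube] {
        for object_id in geometry.ids() {
            // In the real code this becomes an ObjectSetMaterialParamsPbr command;
            // metalness and roughness arrive as 0..100 and are scaled to 0.0..1.0.
            let metalness = 50.0_f64 / 100.0;
            println!("set material on {object_id}, metalness {metalness}");
        }
    }
}
```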

View File

@ -28,6 +28,9 @@ use crate::{
ModuleId,
};
const ERROR_STRING_SKETCH_TO_SOLID_HELPER: &str =
"You can convert a sketch (2D) into a Solid (3D) by calling a function like `extrude` or `revolve`";
#[derive(Debug, Clone)]
pub struct Arg {
/// The evaluated argument.
@ -220,18 +223,19 @@ impl Args {
ty.human_friendly_type(),
);
let suggestion = match (ty, actual_type_name) {
(RuntimeType::Primitive(PrimitiveType::Solid), "Sketch") => Some(
"You can convert a sketch (2D) into a Solid (3D) by calling a function like `extrude` or `revolve`",
),
(RuntimeType::Array(t, _), "Sketch") if **t == RuntimeType::Primitive(PrimitiveType::Solid) => Some(
"You can convert a sketch (2D) into a Solid (3D) by calling a function like `extrude` or `revolve`",
),
(RuntimeType::Primitive(PrimitiveType::Solid), "Sketch") => Some(ERROR_STRING_SKETCH_TO_SOLID_HELPER),
(RuntimeType::Array(t, _), "Sketch") if **t == RuntimeType::Primitive(PrimitiveType::Solid) => {
Some(ERROR_STRING_SKETCH_TO_SOLID_HELPER)
}
_ => None,
};
let message = match suggestion {
let mut message = match suggestion {
None => msg_base,
Some(sugg) => format!("{msg_base}. {sugg}"),
};
if message.contains("one or more Solids or imported geometry but it's actually of type Sketch") {
message = format!("{message}. {ERROR_STRING_SKETCH_TO_SOLID_HELPER}");
}
KclError::Semantic(KclErrorDetails {
source_ranges: arg.source_ranges(),
message,
@ -343,18 +347,20 @@ impl Args {
ty.human_friendly_type(),
);
let suggestion = match (ty, actual_type_name) {
(RuntimeType::Primitive(PrimitiveType::Solid), "Sketch") => Some(
"You can convert a sketch (2D) into a Solid (3D) by calling a function like `extrude` or `revolve`",
),
(RuntimeType::Array(ty, _), "Sketch") if **ty == RuntimeType::Primitive(PrimitiveType::Solid) => Some(
"You can convert a sketch (2D) into a Solid (3D) by calling a function like `extrude` or `revolve`",
),
(RuntimeType::Primitive(PrimitiveType::Solid), "Sketch") => Some(ERROR_STRING_SKETCH_TO_SOLID_HELPER),
(RuntimeType::Array(ty, _), "Sketch") if **ty == RuntimeType::Primitive(PrimitiveType::Solid) => {
Some(ERROR_STRING_SKETCH_TO_SOLID_HELPER)
}
_ => None,
};
let message = match suggestion {
let mut message = match suggestion {
None => msg_base,
Some(sugg) => format!("{msg_base}. {sugg}"),
};
if message.contains("one or more Solids or imported geometry but it's actually of type Sketch") {
message = format!("{message}. {ERROR_STRING_SKETCH_TO_SOLID_HELPER}");
}
KclError::Semantic(KclErrorDetails {
source_ranges: arg.source_ranges(),
message,
@ -1396,6 +1402,26 @@ impl<'a> FromKclValue<'a> for crate::execution::SolidOrSketchOrImportedGeometry
}
}
impl<'a> FromKclValue<'a> for crate::execution::SolidOrImportedGeometry {
fn from_kcl_val(arg: &'a KclValue) -> Option<Self> {
match arg {
KclValue::Solid { value } => Some(Self::SolidSet(vec![(**value).clone()])),
KclValue::HomArray { value, .. } => {
let mut solids = vec![];
for item in value {
match item {
KclValue::Solid { value } => solids.push((**value).clone()),
_ => return None,
}
}
Some(Self::SolidSet(solids))
}
KclValue::ImportedGeometry(value) => Some(Self::ImportedGeometry(Box::new(value.clone()))),
_ => None,
}
}
}
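The `FromKclValue` impl above is what feeds that union type: a single solid becomes a one-element set, a homogeneous array of solids becomes a set, imported geometry passes through, and anything else returns `None` so the caller can raise the usual type error (now suffixed with the sketch-to-solid hint). A small sketch of that fallible conversion with placeholder value types:

```rust
use uuid::Uuid;

/// Placeholder runtime value, standing in for KclValue.
enum Value {
    Solid(Uuid),
    Array(Vec<Value>),
    Sketch(Uuid),
}

/// Accept a single solid or a homogeneous array of solids; return None for
/// anything else so the caller can produce the type error.
fn as_solid_set(value: &Value) -> Option<Vec<Uuid>> {
    match value {
        Value::Solid(id) => Some(vec![*id]),
        Value::Array(items) => {
            let mut ids = Vec::with_capacity(items.len());
            for item in items {
                match item {
                    Value::Solid(id) => ids.push(*id),
                    // One non-solid element disqualifies the whole array.
                    _ => return None,
                }
            }
            Some(ids)
        }
        _ => None,
    }
}

fn main() {
    let solids = Value::Array(vec![Value::Solid(Uuid::new_v4()), Value::Solid(Uuid::new_v4())]);
    assert_eq!(as_solid_set(&solids).map(|ids| ids.len()), Some(2));
    assert!(as_solid_set(&Value::Sketch(Uuid::new_v4())).is_none());
}
```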
impl<'a> FromKclValue<'a> for super::sketch::SketchData {
fn from_kcl_val(arg: &'a KclValue) -> Option<Self> {
// Order is critical since PlaneData is a subset of Plane.

View File

@ -171,7 +171,8 @@ async fn inner_scale(
args.flush_batch_for_solids(exec_state, solids).await?;
}
for object_id in objects.ids() {
let mut objects = objects.clone();
for object_id in objects.ids(&args.ctx).await? {
let id = exec_state.next_uuid();
args.batch_modeling_cmd(
@ -409,7 +410,8 @@ async fn inner_translate(
args.flush_batch_for_solids(exec_state, solids).await?;
}
for object_id in objects.ids() {
let mut objects = objects.clone();
for object_id in objects.ids(&args.ctx).await? {
let id = exec_state.next_uuid();
args.batch_modeling_cmd(
@ -774,7 +776,8 @@ async fn inner_rotate(
args.flush_batch_for_solids(exec_state, solids).await?;
}
for object_id in objects.ids() {
let mut objects = objects.clone();
for object_id in objects.ids(&args.ctx).await? {
let id = exec_state.next_uuid();
if let (Some(axis), Some(angle)) = (axis, angle) {

View File

@ -86,7 +86,7 @@ async fn do_execute_and_snapshot(
) -> Result<(ExecState, EnvironmentRef, image::DynamicImage), ExecErrorWithState> {
let mut exec_state = ExecState::new(ctx);
let result = ctx
.run(&program, &mut exec_state)
.run_single_threaded(&program, &mut exec_state)
.await
.map_err(|err| ExecErrorWithState::new(err.into(), exec_state.clone()))?;
for e in exec_state.errors() {

View File

@ -6,31 +6,37 @@ use std::{
use anyhow::Result;
use crate::{
parsing::ast::types::{ImportPath, NodeRef, Program},
errors::KclErrorDetails,
modules::{ModulePath, ModuleRepr},
parsing::ast::types::{ImportPath, ImportStatement, Node as AstNode, NodeRef, Program},
walk::{Node, Visitable},
ExecState, ExecutorContext, KclError, ModuleId, SourceRange,
};
/// Specific dependency between two modules. The 0th element of this tuple
/// Specific dependency between two modules. The 0th element of this info
/// is the "importing" module, the 1st is the "imported" module. The 0th
/// module *depends on* the 1st module.
type Dependency = (String, String);
type Graph = Vec<Dependency>;
type DependencyInfo = (AstNode<ImportStatement>, ModuleId, ModulePath, AstNode<Program>);
type Universe = HashMap<String, DependencyInfo>;
/// Process a number of programs, returning the graph of dependencies.
///
/// This will (currently) return a list of lists of IDs that can be safely
/// run concurrently. Each "stage" is blocking in this model, which will
/// change in the future. Don't use this function widely, yet.
#[allow(clippy::iter_over_hash_type)]
pub fn import_graph(progs: HashMap<String, NodeRef<'_, Program>>) -> Result<Vec<Vec<String>>> {
pub fn import_graph(progs: &Universe, ctx: &ExecutorContext) -> Result<Vec<Vec<String>>, KclError> {
let mut graph = Graph::new();
for (name, program) in progs.iter() {
for (name, (_, _, _, program)) in progs.iter() {
graph.extend(
import_dependencies(program)?
import_dependencies(program, ctx)?
.into_iter()
.map(|dependency| (name.clone(), dependency))
.map(|(dependency, _, _)| (name.clone(), dependency))
.collect::<Vec<_>>(),
);
}
@ -40,7 +46,10 @@ pub fn import_graph(progs: HashMap<String, NodeRef<'_, Program>>) -> Result<Vec<
}
#[allow(clippy::iter_over_hash_type)]
fn topsort(all_modules: &[&str], graph: Graph) -> Result<Vec<Vec<String>>> {
fn topsort(all_modules: &[&str], graph: Graph) -> Result<Vec<Vec<String>>, KclError> {
if all_modules.is_empty() {
return Ok(vec![]);
}
let mut dep_map = HashMap::<String, Vec<String>>::new();
for (dependent, dependency) in graph.iter() {
@ -83,7 +92,13 @@ fn topsort(all_modules: &[&str], graph: Graph) -> Result<Vec<Vec<String>>> {
}
if stage_modules.is_empty() {
anyhow::bail!("imports are acyclic");
waiting_modules.sort();
return Err(KclError::ImportCycle(KclErrorDetails {
message: format!("circular import of modules not allowed: {}", waiting_modules.join(", ")),
// TODO: we can get the right import lines from the AST, but we don't do that yet
source_ranges: vec![SourceRange::default()],
}));
}
// not strictly needed here, but perhaps helpful to avoid thinking
@ -101,33 +116,97 @@ fn topsort(all_modules: &[&str], graph: Graph) -> Result<Vec<Vec<String>>> {
Ok(order)
}
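`import_graph` and `topsort` now return a `KclError` (including the new import-cycle case) and produce stages of modules that can run concurrently. A standalone sketch of that staged topological sort over owned strings, under the assumption that every dependency also appears as a key in the map:

```rust
use std::collections::{HashMap, HashSet};

/// Group modules into stages: every module in a stage depends only on
/// modules from earlier stages. Error out if nothing can make progress.
fn stages(deps: &HashMap<String, Vec<String>>) -> Result<Vec<Vec<String>>, String> {
    let mut remaining: HashSet<String> = deps.keys().cloned().collect();
    let mut done: HashSet<String> = HashSet::new();
    let mut order: Vec<Vec<String>> = Vec::new();

    while !remaining.is_empty() {
        let mut stage: Vec<String> = remaining
            .iter()
            .filter(|m| deps[m.as_str()].iter().all(|d| done.contains(d)))
            .cloned()
            .collect();
        if stage.is_empty() {
            // No module became runnable, so the leftovers form an import cycle.
            let mut waiting: Vec<String> = remaining.iter().cloned().collect();
            waiting.sort();
            return Err(format!(
                "circular import of modules not allowed: {}",
                waiting.join(", ")
            ));
        }
        stage.sort(); // deterministic output, mirroring the real implementation
        for module in &stage {
            remaining.remove(module);
            done.insert(module.clone());
        }
        order.push(stage);
    }
    Ok(order)
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("a.kcl".to_string(), vec![]);
    deps.insert("b.kcl".to_string(), vec!["a.kcl".to_string()]);
    deps.insert("c.kcl".to_string(), vec!["a.kcl".to_string()]);
    // a.kcl runs alone in the first stage; b.kcl and c.kcl can then run concurrently.
    assert_eq!(
        stages(&deps).unwrap(),
        vec![
            vec!["a.kcl".to_string()],
            vec!["b.kcl".to_string(), "c.kcl".to_string()],
        ]
    );
}
```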
pub(crate) fn import_dependencies(prog: NodeRef<'_, Program>) -> Result<Vec<String>> {
type ImportDependencies = Vec<(String, AstNode<ImportStatement>, ModulePath)>;
pub(crate) fn import_dependencies(
prog: NodeRef<Program>,
ctx: &ExecutorContext,
) -> Result<ImportDependencies, KclError> {
let ret = Arc::new(Mutex::new(vec![]));
fn walk(ret: Arc<Mutex<Vec<String>>>, node: Node<'_>) {
fn walk(ret: Arc<Mutex<ImportDependencies>>, node: Node<'_>, ctx: &ExecutorContext) -> Result<(), KclError> {
if let Node::ImportStatement(is) = node {
let dependency = match &is.path {
ImportPath::Kcl { filename } => filename.to_string(),
ImportPath::Foreign { path } => path.to_string(),
ImportPath::Std { path } => path.join("::"),
};
// We only care about Kcl imports for now.
if let ImportPath::Kcl { filename } = &is.path {
let resolved_path = ModulePath::from_import_path(&is.path, &ctx.settings.project_directory);
ret.lock().unwrap().push(dependency);
// We need to lock the mutex to push the dependency.
// This is a bit of a hack, but it works for now.
ret.lock()
.map_err(|err| {
KclError::Internal(KclErrorDetails {
message: format!("Failed to lock mutex: {}", err),
source_ranges: Default::default(),
})
})?
.push((filename.to_string(), is.clone(), resolved_path));
}
}
for child in node.children().iter() {
walk(ret.clone(), *child)
walk(ret.clone(), *child, ctx)?;
}
Ok(())
}
walk(ret.clone(), prog.into());
walk(ret.clone(), prog.into(), ctx)?;
let ret = ret.lock().unwrap().clone();
Ok(ret)
let ret = ret.lock().map_err(|err| {
KclError::Internal(KclErrorDetails {
message: format!("Failed to lock mutex: {}", err),
source_ranges: Default::default(),
})
})?;
Ok(ret.clone())
}
pub(crate) async fn import_universe(
ctx: &ExecutorContext,
prog: NodeRef<'_, Program>,
out: &mut Universe,
exec_state: &mut ExecState,
) -> Result<(), KclError> {
let modules = import_dependencies(prog, ctx)?;
for (filename, import_stmt, module_path) in modules {
if out.contains_key(&filename) {
continue;
}
let module_id = ctx
.open_module(&import_stmt.path, &[], exec_state, Default::default())
.await?;
let program = {
let Some(module_info) = exec_state.get_module(module_id) else {
return Err(KclError::Internal(KclErrorDetails {
message: format!("Module {} not found", module_id),
source_ranges: vec![import_stmt.into()],
}));
};
let ModuleRepr::Kcl(program, _) = &module_info.repr else {
// if it's not a KCL module we can skip it since it has no
// dependencies.
continue;
};
program.clone()
};
out.insert(
filename.clone(),
(import_stmt.clone(), module_id, module_path.clone(), program.clone()),
);
Box::pin(import_universe(ctx, &program, out, exec_state)).await?;
}
Ok(())
}
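`import_universe` above discovers the transitive KCL imports depth-first, skipping anything already seen and boxing the recursive call because async recursion needs indirection. A minimal sketch of that pattern, with a plain name-to-imports map standing in for `open_module` and `ModuleRepr`:

```rust
use std::collections::HashMap;

/// Hypothetical module table: each module name maps to the modules it imports.
type Sources = HashMap<String, Vec<String>>;

/// Depth-first discovery of every transitive import. The value stored per
/// module is just its own import list, standing in for the parsed program.
/// The recursive call is boxed because async recursion needs indirection.
async fn collect(sources: &Sources, name: &str, out: &mut HashMap<String, Vec<String>>) {
    let Some(imports) = sources.get(name) else { return };
    for import in imports {
        if out.contains_key(import) {
            continue; // already discovered; shared imports are only walked once
        }
        out.insert(import.clone(), sources.get(import).cloned().unwrap_or_default());
        Box::pin(collect(sources, import, out)).await;
    }
}

#[tokio::main]
async fn main() {
    let mut sources = Sources::new();
    sources.insert("main.kcl".into(), vec!["b.kcl".into()]);
    sources.insert("b.kcl".into(), vec!["a.kcl".into()]);
    sources.insert("a.kcl".into(), vec![]);

    let mut universe = HashMap::new();
    collect(&sources, "main.kcl", &mut universe).await;
    // b.kcl and a.kcl are discovered; the root module itself is not in the universe.
    assert_eq!(universe.len(), 2);
}
```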
#[cfg(test)]
mod tests {
use super::*;
use crate::parsing::ast::types::ImportSelector;
macro_rules! kcl {
( $kcl:expr ) => {{
@ -135,26 +214,41 @@ mod tests {
}};
}
#[test]
fn order_imports() {
fn into_module_info(program: AstNode<Program>) -> DependencyInfo {
(
AstNode::no_src(ImportStatement {
selector: ImportSelector::None { alias: None },
path: ImportPath::Kcl { filename: "".into() },
visibility: Default::default(),
digest: None,
}),
ModuleId::default(),
ModulePath::Local { value: "".into() },
program,
)
}
#[tokio::test]
async fn order_imports() {
let mut modules = HashMap::new();
let a = kcl!("");
modules.insert("a.kcl".to_owned(), &a);
modules.insert("a.kcl".to_owned(), into_module_info(a));
let b = kcl!(
"
import \"a.kcl\"
"
);
modules.insert("b.kcl".to_owned(), &b);
modules.insert("b.kcl".to_owned(), into_module_info(b));
let order = import_graph(modules).unwrap();
let ctx = ExecutorContext::new_mock().await;
let order = import_graph(&modules, &ctx).unwrap();
assert_eq!(vec![vec!["a.kcl".to_owned()], vec!["b.kcl".to_owned()]], order);
}
#[test]
fn order_imports_none() {
#[tokio::test]
async fn order_imports_none() {
let mut modules = HashMap::new();
let a = kcl!(
@ -162,49 +256,51 @@ import \"a.kcl\"
y = 2
"
);
modules.insert("a.kcl".to_owned(), &a);
modules.insert("a.kcl".to_owned(), into_module_info(a));
let b = kcl!(
"
x = 1
"
);
modules.insert("b.kcl".to_owned(), &b);
modules.insert("b.kcl".to_owned(), into_module_info(b));
let order = import_graph(modules).unwrap();
let ctx = ExecutorContext::new_mock().await;
let order = import_graph(&modules, &ctx).unwrap();
assert_eq!(vec![vec!["a.kcl".to_owned(), "b.kcl".to_owned()]], order);
}
#[test]
fn order_imports_2() {
#[tokio::test]
async fn order_imports_2() {
let mut modules = HashMap::new();
let a = kcl!("");
modules.insert("a.kcl".to_owned(), &a);
modules.insert("a.kcl".to_owned(), into_module_info(a));
let b = kcl!(
"
import \"a.kcl\"
"
);
modules.insert("b.kcl".to_owned(), &b);
modules.insert("b.kcl".to_owned(), into_module_info(b));
let c = kcl!(
"
import \"a.kcl\"
"
);
modules.insert("c.kcl".to_owned(), &c);
modules.insert("c.kcl".to_owned(), into_module_info(c));
let order = import_graph(modules).unwrap();
let ctx = ExecutorContext::new_mock().await;
let order = import_graph(&modules, &ctx).unwrap();
assert_eq!(
vec![vec!["a.kcl".to_owned()], vec!["b.kcl".to_owned(), "c.kcl".to_owned()]],
order
);
}
#[test]
fn order_imports_cycle() {
#[tokio::test]
async fn order_imports_cycle() {
let mut modules = HashMap::new();
let a = kcl!(
@ -212,15 +308,16 @@ import \"a.kcl\"
import \"b.kcl\"
"
);
modules.insert("a.kcl".to_owned(), &a);
modules.insert("a.kcl".to_owned(), into_module_info(a));
let b = kcl!(
"
import \"a.kcl\"
"
);
modules.insert("b.kcl".to_owned(), &b);
modules.insert("b.kcl".to_owned(), into_module_info(b));
import_graph(modules).unwrap_err();
let ctx = ExecutorContext::new_mock().await;
import_graph(&modules, &ctx).unwrap_err();
}
}

View File

@ -8,3 +8,5 @@ mod import_graph;
pub use ast_node::Node;
pub use ast_visitor::Visitable;
pub use ast_walk::walk;
pub use import_graph::import_graph;
pub(crate) use import_graph::import_universe;

View File

@ -122,11 +122,8 @@ export type string
/// myRect = rect([20, 0])
///
/// myRect
/// |> extrude(10, %)
/// |> fillet(
/// radius = 0.5,
/// tags = [myRect.tags.rectangleSegmentA001]
/// )
/// |> extrude(length = 10)
/// |> fillet(radius = 0.5, tags = [myRect.tags.rectangleSegmentA001])
/// ```
///
/// See how we use the tag `rectangleSegmentA001` in the `fillet` function outside

View File

@ -28,5 +28,29 @@ description: Artifact commands add_lots.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -360,5 +360,29 @@ description: Artifact commands angled_line.kcl
"edge_id": "[uuid]",
"face_id": "[uuid]"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands argument_error.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -1,5 +1,5 @@
---
source: kcl/src/simulation_tests.rs
source: kcl-lib/src/simulation_tests.rs
description: Error from executing argument_error.kcl
---
KCL Type error

View File

@ -28,5 +28,29 @@ description: Artifact commands array_elem_pop.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands array_elem_pop_empty_fail.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands array_elem_pop_fail.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands array_elem_push.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands array_elem_push_fail.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands array_index_oob.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands array_range_expr.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands array_range_negative_expr.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -520,5 +520,29 @@ description: Artifact commands artifact_graph_example_code1.kcl
"edge_id": "[uuid]",
"face_id": "[uuid]"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -291,5 +291,29 @@ description: Artifact commands artifact_graph_example_code_no_3d.kcl
"angle_snap_increment": null
}
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -214,5 +214,29 @@ description: Artifact commands artifact_graph_example_code_offset_planes.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -881,5 +881,29 @@ description: Artifact commands artifact_graph_sketch_on_face_etc.kcl
"edge_id": "[uuid]",
"face_id": "[uuid]"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -37,6 +37,30 @@ description: Artifact commands assembly_mixed_units_cubes.kcl
"unit": "in"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
@ -602,13 +626,5 @@ description: Artifact commands assembly_mixed_units_cubes.kcl
"edge_id": "[uuid]",
"face_id": "[uuid]"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "in"
}
}
]

View File

@ -37,6 +37,30 @@ description: Artifact commands assembly_non_default_units.kcl
"unit": "in"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
@ -254,5 +278,13 @@ description: Artifact commands assembly_non_default_units.kcl
"type": "close_path",
"path_id": "[uuid]"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "in"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands bad_units_in_annotation.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -320,5 +320,29 @@ description: Artifact commands basic_fillet_cube_close_opposite.kcl
"tolerance": 0.0000001,
"cut_type": "fillet"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -320,5 +320,29 @@ description: Artifact commands basic_fillet_cube_end.kcl
"tolerance": 0.0000001,
"cut_type": "fillet"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -308,5 +308,29 @@ description: Artifact commands basic_fillet_cube_next_adjacent.kcl
"tolerance": 0.0000001,
"cut_type": "fillet"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -308,5 +308,29 @@ description: Artifact commands basic_fillet_cube_previous_adjacent.kcl
"tolerance": 0.0000001,
"cut_type": "fillet"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -310,5 +310,29 @@ description: Artifact commands basic_fillet_cube_start.kcl
"tolerance": 0.0000001,
"cut_type": "fillet"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -249,5 +249,29 @@ description: Artifact commands big_number_angle_to_match_length_x.kcl
"edge_id": "[uuid]",
"face_id": "[uuid]"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -249,5 +249,29 @@ description: Artifact commands big_number_angle_to_match_length_y.kcl
"edge_id": "[uuid]",
"face_id": "[uuid]"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands boolean_logical_and.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

View File

@ -28,5 +28,29 @@ description: Artifact commands boolean_logical_multiple.kcl
"object_id": "[uuid]",
"hidden": true
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
},
{
"cmdId": "[uuid]",
"range": [],
"command": {
"type": "set_scene_units",
"unit": "mm"
}
}
]

Some files were not shown because too many files have changed in this diff.