BrodyEnli

I made a few final tweaks (assigning part numbers, changing out the resistors, LED, and button, adjusting some traces to meet JLCPCB's recommendations, etc.) and applied for the OnBoard grant to have my LucidVR haptic glove control PCB manufactured and assembled through JLCPCB. I should have included this in my last "ship", but I forgot about the grant; oops. github.com/hackclub/OnBoard/pull/770, github.com/Glitch752/LucidGlovesPCB
https://scrapbook-into-the-redwoods.s3.amazonaws.com/a0324cfc-8d0b-4d65-a073-7833148cf6af-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/15b25117-c59e-4b22-8ee8-7ac4330804fc-image.png
https://imgutil.s3.us-east-2.amazonaws.com/770be7f00229e1b9467bd202bb1bb31cdb553b988562ccf0eca2650cb5835e5b/3a9a48f5-47cb-4816-8aa0-342a3507c03f.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/355777bf-6cb6-4233-a89a-c9efcce352ea-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/211103f4-0210-47e1-b7f3-74d0ee20e736-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/1362961b-2752-4c97-9821-3d6a44bd56f8-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/eb93c3cd-5a7c-4488-bb19-d6b89579da45-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/ad774918-7740-4ee0-9627-b9ee32ca5840-image.png
https://imgutil.s3.us-east-2.amazonaws.com/b81f5ee12dec014418075a425493679732abaa461dbe10d9b6587cce5b59221e/c35b1287-2729-4046-be0b-fa54a27d9eb7.png
I implemented a lexer, parser, and tree-walking interpreter for a custom programming language to learn Zig: github.com/Glitch752/ZigCompiler I eventually want to add more advanced syntax concepts like classes (and proper functions), then switch to an actual compiler, as the repository name would imply. The long-term goal is self-hosting the language; I've never made it far enough in my language projects to achieve that. The programs it can run so far are minimal, but there's a fairly extensible internal API for defining native functions and behavior. I'm quite happy with the performance as well: the 64-layer Sierpinski triangle I showed is generated in an imperceptibly short time. Not that that's super impressive, but for a first attempt using a notoriously slow strategy, I didn't expect much. The language includes relatively robust error reporting (both at runtime and during parsing) but is dynamically typed with no static analysis yet. While I plan to continue this project at some point, I hit my goal of running a nontrivial program (while learning a lot about Zig!), so I'd consider this a "shippable" state.
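For anyone unfamiliar with the approach: a tree-walking interpreter evaluates the parsed AST directly by recursing over its nodes, which is simple to build but slow compared to compiled code. Here's a minimal Python sketch of the idea (the actual project is written in Zig, and every name here is invented for illustration):

```python
from dataclasses import dataclass

# Two illustrative AST node types a parser might produce.
@dataclass
class Num:
    value: float

@dataclass
class BinOp:
    op: str
    left: object
    right: object

def evaluate(node):
    """Recursively walk the AST, reducing each subtree to a value."""
    if isinstance(node, Num):
        return node.value
    if isinstance(node, BinOp):
        lhs = evaluate(node.left)
        rhs = evaluate(node.right)
        if node.op == "+":
            return lhs + rhs
        if node.op == "*":
            return lhs * rhs
    raise ValueError("unknown node type")

# (1 + 2) * 4
tree = BinOp("*", BinOp("+", Num(1), Num(2)), Num(4))
```

A real interpreter dispatches over many more node types (variables, calls, control flow), but the recursive shape stays the same, which is why it's called "tree-walking".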
https://scrapbook-into-the-redwoods.s3.amazonaws.com/3b4f543a-e1f3-412f-a155-4e03c4c62138-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/c7ec3af9-45b9-42c4-90a5-a9b9985bff4e-image.png
https://imgutil.s3.us-east-2.amazonaws.com/bc450bb90550342185a60f2b95fb28fd555d995aa7578e126a5fdfd598ce04ec/1144f1e5-3b7b-44dc-8ac7-7f795283bcf6.png
I built a customizable on-screen, GPU-accelerated OCR application: github.com/Glitch752/OnScreenOCR Here's a non-exhaustive list of its features:
• Fully GPU-accelerated rendering using wgpu
• Live preview of the OCR result
• Support for taking screenshots
• Support for multiple OCR languages
• Result fixing and reformatting
  ◦ Reformatting to remove hyphens from the ends of lines, moving each split word to fit entirely on one line
  ◦ Hopefully more in the future if I find any annoyances
• Ability to copy without newlines
• Ability to fine-tune Tesseract's parameters
  ◦ Ability to export in other Tesseract formats (TSV, ALTO, hOCR)
• Support for non-rectangular selections
• Support for multiple monitors
• Keybinds for common actions (Ctrl+C to copy, Ctrl+Z to undo, arrow keys to move the selection, etc.)
• Full undo/redo history
• Stays in the system tray when closed
• Numerous intuitive selection interactions, including drawing outlines, shifting edges/vertices, removing edges/vertices, and more
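The hyphen-removal reformatting works on the idea that OCR'd text often splits a word across lines with a trailing hyphen, and the second half should be pulled up so the word sits whole on the first line. A hedged Python sketch of that idea (not the app's actual code; the function name is invented):

```python
def dehyphenate(text: str) -> str:
    """Join words that were split across lines with a trailing hyphen,
    moving the continuation up so the whole word fits on one line."""
    out = []
    for line in text.split("\n"):
        if out and out[-1].endswith("-"):
            # Complete the split word with the first token of this line.
            parts = line.split(" ", 1)
            out[-1] = out[-1][:-1] + parts[0]
            line = parts[1] if len(parts) > 1 else ""
            if not line:
                continue  # nothing left on this line after the merge
        out.append(line)
    return "\n".join(out)
```

A production version would also need to distinguish real hyphenated compounds ("GPU-accelerated") from line-break hyphens, which is where most of the annoying edge cases live.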
https://scrapbook-into-the-redwoods.s3.amazonaws.com/192ce3cb-d8fc-4c9b-809d-feb49b4bb5bb-image.pnghttps://imgutil.s3.us-east-2.amazonaws.com/69c5e4f8ecc3a1bb8887056929e8b28ee6407f86f524d8713c9dd2451ec44234/e15c8dfa-9e69-4a20-8830-e786aa383085.png
Since my last update, I did a few things:
• Made my desktop port of the raytracer multithreaded and rendered a really nice 1080p image to finish the book. This is still the exact same raytracing code from the calculator, just modified to run one process per thread and merge the images at the end. I tried to mimic the final scene from the book; it's the first image attached.
• Made the calculator version render images progressively, so I can pause it at any time and still have a nice-looking image. I thought this would be simple: store the accumulated color of every pixel in a list, add to it each pass, and divide by the total sample count. However, the restricted environment I'm working in is starting to show. It turns out Python code is only allowed an extremely small amount of memory -- around 20KB from my testing. This meant that, no matter how I stored the data (unless there's some magical way to losslessly and efficiently store a color per pixel that I'm unaware of), it was a tradeoff between rendering at full resolution without the new feature or at quarter resolution with it. Overnight, I did a quarter-resolution render, and I'll probably go back and do a full-resolution one without the progressive system. The quarter-resolution render still looks great, though! It intentionally has fairly aggressive depth of field, so the blurred left and right spheres are expected.
It's cool that stuff like this can be done on a calculator (and programmed on a calculator)! The second image is a screenshot taken through my fixed libnspire, and the third is the same image on the calculator's screen. I've been doing a scrapbook post for every hack session (and often doing work without a hack session), so I misunderstood the proper flow there... However, consider this my true "ship" of this project idea.
I'll probably keep adding to it with concepts from the later books, but I also have some other projects I'm excited to work on!
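The accumulate-and-divide scheme above can be sketched like this -- an illustrative Python version with invented names, not the on-calculator code (which is exactly what wouldn't fit in the ~20KB memory limit at full resolution):

```python
# Progressive rendering: keep a running color sum per pixel and divide by
# the pass count, so a valid image exists after any number of passes.
class ProgressiveBuffer:
    def __init__(self, width, height):
        # One accumulated (r, g, b) sum per pixel -- this per-pixel state
        # is the memory cost that forced the resolution tradeoff.
        self.accum = [[0.0, 0.0, 0.0] for _ in range(width * height)]
        self.passes = 0

    def add_pass(self, sample_colors):
        """Fold one full-image pass of per-pixel samples into the sums."""
        for px, (r, g, b) in zip(self.accum, sample_colors):
            px[0] += r
            px[1] += g
            px[2] += b
        self.passes += 1

    def current_image(self):
        """Average the sums; pausing after any pass still yields an image."""
        return [tuple(c / self.passes for c in px) for px in self.accum]
```

Each extra pass just sharpens the average, which is why the render can be paused at any point and still look reasonable.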
https://scrapbook-into-the-redwoods.s3.amazonaws.com/758f25c2-8aa1-4771-baa4-2a996e84a76f-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/9560ab25-f8d7-4cbc-833c-46c266bd4dbe-image.png
https://scrapbook-into-the-redwoods.s3.amazonaws.com/8a26526b-a4a0-4e91-b9ef-d46fdfc878e8-20240619_130029.jpg
Okay... this is cheating a bit, but I wanted to see how the scene would look if I spent a long time rendering it properly. I used my patched libnspire to download the .tns file and extracted its contents. Besides swapping ti_draw out for PIL and increasing the resolution and sample count, this is the unmodified code from the calculator. I'll do a "real" render on the calculator overnight, but this is what the raytracer is capable of so far! I'm quite impressed with the versatility of the TI-Nspire's built-in Python runtime.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/0c60e904-e795-4bf3-9341-ad1ec4f5f5f1-image.png
I finished my depth-of-field render! This is 100 samples -- far more than I've successfully done before (although it took 38 minutes...). The focus plane unfortunately fell outside the range of the spheres, but the result still looks interesting despite being blurry!
Well... it's an understatement to say that taking a screenshot was harder than I expected. I basically needed to reverse-engineer the handshake, since my TI-Nspire doesn't respond the way libnspire expects. After a couple of hours of work (which I should have logged as arcade hours... whoops), I finally got a screenshot from my calculator without the paid student software! Here's a render taken directly from the calculator screen!
https://scrapbook-into-the-redwoods.s3.amazonaws.com/4d68c75f-c895-49fc-8754-7d708d15f7c4-render_test_fixed.png
The camera position and field of view can be changed now. I spent a while rendering a nice-looking (albeit still 1/16-resolution, low-sample-count) image with a lower field of view, which shows off the dielectric material nicely! After this, the only thing left to implement from Ray Tracing in One Weekend is depth of field (which the book calls defocus blur). Maybe I'll implement some of Ray Tracing: The Next Week after that. Some of it doesn't apply to my renderer, like bounding volume hierarchies, since I can't load 3D models... unless I convert a 3D model to a Python file and transfer it.
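For reference, the field-of-view change boils down to the book's viewport math: the viewport height scales with tan(θ/2), so a lower vertical FOV narrows the viewport and zooms the camera in. A tiny Python sketch of that relationship (illustrative, not my calculator code):

```python
import math

def viewport_height(vfov_degrees: float, focal_length: float = 1.0) -> float:
    """Viewport height for a vertical field of view:
    h = 2 * tan(theta / 2), scaled by the focal length."""
    theta = math.radians(vfov_degrees)
    return 2.0 * math.tan(theta / 2.0) * focal_length

# A 90-degree vfov gives a viewport height of 2.0 at focal length 1;
# dropping the vfov shrinks the viewport, which reads as zooming in.
```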
https://scrapbook-into-the-redwoods.s3.amazonaws.com/e1468e3f-1c96-410f-afe2-52e5c92a037e-20240618_152843.jpg
https://scrapbook-into-the-redwoods.s3.amazonaws.com/5cab7227-fb25-47c9-b212-41e88cfdd36c-20240618_151644.jpg
It's difficult to see in a photo (and the render definitely needs more samples, but I opted for a higher resolution this time), but dielectrics are working! The sphere on the left is a hollow glass sphere that bends light just as one would expect!
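The dielectric material in the book combines Snell's law for refraction with Schlick's approximation for how much light reflects instead. A hedged Python sketch of those two pieces using plain tuple vectors (illustrative, not the calculator's vector-class code):

```python
import math

def refract(uv, n, etai_over_etat):
    """Snell's law for a unit incident direction uv and unit normal n,
    with etai_over_etat the ratio of refractive indices."""
    dot = sum(a * b for a, b in zip(uv, n))
    cos_theta = min(-dot, 1.0)
    # Component of the refracted ray perpendicular to the normal.
    r_out_perp = tuple(etai_over_etat * (a + cos_theta * b)
                       for a, b in zip(uv, n))
    perp_len_sq = sum(a * a for a in r_out_perp)
    # Component parallel to the normal (pointing into the surface).
    parallel_scale = -math.sqrt(abs(1.0 - perp_len_sq))
    return tuple(p + parallel_scale * b for p, b in zip(r_out_perp, n))

def reflectance(cosine, ref_idx):
    """Schlick's approximation for the Fresnel reflectance."""
    r0 = ((1 - ref_idx) / (1 + ref_idx)) ** 2
    return r0 + (1 - r0) * (1 - cosine) ** 5
```

The hollow-glass look comes from nesting a second sphere with the inverse index ratio inside the first, so light refracts on the way in and back out.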
https://scrapbook-into-the-redwoods.s3.amazonaws.com/e1497b49-60d8-4cb1-9775-ab66afe4900c-image.png
I finished the schematic layout of my LucidVR haptic gloves PCB. I might add a few more buttons or status LEDs since there are some free GPIO pins.
https://scrapbook-into-the-redwoods.s3.amazonaws.com/a65964ab-2a11-4d79-8615-fc0de2fc62d2-image.png
Here's a ~30x-speed video of most of a higher-resolution, higher-sample-count render. Since my last update, I implemented fuzzy reflections on metal (the left sphere has very fuzzy reflections; the right has almost none) and started work on dielectrics (although there are none in this scene).
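Fuzzy metal works by perturbing the mirror-reflected direction with a small random offset: the bigger the fuzz factor, the rougher the surface looks. A Python sketch with tuple vectors (illustrative names, not the calculator code):

```python
import random

def reflect(v, n):
    """Mirror reflection: v - 2*(v . n)*n."""
    d = sum(a * b for a, b in zip(v, n))
    return tuple(a - 2 * d * b for a, b in zip(v, n))

def random_in_unit_sphere():
    """Rejection-sample a random point inside the unit sphere."""
    while True:
        p = tuple(random.uniform(-1, 1) for _ in range(3))
        if sum(a * a for a in p) < 1:
            return p

def fuzzy_reflect(v, n, fuzz):
    """Perturb the mirror direction by a random offset scaled by fuzz;
    fuzz=0 is a perfect mirror, larger values look brushed/matte."""
    r = reflect(v, n)
    off = random_in_unit_sphere()
    return tuple(a + fuzz * b for a, b in zip(r, off))
```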
https://scrapbook-into-the-redwoods.s3.amazonaws.com/fa6d45b6-19a3-4142-9cad-f09d640a8a29-20240617_171928.jpg
https://scrapbook-into-the-redwoods.s3.amazonaws.com/7f4e5d35-0b93-499d-92f5-203a64bd1e48-20240617_172025.jpg
Here's a video of it rendering a scene with reflective materials! I'm past this point in Ray Tracing in One Weekend: raytracing.github.io/books/RayTracingInOneWeekend.html#metal/ascenewithmetalspheres Since my last update, I implemented:
• An abstract class for materials
  ◦ Lambertian and metal materials
• A data structure to store material hit records, similar to the book's hit record for geometry
• Material support on the geometry implemented so far
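The material abstraction follows the book's shape: each material answers "given a hit, how does the ray scatter, and how is it attenuated?" A simplified Python sketch (the actual class and method names on the calculator may differ, and the Lambertian's direction randomization is omitted for brevity):

```python
class Material:
    def scatter(self, ray_dir, hit_point, normal):
        """Return (attenuation, scattered_direction), or None if absorbed."""
        raise NotImplementedError

class Lambertian(Material):
    def __init__(self, albedo):
        self.albedo = albedo

    def scatter(self, ray_dir, hit_point, normal):
        # Diffuse bounce: scatter around the surface normal.
        # (The random perturbation of the direction is omitted here.)
        return (self.albedo, normal)

class Metal(Material):
    def __init__(self, albedo):
        self.albedo = albedo

    def scatter(self, ray_dir, hit_point, normal):
        # Mirror reflection of the incoming direction about the normal.
        d = sum(a * b for a, b in zip(ray_dir, normal))
        reflected = tuple(a - 2 * d * b for a, b in zip(ray_dir, normal))
        return (self.albedo, reflected)
```

Keeping scattering behind one interface is what lets the geometry code stay material-agnostic: the sphere only reports the hit, and the material decides what the ray does next.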
Whoops, I posted that one before I started a real session, so hakkuun won't recognize the session. Apparently, if I repost it, I can add it to the session?
https://scrapbook-into-the-redwoods.s3.amazonaws.com/30e4fd92-7434-4970-88ee-3d8649b30104-image.png
It doesn't show up very well through my camera for some reason, but I have a lot more done! I'm past this point in the book now: raytracing.github.io/books/RayTracingInOneWeekend.html#diffusematerials/usinggammacorrectionforaccuratecolorintensity This includes:
• An abstract class for objects that can be hit
• An arbitrary number of objects in the scene
• Multiple samples per pixel (only 10 in this picture, so it's quite grainy), with ray direction randomization in a disc pattern rather than the book's square, allowing for better antialiasing
• Gamma correction (not sure how much this matters given the low accuracy of this screen, but still nice)
• Ray reflections with a bounce limit
• Proper Lambertian reflections
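The disc-pattern jitter and the gamma step can be sketched briefly -- an illustrative Python version with invented names, not the on-calculator code:

```python
import math
import random

def sample_offset_in_disc(radius=0.5):
    """Jitter a ray within a disc (rather than the book's square).
    sqrt on the radius keeps the samples uniform over the disc's area."""
    r = radius * math.sqrt(random.random())
    theta = random.random() * 2 * math.pi
    return (r * math.cos(theta), r * math.sin(theta))

def gamma_correct(linear, gamma=2.0):
    """Map an accumulated linear color channel to display space;
    gamma 2 is the book's simple square-root correction."""
    return linear ** (1.0 / gamma)
```

Averaging many jittered samples per pixel is what smooths edges; gamma correction then keeps the averaged result from looking too dark on screen.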
https://scrapbook-into-the-redwoods.s3.amazonaws.com/7486906e-2e68-4f0a-93cb-cbd2078e8644-20240617_145819.jpg
Here's where I'm currently at with the project (at raytracing.github.io/books/RayTracingInOneWeekend.html#addingasphere/creatingourfirstraytracedimage). I'm typing all of the code on the Nspire, and while the ABCD keyboard is tricky to work with, at least it has a clipboard. I can't really share real code or commits (since there's no easy way to get it off the calculator), but here's a screenshot of a render. I'm translating all the C++ to the Nspire's Python dialect, which thankfully supports classes and operator overloading (although it's some weird mix of Python 2 and Python 3). So far I've implemented:
• A 3D vector class with operator overloading and all that, also used for points and colors
• A pixel rendering system using the built-in ti_draw library
• Final image positioning logic (surprisingly hard to get right, since non-integer rectangle widths are rounded in ti_draw, so some values produce vertical or horizontal lines)
• Ray-sending logic
• Code split into multiple files (also surprisingly hard, because you need to install each file as a Python library for some reason?)
• A ray class with some custom utilities to reduce how much code I need to write
• Sphere-ray hit detection
I haven't followed the book exactly in order, mainly because I've done a lot of raytracing before without it. Notably, I'm straying from their "normal always faces the ray" convention because I find "normal always faces outward" easier to code and think about; this means I need to change some hit and reflection logic. The render is set to a very low resolution right now, and it will get especially slow once I add multiple samples per pixel. I want to add an ETA so I can run some renders overnight, but I'm not quite to that point yet.
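Sphere-ray hit detection reduces to solving a quadratic in the ray parameter t, from |o + t·d − c|² = r². A Python sketch of that step using plain tuples (illustrative; the on-calculator version goes through the vector class):

```python
import math

def hit_sphere(center, radius, origin, direction):
    """Return the nearest positive ray parameter t where the ray
    origin + t*direction hits the sphere, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    half_b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = half_b * half_b - a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    # Nearer of the two quadratic roots (entry point of the ray).
    t = (-half_b - math.sqrt(disc)) / a
    return t if t > 0 else None

# A ray from the origin straight down -z hits a unit sphere
# centered at z = -3 at t = 2.
```

With the "normal always faces outward" convention mentioned above, the outward normal at the hit point is just (hit_point − center) / radius, and downstream shading code has to check the ray direction against it explicitly.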
https://scrapbook-into-the-redwoods.s3.amazonaws.com/559661a4-8735-42b7-a4d6-4485c1d755eb-20240617_143644.jpg