1. Post #1241

    May 2016
    118 Posts
    That's pretty neat though. I haven't tried to tackle rendering gcode yet, but it's coming shortly. How were you streaming the gcode? Did you get to do anything fancy with the rendering? Most visualizers are pretty sparse and bland, and don't offer many useful features imo.

    My application is just a clone of Cura/Slic3r/Makerware's gcode generator. Lots of discrete geometry, which I can't even pretend to understand. The primary reason we wrote this custom slicer is that we need custom infill generation. My project is part of our contract to create 3D-printed recyclable packaging for the ISS, so our packaging has to attenuate g-forces from launch along with general vibration and sound pressure. So far, our method actually attenuates better than what NASA currently uses - and our material isn't trash.

    The other half of the contract is a hardware rack en route to the ISS in 1/2018 (if the launch window holds). This hardware will take our packaging and recycle it into raw 3D printer filament. It's expanded a bit more into subcontracts for things like utensils and medical devices, which NASA is eyeing for their eventual Mars missions (the story of utensils on the ISS now is a bemusing one, lol). A few robotic assembly offshoots too - our division has a vision and the groundwork for what will, in my opinion, become the building blocks of orbital shipyards and long-term habitation in space.

    I'm a bit worried and self-conscious, as I feel my app has gone way out of scope. Knowing what I know now about C++, I could have kludged what we need into a pre-existing slicer. I earnestly believe that less of my code is cancerous... but then, it's less code overall. I don't have as many useful features as more developed slicers, and developing this alone is scary as fuck because I'm entirely responsible for everything from features to bugs to testing. And there are few features, more than a few bugs, and hardly enough testing :v

    I'm hoping this at least looks kinda OK on a resume, but I feel like a sham regardless. So I busy my mind by doing graphics programming to avoid confronting wtf I do at work. I don't even want to consider how much this has cost to develop, in cash or man (literally singular) hours

    Edited:

    Also, as cool/impressive as parts of that may sound, I got my job literally because I emailed my now-boss a week after sending my resume to make sure it was received. She was too busy to read resumes, figured that I and one other intern who emailed seemed interested, and offered us positions barely an hour later.

    Yeah, my visualization wasn't too interesting either - basically just parsing the input for move commands and building a line mesh out of it. I was kinda proud of how I handled display filtering, though - as in, letting the user choose to display just the infill, just the top/bottom solid layers, the inner/outer perimeters, etc. - since it was done completely on the GPU. Instead of having a mesh per line group, I had each vertex carry the ID of the group it belonged to and passed a list of groups to display to the fragment shader - the group ID would get passed from the vertex shader to the fragment shader, and I'd discard fragments that didn't belong to the groups selected for display. In theory this is extensible to user-defined groups as well, but I didn't implement that.
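The per-vertex group-ID filtering described above could be mocked on the CPU roughly like this (a C++ sketch of the shader-side idea, not the actual implementation; the group names and bitmask encoding are illustrative):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative toolpath groups; real IDs would come from the gcode parser.
enum Group : uint32_t { Infill = 0, Perimeter = 1, TopBottom = 2 };

struct Vertex {
    float x, y, z;
    uint32_t group;  // baked into the vertex data as an extra attribute
};

// Mimics the fragment shader: keep only fragments whose group is in the
// enabled-groups mask (which the real app would upload as a uniform).
std::vector<Vertex> visibleFragments(const std::vector<Vertex>& verts,
                                     uint32_t enabledMask) {
    std::vector<Vertex> out;
    for (const auto& v : verts)
        if (enabledMask & (1u << v.group))  // shader would `discard;` otherwise
            out.push_back(v);
    return out;
}
```

In the real shader the same test runs per fragment, with `discard;` taking the place of the skipped push_back.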
    Apart from that, it's pretty standard - you can look at individual layers and ranges of layers, nothing special really.
    Command streaming was a bit more frustrating, eh, I mean interesting - it was a fair bit of messing around with serial ports until I finally had a reliable way of streaming each command, guaranteeing it was interpreted by the printer correctly, etc. Lots of fiddling involved. In the end it came down to opening the serial port the printer connects to for read/write, writing each command individually as ASCII text, and asynchronously waiting for confirmation messages to come in before sending the next one. In theory you can also interleave status-poll commands and parse the replies to display information like bed/head temperatures and such.
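The send-then-wait-for-ack loop might be sketched like this (a hypothetical `SerialPort` interface; a real one would wrap the OS serial API, and real firmware replies vary by flavour):

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Hypothetical minimal port interface standing in for a real serial handle.
struct SerialPort {
    std::function<void(const std::string&)> writeLine;  // send one command
    std::function<std::string()> readLine;              // block for one reply
};

// Send commands one at a time, waiting for the firmware's "ok" before
// continuing - roughly the ack-based streaming described above.
// Returns the number of commands acknowledged.
std::size_t streamGcode(SerialPort& port,
                        const std::vector<std::string>& commands) {
    std::size_t acked = 0;
    for (const auto& cmd : commands) {
        port.writeLine(cmd);
        // Firmware may emit status lines (e.g. temperature reports) before
        // the ack, so skip anything that isn't the "ok" confirmation.
        std::string reply;
        do {
            reply = port.readLine();
        } while (reply.rfind("ok", 0) != 0);
        ++acked;
    }
    return acked;
}
```

Status polling would slot into the same loop by interleaving poll commands between acknowledged moves.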
    It was all made a bit more interesting by the fact that it was implemented as a webtool - the backend ran in native C++ on whatever machine was connected to the printer, with a tiny web server on the side that hosted a WebGL app handling all the user interaction and rendering. So essentially you can open it up in any WebGL-capable browser on a PC on the local network, and you're ready to upload gcode, look at it, and print it directly from the tool. I wrote the frontend in C++ as well, using Emscripten to transpile it to JavaScript so I could share a lot of code between frontend and backend, though in hindsight it wouldn't have been the worst idea to just write it in TypeScript.
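The shared-code setup could look roughly like this (an illustrative single translation unit compiled for both targets; `isMoveCommand` is a made-up helper, and the Embind registration is just one way to expose it to the browser side):

```cpp
#include <cassert>
#include <string>

// Shared gcode helper used by both frontend and backend (illustrative).
bool isMoveCommand(const std::string& line) {
    return line.rfind("G0", 0) == 0 || line.rfind("G1", 0) == 0;
}

#ifdef __EMSCRIPTEN__
// Browser build: expose the shared helper to JavaScript via Embind.
#include <emscripten/bind.h>
EMSCRIPTEN_BINDINGS(gcode) {
    emscripten::function("isMoveCommand", &isMoveCommand);
}
#else
// Native build: the backend links the same helper directly; no glue needed.
#endif
```

The appeal is that the parsing and geometry code lives once, with only the I/O glue differing per target.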
    In practice of course it's not as shiny as it sounds because it's very picky about the flavour of gcode it supports (mainly Slic3r output, with some hacks to make it work for at least one of the flavours Cura supports) and has tons of tiny issues.

    What you're doing sounds way more impressive tbh, and I feel like you're suffering from some sort of impostor syndrome, which is completely normal. Give yourself some credit - from what you're posting it seems like you're doing a fine job, and you're pulling off some impressive stuff in your personal project as well, so don't beat yourself up too much.

  2. Post #1242
    tschumann's Avatar
    March 2009
    521 Posts
    A little off topic... [rant] At work I suggested a feature that I think would really help our product, one I'd even be willing to prototype in my free time. I have never had an idea shut down so immediately and without consideration - especially one that addresses several key requirements the project leads have been saying we 'need' for a very long time. I think they just like having something to complain about. [/rant]

    How do you guys deal with that type of stuff? They've been asking for this sort of solution for months, and the second they get a viable proposal it's off the table without even being considered.
    I tend to avoid suggesting ideas personally, but I'd say just get on with it and don't say 'I told you so' when they're wrong - they'll probably decide to go with a variation of the idea in a few months' time.

  3. Post #1243
    Gold Member
    Berkin's Avatar
    October 2013
    1,829 Posts
    Presented without comment.


  4. Post #1244
    Gold Member
    paindoc's Avatar
    March 2009
    8,745 Posts
    That's really neat, regardless. Was it some kind of indirect drawing? What you're doing sounds really useful; my first attempts at rendering this stuff (in OpenGL, ages ago) barely refined the primitive single-mesh-per-layer approach: I used a mesh per toolpath type. I might borrow your concept, if you don't mind. Command streaming does sound frustrating, though - that sort of stuff used to really irritate me when I did embedded development in the past. We just use Pronterface, but that hasn't been updated in ages and it has issues with the newest boards like the Duet and Smoothieboard (which run at high baud rates and clock frequencies, afaik). I like your webtool integration as well; I had hoped to get mine working like that eventually. Gcode interpretation is also rather difficult, as each slicer does things a little differently, and then you still have to consider things like the firmware flavour the gcode targets - I cheated in my past visualization efforts and used verbose gcode output so that I could fairly easily infer what a set of commands was doing.
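The parse-moves-into-a-line-mesh step both posts describe might look something like this stripped-down sketch (absolute XYZ moves only; real gcode has modal state, relative moves, units, arcs, and per-flavour quirks):

```cpp
#include <cassert>
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

struct Point { float x, y, z; };

// Very simplified: track absolute X/Y/Z across G0/G1 lines and emit one
// polyline vertex per move; these vertices would then feed the line mesh.
std::vector<Point> parseMoves(const std::string& gcode) {
    std::vector<Point> pts;
    Point cur{0, 0, 0};
    std::istringstream in(gcode);
    std::string line;
    while (std::getline(in, line)) {
        // Only linear move commands contribute geometry.
        if (line.rfind("G0", 0) != 0 && line.rfind("G1", 0) != 0) continue;
        std::istringstream words(line);
        std::string w;
        while (words >> w) {
            if (w.size() < 2) continue;
            float v = std::strtof(w.c_str() + 1, nullptr);
            if (w[0] == 'X') cur.x = v;
            else if (w[0] == 'Y') cur.y = v;
            else if (w[0] == 'Z') cur.z = v;
        }
        pts.push_back(cur);  // axes not mentioned keep their previous value
    }
    return pts;
}
```

Carrying forward unmentioned axes is what makes this modal-ish; everything else (feed rates, extrusion, relative mode) is ignored here.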

    I have a couple of things in my code I'm proud of, especially the removal of that nasty-as-hell Slic3r method that involved "while(1)", "for(;;)", reckless threading and pointer shenanigans, and gotos to finish off the mess. I've tried to make note of the sections I'm particularly proud of, so I can mention them if given the chance in a job interview or something. Not sure how that stuff goes, though. I'd buy at least some impostor syndrome being involved: for an intern to go from no C++ experience to where I'm at is a sign of determination and an ability to learn on the job, if nothing else.

    I also can't get too high and mighty or proud of my work. It stands on the shoulders of giants, and I got much of the algorithmic outline and systemic layout from other projects on GitHub (the same is true for all my projects, tbh). I don't think I'm that great of a programmer for what I've done, but I try to give myself at least a bit of credit so I don't demoralize myself to death haha