Ideas for future of interactive/streaming robot log/report #9
Thanks for starting this discussion @bollwyvl. I was thinking of doing the same as in robotkernel for now, in order to get a smooth transition from robotkernel to xeus-robot. At the same time, while going through the robotkernel code I thought we could do something better (not just a button that opens an HTML file). I am not super familiar with robotframework (yet?) so I am not strongly opinionated on any approach. Maybe we should include some Robocorp people in the discussion (@aikarjal @xylix @mikahanninen @osrjv)
I've been a "long-time" proponent of having some sort of inline viewer for RF executions inside notebooks. I think it goes against the core ideas of notebooks to have the results in a separate openable window. Some opinions/notes:
I'd be interested in looking into this, if you think that's the right approach
oh, of course, that's what i want to get to... but it seems in this case, the default output (e.g. robot
Luckily, browsers are quite good at parsing XML! I know it's no longer "en vogue," but clientside XML and XPath with the standard API is... acceptable.
Sure, but it's a working tool, we had to build it first, a couple times. And man, that DOM picking 👨🍳👌. So anyhow: from an implementation perspective, i'm thinking
Then you can do whatever you want, but have a useful viewing tool, in its own right. As to what actual pixels get drawn? Who knows yet!
A follow-on that would help folk consuming lots of robot reports from CI would be finding robot reports in an archive e.g.
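The archive-scanning idea could look something like the sketch below: walk a CI artifact zip and sniff for files whose root element looks like robot output. This is a hypothetical helper (the name and the `<robot` sniff heuristic are assumptions, not an existing API).

```python
import io
import zipfile

def find_robot_outputs(zip_bytes):
    """Return paths inside a CI artifact zip that look like robot output XML."""
    found = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if name.endswith(".xml"):
                # Cheap sniff: robot's output.xml has a <robot ...> root element.
                head = zf.read(name)[:200]
                if b"<robot" in head:
                    found.append(name)
    return found

# Build a toy archive in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("ci/output.xml", "<robot generator='demo'></robot>")
    zf.writestr("ci/pom.xml", "<project></project>")

print(find_robot_outputs(buf.getvalue()))  # → ['ci/output.xml']
```

A real version would probably stream large archives and also recognize rebot-merged outputs, but the sniffing approach stays the same.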
Yeah, that's a great starting point. Having that actual run button suggests widgets sooner, rather than later. Some thoughts:
Over on robotframework-jupyterlibrary, i formalized some experiments with display-based stuff for a lightweight magic (e.g. makes a tempfolder, runs in it, displays the output as links). Not playing with the robot api directly, yet, nor does it do anything other than just run it, but it was interesting to put some of those things down in code and see how they feel.
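The tempfolder-and-run shape of that magic can be sketched as below. The only real API assumed is `robot.run` (robotframework's programmatic entry point); the helper name and the injectable `runner` parameter are illustrative conveniences, the latter so the sketch can be exercised without robotframework installed.

```python
import tempfile
from pathlib import Path

def run_in_tempdir(suite_text, runner=None):
    """Write a suite into a fresh temp folder, run robot there, and return
    the return code plus the paths of the artifacts robot writes by default.

    `runner` defaults to robot.run (imported lazily, so this sketch stays
    importable without robotframework); tests can inject a stub instead.
    """
    workdir = Path(tempfile.mkdtemp(prefix="robot-"))
    suite = workdir / "suite.robot"
    suite.write_text(suite_text)
    if runner is None:
        from robot import run as runner  # robotframework's programmatic API
    rc = runner(str(suite), outputdir=str(workdir))
    artifacts = {name: workdir / name
                 for name in ("output.xml", "log.html", "report.html")}
    return rc, artifacts
```

With the real runner, `rc` is robot's return code and `artifacts` points at the generated files, ready to be displayed as links (or parsed for richer output) in a notebook.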
Yeah, it does... but can and has changed, no? I guess it is just to be taken into account as a maintenance concern, but then the XML format has also changed. Anyhow, i think we violently agree:
Hi, folks! I'm super impressed by the work here! Thanks so much!
I'm sure it's on the roadmap, but the robot log/report output is a huge selling point I use to get robot adopted.
I don't have a big driver (or chunk of time) to do this work at present, but wanted to capture some of these ideas I've been mulling over.
On irobotframework/robotkernel, in addition to some "live" outputs with widgets/display updating, we ship the generated HTML log/report as embedded HTML in embedded javascript embedded in HTML. This approach probably doesn't have long-term legs, as the browsers seem to be cracking down on these kinds of tricks.
For some time, I've been meaning to formalize a mimerenderer for robot XML, skipping the HTML step entirely. This is complicated by things like screenshots and other attachments, which would need further special treatment, but often this is what you want.
The report would probably just be rendered as an HTML table, but perhaps eventually a lumino `DataGrid`, with a datasource driven by the XML, could work.

The log would presumably be heavily inspired by the existing jquery-based tree view. `DataGrid` could be an option, if it better supported trees, e.g. a `CellRenderer` that knew how many levels of grid lines to show, but the fact remains that HTML is often created there, which would complicate matters.

Beyond that, another interesting approach: doing a robot dry-run first would fill out the "shape" of the suite execution. With that, it could draw Gantt chart views, like other test tools provide, or even something based on TimelineJS. Watching little bars fill up while screenshots appear would be very therapeutic vs `... FAILED`.

Another feature to be gained by doing more client-side processing of the canonical XML would be potential LSP integration, e.g. I see `Foo a bar ${baz}`, select `Foo a bar`, Go to Definition (in the `.py` or `.robot` file).

Looking forward to a bright robotic future!
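The "report as an HTML table" idea above is mostly a transform from the canonical XML to markup, which a mimerenderer (in TypeScript, in a real extension) would perform client-side. Here is a minimal Python sketch of that transform; as before, the XML structure is an assumed approximation of robot's output schema.

```python
import xml.etree.ElementTree as ET
from html import escape

# Assumed, trimmed-down stand-in for robot's output.xml structure.
SAMPLE_XML = """\
<robot>
  <suite name="Checkout">
    <test name="Add To Cart"><status status="PASS"/></test>
    <test name="Pay"><status status="FAIL"/></test>
  </suite>
</robot>
"""

def report_table(xml_text):
    """Render test names and statuses from the XML as a plain HTML table."""
    root = ET.fromstring(xml_text)
    rows = [
        "<tr><td>{}</td><td>{}</td></tr>".format(
            escape(test.get("name")),
            escape(test.find("status").get("status")),
        )
        for test in root.iter("test")
    ]
    return ("<table><tr><th>Test</th><th>Status</th></tr>"
            + "".join(rows) + "</table>")

print(report_table(SAMPLE_XML))
```

Swapping the string-building for a lumino `DataGrid` datasource would be the same traversal feeding cells instead of `<td>`s.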