Ideas for future of interactive/streaming robot log/report #9

Open
bollwyvl opened this issue Oct 7, 2020 · 6 comments

@bollwyvl

bollwyvl commented Oct 7, 2020

Hi, folks! I'm super impressed by the work here! Thanks so much!

I'm sure it's on the roadmap, but the robot log/report output is a huge selling point I use to get robot adopted.
I don't have a big driver (or chunk of time) to do this work at present, but wanted to capture some of these ideas I've been mulling over.

On irobotframework/robotkernel, in addition to some "live" outputs with widgets/display updating, we ship the generated HTML log/report as embedded HTML in embedded javascript embedded in HTML. This approach probably doesn't have long-term legs, as the browsers seem to be cracking down on these kinds of tricks.

For some time, I've been meaning to formalize a mimerenderer for robot XML, skipping the HTML step entirely. This is complicated by things like screenshots and other attachments, which would need further special treatment, but skipping the HTML step is often what you want anyway.

The report would probably just be rendered as an HTML table, but perhaps eventually a lumino DataGrid, with a data source driven by the XML, could work.

The log would presumably be heavily inspired by the existing jQuery-based tree view. DataGrid could be an option if it better supported trees, e.g. a CellRenderer that knew how many levels of grid lines to show, but the fact remains that log messages often contain generated HTML, which would complicate matters.

Beyond that, another interesting approach: doing a robot dry-run first would fill out the "shape" of the suite execution. With that, it could draw Gantt chart views, like other test tools provide, or even something based on TimelineJS. Watching little bars fill up while screenshots appear would be very therapeutic vs ... FAILED.
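
For the dry-run idea, a minimal sketch of what the kernel side could look like, assuming robot is importable; the suite name and paths are illustrative, and robot.run accepts most command-line options (like --dryrun) as keyword arguments:

```python
# Hypothetical sketch: discover the "shape" of a suite with a dry run,
# so a timeline/Gantt view could be drawn before the real execution.
import tempfile
from pathlib import Path

from robot import run
from robot.api import ExecutionResult

out_dir = Path(tempfile.mkdtemp(prefix="robot-dryrun-"))

# dryrun=True parses and walks the suite without executing keywords
run("MySuite.robot", dryrun=True, outputdir=str(out_dir),
    log="NONE", report="NONE")

shape = ExecutionResult(str(out_dir / "output.xml"))
for test in shape.suite.tests:  # recurse into shape.suite.suites for nested suites
    print(test.name)
```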

Another feature to be gained by doing more client-side processing of the canonical XML would be potential LSP integration, e.g. I see:

  • three tests failed
  • open the first test
  • see it failed on keyword Foo a bar ${baz}
  • click on the Foo a bar, select Go to Definition
  • see the keyword definition (either in a .py or .robot file)

Looking forward to a bright robotic future!

@martinRenou
Member

Thanks for starting this discussion @bollwyvl

I was kind of thinking of doing the same as in robotkernel for now, in order to get a smooth transition from robotkernel to xeus-robot.

At the same time, when going through the robotkernel code I thought we could do something better (not having a button that opens an HTML file). I am not super familiar with robotframework (yet?) so I am not super opinionated on any approach.

Maybe we should include some robocorp people in the discussion (@aikarjal @xylix @mikahanninen @osrjv)

@osrjv
Contributor

osrjv commented Nov 27, 2020

I've been a "long-time" proponent of having some sort of inline viewer for RF executions inside notebooks. I think it goes against the core ideas of notebooks to have the results in a separate openable window.

Some opinions/notes:

  1. I personally don't think there is much value in having parity with robotkernel, unless it can be done really quickly as a stopgap until something better is implemented. The UX with it is not too great, and it has a lot of hacky solutions for specific things.
  2. RF has quite good internal APIs for traversing the results and exporting various formats and statistics. Converting to JSON, for instance, is quite trivial (see the sketch after this list). Making your own XML parser is probably not worth doing client-side.
  3. ...and even better than post-run XML parsing, you can attach a listener to the execution that streams results in real time. Earlier in the year I made a small PoC that shows real-time run results in a browser.
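
For reference, a minimal sketch of that JSON conversion (point 2), assuming the kernel can import robot; `suite_to_dict` is just an illustrative helper, not part of the RF API:

```python
# Sketch: walk a parsed output.xml into a JSON-friendly structure.
import json

from robot.api import ExecutionResult

def suite_to_dict(suite):
    """Recursively convert a result suite into plain dicts and lists."""
    return {
        "name": suite.name,
        "status": suite.status,
        "suites": [suite_to_dict(s) for s in suite.suites],
        "tests": [
            {"name": t.name, "status": t.status, "message": t.message}
            for t in suite.tests
        ],
    }

result = ExecutionResult("output.xml")
print(json.dumps(suite_to_dict(result.suite), indent=2))
```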

@martinRenou
Member

...and even better than post-run XML parsing, you can attach a listener to the execution that streams results in real time. Earlier in the year I made a small PoC that shows real-time run results in a browser.

I'd be interested in looking into this, if you think that's the right approach

@bollwyvl
Author

stream results in real time

Oh, of course, that's what I want to get to... but it seems in this case the default output (e.g. robot XML) has literally millions of examples in the wild. I propose that whatever streaming model is created can also be populated "offline" from the XML.

Making your own XML parser

Luckily, browsers are quite good at parsing XML! I know it's no longer "en vogue," but client-side XML and XPath with the standard APIs are... acceptable.

lot of hacky solutions for specific things.

Sure, but it's a working tool; we had to build it first, a couple of times. And man, that DOM picking 👨‍🍳👌.


So anyhow: from an implementation perspective, I'm thinking:

  • first build a basic MimeRenderer viewer of a valid output.xml
  • handle the "last render" of a robot cell as XML, and that will just work (see the kernel-side sketch below)
  • refactor the data model out, handle incomplete/invalid data
  • refactor to work as a jupyter-widget
  • maybe do custom comms (but probably keep it a widget)

Then you can do whatever you want, but you have a useful viewing tool in its own right.
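
For the "last render as XML" step, roughly what the kernel side could look like: a sketch assuming IPython-style display machinery, where application/x-robot+xml is a made-up placeholder MIME type that the front-end mime renderer would have to agree on.

```python
# Sketch: publish output.xml under a (placeholder) custom MIME type so a
# front-end mime renderer extension can pick it up and draw the log/report.
from pathlib import Path

from IPython.display import display

ROBOT_MIME = "application/x-robot+xml"  # hypothetical, not a registered type

def display_robot_result(output_xml="output.xml"):
    xml = Path(output_xml).read_text(encoding="utf-8")
    # raw=True sends the mime bundle as-is; text/plain is the fallback
    display({ROBOT_MIME: xml, "text/plain": f"robot result: {output_xml}"},
            raw=True)

display_robot_result()
```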

As to what actual pixels get drawn? Who knows yet!

  • some timeline (or audio/video timeline control)
    • should be able to show screenshots/other rich assets
  • some network-style diagrams
    • jupyterlab-drawio and ipyelk are two rendering options that can operate at basically interactive speed (graphs with >1k nodes can get above a comfortable 300ms re-render), but both support "hiding" complexity with nesting, etc.
  • linking to source
    • being able to get from these visualizations/log views, back to their source code
      • obviously this could be a jupyterlab-lsp thing, but if there's a way to make it work with a simpler, href-based approach, that's better.
  • some kind of tree grid
  • (faceted) search
    • tags are magic
  • some knowledge of history
    • compare two runs, etc.

A follow-on that would help folks consuming lots of robot reports from CI would be finding robot reports inside an archive, e.g. a FAILED robot windows.tgz or .zip.

@osrjv
Contributor

osrjv commented Dec 2, 2020

Don't get me wrong, parsing output.xml is a valid option and there are quite a few reference implementations to look at. And like you said, it has the benefit of being an "offline" viewer as well.

What I was trying to get at is the fact that RF has a rich internal tree-like model for representing results, and that's what gets exported as XML to disk. If the kernel already has that execution on hand, it might not be the best option to do the extra round trip of writing it out and then parsing it back. In addition, when/if we get to the real-time part, it would most likely be handling the same (partial) result models as that's what the listener API exposes.
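
To make the listener idea concrete, a minimal sketch using listener API version 3, which hands the listener the same running/result model objects; `emit` here is a placeholder for whatever transport (comm, websocket, queue) would actually carry the events.

```python
# Sketch: a listener (API version 3) that streams test events as JSON.
# `emit` is a stand-in for a real transport (comm, websocket, queue, ...).
import json

class StreamingListener:
    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self, emit=print):
        self.emit = emit

    def start_test(self, data, result):
        self.emit(json.dumps({"event": "start_test", "name": data.name}))

    def end_test(self, data, result):
        self.emit(json.dumps({
            "event": "end_test",
            "name": data.name,
            "status": result.status,
            "message": result.message,
        }))

# Usage (illustrative): robot.run("MySuite.robot", listener=StreamingListener())
```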

Whatever the approach, here's an old draft I made for a results widget:
[screenshot: draft of a results widget]

It probably wouldn't be exactly like that, but I think something close to the collapsible tree structure in the default log would be a good initial approach.

@bollwyvl
Author

bollwyvl commented Dec 2, 2020

old draft I made for a results widget:

Yeah, that's a great starting point. Having that actual run button suggests widgets sooner rather than later.

Some thoughts:

  • part of what's marvelous about the classic log html is all the tags, etc. that get generated
    • but putting it at the top makes things hard, as it pushes the "good stuff" down
    • maybe a collapsible "metadata" sidebar?

Over on robotframework-jupyterlibrary, I formalized some experiments with display-based stuff for a lightweight magic (e.g. it makes a temp folder, runs in it, and displays the outputs as links). It's not playing with the robot API directly yet, nor does it do anything other than just run, but it was interesting to put some of those things down in code and see how they feel.
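
Roughly, that magic boils down to something like this sketch (the paths and the suite name are illustrative; robot.run and IPython's FileLink do the heavy lifting):

```python
# Sketch: run a suite in a temp folder and show its outputs as links.
import tempfile
from pathlib import Path

import robot
from IPython.display import FileLink, display

def run_and_link(suite="MySuite.robot"):
    out_dir = Path(tempfile.mkdtemp(prefix="robot-"))
    rc = robot.run(suite, outputdir=str(out_dir))
    for name in ("log.html", "report.html", "output.xml"):
        path = out_dir / name
        if path.exists():
            display(FileLink(str(path)))
    return rc  # robot's return code: 0 means all tests passed

run_and_link()
```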

rich internal tree-like model

Yeah, it does... but it can and has changed, no? I guess it is just something to take into account as a maintenance concern, but then the XML format has also changed.

Anyhow, I think we violently agree:

|--------|     |---------|     |----------|
|        |     |         |     |          |        |--------|
|  view  |-----|  model  |-----|  file    |--------| server |
|        |     |         |---+ |          |        |--------|
|--------|     |---------|-+ | |----------|
                           | |
                           | | |---------|         |--------|
                           | +-| display |---------| kernel |
                           |   |---------|         |--------|
                           |                         |
                           |   |--------|       |--------|
                           |---| widget |-------| widget |
                               | view   |       | model  |
                               |--------|       |--------|
