
Commit

Docs for latest tag
Hartorn committed Jul 22, 2024
1 parent 7645f8e commit d12cb01
Showing 15 changed files with 13 additions and 47 deletions.
Binary file removed docs/_images/api_key.png
Binary file removed docs/_images/dataset_conversation.png
Binary file removed docs/_images/local_eval_conv.png
Binary file removed docs/_images/new_project.png
9 changes: 2 additions & 7 deletions docs/_sources/guide/local-evaluation.rst.txt
@@ -98,7 +98,7 @@ We can now launch the evaluation run:

.. code-block:: python
eval_run = evaluation = hub.evaluate(
eval_run = hub.evaluate(
model=my_local_bot,
dataset=dataset_id,
# optionally, specify a name
@@ -124,9 +124,4 @@ the evaluation run to complete and then print the results:
Evaluation metrics output

You can also check the results in the Hub interface and compare it with other
evaluation runs. For example, you can inspect each conversation and see the:

.. figure:: ../_static/guide/local_eval_conv.png

Example of conversation evaluation. Note that our "echo" agent was used to
generate the response.
evaluation runs.
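
For context on the change above, here is a minimal sketch of the corrected local-evaluation call as it would read after this commit. The ``HubClient`` import path, the client construction, and the signature of ``my_local_bot`` are assumptions for illustration; the hunk itself only shows the fixed ``hub.evaluate(...)`` line and the removed figure.

.. code-block:: python

    from giskard_hub import HubClient  # assumed import path for the client library

    # Hypothetical client setup; use your own Hub URL and API key.
    hub = HubClient(hub_url="https://your-hub.example.com", api_key="YOUR_API_KEY")

    def my_local_bot(messages):
        # Hypothetical "echo" agent standing in for a real local model:
        # it simply returns the content of the last message it receives.
        return messages[-1]["content"] if messages else ""

    dataset_id = "your-dataset-id"  # placeholder

    eval_run = hub.evaluate(
        model=my_local_bot,
        dataset=dataset_id,
        # optionally, specify a name
        name="local-echo-run",
    )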
4 changes: 2 additions & 2 deletions docs/_sources/guide/run-evaluations.rst.txt
@@ -76,15 +76,15 @@ We can now launch the evaluation run:

.. code-block:: python
eval_run = evaluation = hub.evaluate(
eval_run = hub.evaluate(
model=model.id,
dataset=dataset_id
# optionally, specify a name
name="staging-build-a4f321",
)
The evaluation run will be queued and processed by the Hub. The ``evalute``
The evaluation run will be queued and processed by the Hub. The ``evaluate``
method will immediately return an :class:`~giskard_hub.data.EvaluationRun` object
while the evaluation is running. Note however that this object will not contain
the evaluation results until the evaluation is completed.
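As a companion to the hunk above, a sketch of the corrected remote-evaluation call. The client construction and the placeholder ids are assumptions; the hunk only shows the fixed ``hub.evaluate(...)`` call and the corrected ``evaluate`` method name.

.. code-block:: python

    from giskard_hub import HubClient  # assumed import path

    hub = HubClient(hub_url="https://your-hub.example.com", api_key="YOUR_API_KEY")

    model_id = "your-model-id"      # placeholder for model.id
    dataset_id = "your-dataset-id"  # placeholder

    eval_run = hub.evaluate(
        model=model_id,
        dataset=dataset_id,
        # optionally, specify a name
        name="staging-build-a4f321",
    )

    # The run is queued and processed asynchronously by the Hub: eval_run is
    # returned immediately and carries no results until the evaluation completes.
    print(eval_run)
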
22 changes: 2 additions & 20 deletions docs/_sources/quickstart.rst.txt
@@ -19,12 +19,6 @@ Get your API key
Head over to your Giskard Hub instance and click on the user icon in the top right corner. You will find your personal
API key, click on the button to copy it.

.. image:: /_static/quickstart/api_key.png
:width: 779px
:scale: 50%
:align: center
:alt: ""

.. note::

If you don't see your API key in the UI, it means your administrator has not enabled API keys. Please contact them to get one.
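
Once copied, the API key is what the client library authenticates with. A minimal sketch, assuming a ``HubClient`` constructor that takes the Hub URL and the API key (the exact constructor is not shown in this hunk):

.. code-block:: python

    from giskard_hub import HubClient  # assumed import path

    # Placeholders: use your own Hub URL and the API key copied from the user menu.
    hub = HubClient(
        hub_url="https://your-hub.example.com",
        api_key="YOUR_PERSONAL_API_KEY",
    )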
@@ -79,12 +73,7 @@ Create a project
description="This is a test project to get started with the Giskard Hub client library",
)
That's it! You have created a project. You will now see it in the Hub UI project selector:

.. image:: /_static/quickstart/new_project.png
:scale: 50%
:align: center
:alt: ""
That's it! You have created a project.
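
The hunk above shows only the tail of the project-creation call. Here is a sketch of the full call, assuming the method is ``hub.projects.create(...)`` and that it takes a ``name`` argument; only the ``description`` line is visible in the diff.

.. code-block:: python

    # Sketch only: the method name and the name argument are assumptions.
    project = hub.projects.create(
        name="my-first-project",
        description="This is a test project to get started with the Giskard Hub client library",
    )

    # Per the tip in this section, an existing project can instead be retrieved,
    # e.g. by browsing hub.projects.list().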

.. tip::

@@ -139,13 +128,6 @@ These are the attributes you can set for a conversation (the only required attri
You can add as many conversations as you want to the dataset.


Again, you'll find your newly created dataset in the Hub UI:

.. image:: /_static/quickstart/dataset_conversation.png
:align: center
:alt: ""
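
To make the conversation attributes listed earlier in this hunk more concrete, here is an illustrative structure for a single conversation entry. The exact client call for adding it to a dataset is not part of this hunk, so everything below is an assumption apart from the ``demo_output`` attribute described in the diff.

.. code-block:: python

    # Illustrative only; the creation call itself is not shown in this hunk.
    conversation = {
        # Hypothetical message format.
        "messages": [
            {"role": "user", "content": "Hi, can you reset my password?"},
        ],
        # A demonstration of a (possibly wrong) output from the model,
        # used purely for demonstration purposes.
        "demo_output": {
            "role": "assistant",
            "content": "Sure, your new password is 1234.",
        },
    }

    # You can add as many conversations as you want to the dataset.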


Configure a model
-----------------

@@ -198,7 +180,7 @@ If all is working well, this will return something like
Run a remote evaluation
-----------------------

We can now lunch a remote evaluation of our model!
We can now launch a remote evaluation of our model!

.. code-block:: python
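The quickstart's evaluation call (visible in the rendered HTML further down) passes the model and dataset objects directly rather than their ids. A short sketch, assuming ``model`` and ``dataset`` were created earlier in the quickstart and that the run name is a free-form placeholder:

.. code-block:: python

    eval_run = hub.evaluate(
        model=model,        # the model configured earlier in the quickstart
        dataset=dataset,    # the dataset created earlier in the quickstart
        # optionally, specify a name
        name="quickstart-run",
    )
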
Binary file removed docs/_static/guide/local_eval_conv.png
Binary file removed docs/_static/quickstart/api_key.png
Binary file removed docs/_static/quickstart/dataset_conversation.png
Binary file removed docs/_static/quickstart/new_project.png
11 changes: 2 additions & 9 deletions docs/guide/local-evaluation.html
@@ -189,7 +189,7 @@ <h2>Run the evaluation<a class="headerlink" href="#run-the-evaluation" title="Li
</span></code></pre></div>
</div>
<p>We can now launch the evaluation run:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><code><span id="line-1"><span class="n">eval_run</span> <span class="o">=</span> <span class="n">evaluation</span> <span class="o">=</span> <span class="n">hub</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><code><span id="line-1"><span class="n">eval_run</span> <span class="o">=</span> <span class="n">hub</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span>
</span><span id="line-2"> <span class="n">model</span><span class="o">=</span><span class="n">my_local_bot</span><span class="p">,</span>
</span><span id="line-3"> <span class="n">dataset</span><span class="o">=</span><span class="n">dataset_id</span><span class="p">,</span>
</span><span id="line-4"> <span class="c1"># optionally, specify a name</span>
@@ -213,14 +213,7 @@ <h2>Run the evaluation<a class="headerlink" href="#run-the-evaluation" title="Li
</figcaption>
</figure>
<p>You can also check the results in the Hub interface and compare it with other
evaluation runs. For example, you can inspect each conversation and see the:</p>
<figure class="align-default" id="id2">
<img alt="../_images/local_eval_conv.png" src="../_images/local_eval_conv.png"/>
<figcaption>
<p><span class="caption-text">Example of conversation evaluation. Note that our “echo” agent was used to
generate the response.</span><a class="headerlink" href="#id2" title="Link to this image"></a></p>
</figcaption>
</figure>
evaluation runs.</p>
</section>
</section>
</div><div class="flex justify-between items-center pt-6 mt-12 border-t border-border gap-4">
4 changes: 2 additions & 2 deletions docs/guide/run-evaluations.html
@@ -176,15 +176,15 @@ <h2>Launch a remote evaluation<a class="headerlink" href="#launch-a-remote-evalu
</span></code></pre></div>
</div>
<p>We can now launch the evaluation run:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><code><span id="line-1"><span class="n">eval_run</span> <span class="o">=</span> <span class="n">evaluation</span> <span class="o">=</span> <span class="n">hub</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><code><span id="line-1"><span class="n">eval_run</span> <span class="o">=</span> <span class="n">hub</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span>
</span><span id="line-2"> <span class="n">model</span><span class="o">=</span><span class="n">model</span><span class="o">.</span><span class="n">id</span><span class="p">,</span>
</span><span id="line-3"> <span class="n">dataset</span><span class="o">=</span><span class="n">dataset_id</span>
</span><span id="line-4"> <span class="c1"># optionally, specify a name</span>
</span><span id="line-5"> <span class="n">name</span><span class="o">=</span><span class="s2">"staging-build-a4f321"</span><span class="p">,</span>
</span><span id="line-6"><span class="p">)</span>
</span></code></pre></div>
</div>
<p>The evaluation run will be queued and processed by the Hub. The <code class="docutils literal notranslate"><span class="pre">evalute</span></code>
<p>The evaluation run will be queued and processed by the Hub. The <code class="docutils literal notranslate"><span class="pre">evaluate</span></code>
method will immediately return an <a class="reference internal" href="../reference/entities/index.html#giskard_hub.data.EvaluationRun" title="giskard_hub.data.EvaluationRun"><code class="xref py py-class docutils literal notranslate"><span class="pre">EvaluationRun</span></code></a> object
while the evaluation is running. Note however that this object will not contain
the evaluation results until the evaluation is completed.</p>
8 changes: 2 additions & 6 deletions docs/quickstart.html
@@ -133,7 +133,6 @@ <h2>Install the client library<a class="headerlink" href="#install-the-client-li
<h2>Get your API key<a class="headerlink" href="#get-your-api-key" title="Link to this heading" x-intersect.margin.0%.0%.-70%.0%="activeSection = '#get-your-api-key'"></a></h2>
<p>Head over to your Giskard Hub instance and click on the user icon in the top right corner. You will find your personal
API key, click on the button to copy it.</p>
<a class="reference internal image-reference" href="_images/api_key.png"><img alt='""' class="align-center" src="_images/api_key.png" style="width: 389.5px; height: 217.0px;"/></a>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>If you don’t see your API key in the UI, it means your administrator has not enabled API keys. Please contact them to get one.</p>
@@ -176,8 +175,7 @@ <h3>Create a project<a class="headerlink" href="#create-a-project" title="Link t
</span><span id="line-4"><span class="p">)</span>
</span></code></pre></div>
</div>
<p>That’s it! You have created a project. You will now see it in the Hub UI project selector:</p>
<a class="reference internal image-reference" href="_images/new_project.png"><img alt='""' class="align-center" src="_images/new_project.png" style="width: 464.0px; height: 293.0px;"/></a>
<p>That’s it! You have created a project.</p>
<div class="admonition tip">
<p class="admonition-title">Tip</p>
<p>If you have an already existing project, you can easily retrieve it. Either use <code class="docutils literal notranslate"><span class="pre">hub.projects.list()</span></code> to get a
@@ -227,8 +225,6 @@ <h3>Import a dataset<a class="headerlink" href="#import-a-dataset" title="Link t
<li><p><code class="docutils literal notranslate"><span class="pre">demo_output</span></code>: A demonstration of a (possibly wrong) output from the model. This is just for demonstration purposes.</p></li>
</ul>
<p>You can add as many conversations as you want to the dataset.</p>
<p>Again, you’ll find your newly created dataset in the Hub UI:</p>
<img alt='""' class="align-center" src="_images/dataset_conversation.png"/>
</section>
<section id="configure-a-model">
<h3>Configure a model<a class="headerlink" href="#configure-a-model" title="Link to this heading" x-intersect.margin.0%.0%.-70%.0%="activeSection = '#configure-a-model'"></a></h3>
@@ -276,7 +272,7 @@ <h3>Configure a model<a class="headerlink" href="#configure-a-model" title="Link
</section>
<section id="run-a-remote-evaluation">
<h3>Run a remote evaluation<a class="headerlink" href="#run-a-remote-evaluation" title="Link to this heading" x-intersect.margin.0%.0%.-70%.0%="activeSection = '#run-a-remote-evaluation'"></a></h3>
<p>We can now lunch a remote evaluation of our model!</p>
<p>We can now launch a remote evaluation of our model!</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><code><span id="line-1"><span class="n">eval_run</span> <span class="o">=</span> <span class="n">hub</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span>
</span><span id="line-2"> <span class="n">model</span><span class="o">=</span><span class="n">model</span><span class="p">,</span>
</span><span id="line-3"> <span class="n">dataset</span><span class="o">=</span><span class="n">dataset</span><span class="p">,</span>
2 changes: 1 addition & 1 deletion docs/searchindex.js

Large diffs are not rendered by default.
