Commit add5b4f: multiple corrections

oualib committed Oct 24, 2024
1 parent e7bf52e commit add5b4f

Showing 24 changed files with 389 additions and 546 deletions.
4 changes: 2 additions & 2 deletions docs/source/contribution_guidelines_code_auto_doc_example.rst
@@ -315,13 +315,13 @@ And to reference a module named vDataFrame:
.. seealso::
:py:mod:`vDataFrame`
:py:func:`~verticapy.vDataFrame`
**Output:**

.. seealso::

:py:mod:`vDataFrame`
:py:func:`~verticapy.vDataFrame`

Now you can go through the below examples to understand the usage in detail. From the examples you will note a few things:

2 changes: 1 addition & 1 deletion docs/source/examples_business_booking.rst
@@ -279,7 +279,7 @@ It looks like there are two main predictors: 'mode_hotel_cluster_count' and 'tri
- look for a shorter trip duration.
- not click as much (spend more time at the same web page).

-Let's add our prediction to the :py:mod:`vDataFrame`.
+Let's add our prediction to the :py:func:`~verticapy.vDataFrame`.

.. code-block:: python
2 changes: 1 addition & 1 deletion docs/source/examples_business_churn.rst
@@ -203,7 +203,7 @@ ________
Machine Learning
-----------------

-:py:func:`~verticapy.machine_learning.vertica.LogisticRegression` is a very powerful algorithm and we can use it to detect churns. Let's split our :py:mod:`vDataFrame` into training and testing set to evaluate our model.
+:py:func:`~verticapy.machine_learning.vertica.LogisticRegression` is a very powerful algorithm and we can use it to detect churns. Let's split our :py:func:`~verticapy.vDataFrame` into training and testing set to evaluate our model.

.. ipython:: python
4 changes: 2 additions & 2 deletions docs/source/examples_business_football.rst
@@ -979,7 +979,7 @@ To compute a ``k-means`` model, we need to find a value for 'k'. Let's draw an :
model_kmeans.fit("football_clustering", predictors)
model_kmeans.clusters_
-Let's add the prediction to the :py:mod:`vDataFrame`.
+Let's add the prediction to the :py:func:`~verticapy.vDataFrame`.

.. code-block:: python
@@ -1983,7 +1983,7 @@ Looking at the importance of each feature, it seems like direct confrontations a
.. raw:: html
:file: /project/data/VerticaPy/docs/figures/examples_football_features_importance.html

-Let's add the predictions to the :py:mod:`vDataFrame`.
+Let's add the predictions to the :py:func:`~verticapy.vDataFrame`.

Draws are pretty rare, so we'll only consider them if a tie was very likely to occur.

2 changes: 1 addition & 1 deletion docs/source/examples_business_insurance.rst
@@ -38,7 +38,7 @@ You can skip the below cell if you already have an established connection.
vp.connect("VerticaDSN")
-Let's create a new schema and assign the data to a :py:mod:`vDataFrame` object.
+Let's create a new schema and assign the data to a :py:func:`~verticapy.vDataFrame` object.

.. code-block:: ipython
8 changes: 4 additions & 4 deletions docs/source/examples_business_movies.rst
@@ -43,7 +43,7 @@ You can skip the below cell if you already have an established connection.
vp.connect("VerticaDSN")
-Let's create a new schema and assign the data to a :py:mod:`vDataFrame` object.
+Let's create a new schema and assign the data to a :py:func:`~verticapy.vDataFrame` object.

.. code-block:: ipython
@@ -349,7 +349,7 @@ Let's join our notoriety metrics for actors and directors with the main dataset.
],
)
-As we did many operation, it can be nice to save the :py:mod:`vDataFrame` as a table in the Vertica database.
+As we did many operation, it can be nice to save the :py:func:`~verticapy.vDataFrame` as a table in the Vertica database.

.. code-block:: python
@@ -754,7 +754,7 @@ Let's create a model to evaluate an unbiased score for each different movie.
.. raw:: html
:file: /project/data/VerticaPy/docs/figures/examples_movies_filmtv_complete_model_report.html

-The model is good. Let's add it in our :py:mod:`vDataFrame`.
+The model is good. Let's add it in our :py:func:`~verticapy.vDataFrame`.

.. code-block:: python
@@ -926,7 +926,7 @@ By looking at the elbow curve, we can choose 15 clusters. Let's create a ``k-mea
model_kmeans.fit(filmtv_movies_complete, predictors)
model_kmeans.clusters_
-Let's add the clusters in the :py:mod:`vDataFrame`.
+Let's add the clusters in the :py:func:`~verticapy.vDataFrame`.


.. code-block:: python
2 changes: 1 addition & 1 deletion docs/source/examples_business_smart_meters.rst
@@ -44,7 +44,7 @@ You can skip the below cell if you already have an established connection.
vp.connect("VerticaDSN")
-Create the :py:mod:`vDataFrame` of the datasets:
+Create the :py:func:`~verticapy.vDataFrame` of the datasets:

.. code-block:: python
2 changes: 1 addition & 1 deletion docs/source/examples_business_spam.rst
@@ -138,7 +138,7 @@ Let's compute some statistics using the length of the message.
.. raw:: html
:file: /project/data/VerticaPy/docs/figures/examples_spam_table_clean_2.html

-Let's add the most occurent words in our :py:mod:`vDataFrame` and compute the correlation vector.
+Let's add the most occurent words in our :py:func:`~verticapy.vDataFrame` and compute the correlation vector.

.. code-block:: python
2 changes: 1 addition & 1 deletion docs/source/examples_business_spotify.rst
@@ -88,7 +88,7 @@ Create a new schema, "spotify".
Data Loading
-------------

-Load the datasets into the :py:mod:`vDataFrame` with :py:func:`~verticapy.read_csv` and then view them with :py:func:`~verticapy.vDataFrame.head`.
+Load the datasets into the :py:func:`~verticapy.vDataFrame` with :py:func:`~verticapy.read_csv` and then view them with :py:func:`~verticapy.vDataFrame.head`.

.. code-block::
4 changes: 2 additions & 2 deletions docs/source/examples_learn_iris.rst
@@ -221,7 +221,7 @@ Let's plot the model to see the perfect separation.
.. raw:: html
:file: /project/data/VerticaPy/docs/figures/examples_model_plot.html

-We can add this probability to the :py:mod:`vDataFrame`.
+We can add this probability to the :py:func:`~verticapy.vDataFrame`.

.. code-block:: python
@@ -275,7 +275,7 @@ Let's create a model to classify the Iris virginica.
.. raw:: html
:file: /project/data/VerticaPy/docs/figures/examples_iris_table_ml_cv_2.html

-We have another excellent model. Let's add it to the :py:mod:`vDataFrame`.
+We have another excellent model. Let's add it to the :py:func:`~verticapy.vDataFrame`.

.. code-block:: python
2 changes: 1 addition & 1 deletion docs/source/examples_learn_pokemon.rst
@@ -250,7 +250,7 @@ In terms of missing values, our only concern is the Pokemon's second type (Type_
.. raw:: html
:file: /project/data/VerticaPy/docs/figures/examples_pokemon_table_clean_2.html

-Let's use the current_relation method to see how our data preparation so far on the :py:mod:`vDataFrame` generates SQL code.
+Let's use the current_relation method to see how our data preparation so far on the :py:func:`~verticapy.vDataFrame` generates SQL code.

.. ipython:: python
4 changes: 2 additions & 2 deletions docs/source/examples_learn_titanic.rst
@@ -302,7 +302,7 @@ Survival correlates strongly with whether or not a passenger has a lifeboat (the
- Passengers with a lifeboat
- Passengers without a lifeboat

-Before we move on: we did a lot of work to clean up this data, but we haven't saved anything to our Vertica database! Let's look at the modifications we've made to the :py:mod:`vDataFrame`.
+Before we move on: we did a lot of work to clean up this data, but we haven't saved anything to our Vertica database! Let's look at the modifications we've made to the :py:func:`~verticapy.vDataFrame`.

.. ipython:: python
@@ -322,7 +322,7 @@ VerticaPy dynamically generates SQL code whenever you make modifications to your
vp.set_option("sql_on", False)
print(titanic.info())
-Let's move on to modeling our data. Save the :py:mod:`vDataFrame` to your Vertica database.
+Let's move on to modeling our data. Save the :py:func:`~verticapy.vDataFrame` to your Vertica database.

.. ipython:: python
:okwarning:
4 changes: 2 additions & 2 deletions docs/source/examples_understand_africa_education.rst
@@ -260,7 +260,7 @@ Eight seems to be a suitable number of clusters. Let's compute a ``k-means`` mod
model = KMeans(n_cluster = 8)
model.fit(africa, X = ["lon", "lat"])
-We can add the prediction to the :py:mod:`vDataFrame` and draw the scatter map.
+We can add the prediction to the :py:func:`~verticapy.vDataFrame` and draw the scatter map.

.. code-block:: python
@@ -500,7 +500,7 @@ Let's look at the feature importance for each model.

Feature importance between the math score and the reading score is almost identical.

-We can add these predictions to the main :py:mod:`vDataFrame`.
+We can add these predictions to the main :py:func:`~verticapy.vDataFrame`.

.. code-block:: python
6 changes: 3 additions & 3 deletions docs/source/examples_understand_covid19.rst
@@ -283,14 +283,14 @@ Because of the upward monotonic trend, we can also look at the correlation betwe
covid19["elapsed_days"] = covid19["date"] - fun.min(covid19["date"])._over(by = [covid19["state"]])
-We can generate the SQL code of the :py:mod:`vDataFrame`
-to see what happens behind the scenes when we modify our data from within the :py:mod:`vDataFrame`.
+We can generate the SQL code of the :py:func:`~verticapy.vDataFrame`
+to see what happens behind the scenes when we modify our data from within the :py:func:`~verticapy.vDataFrame`.

.. ipython:: python
print(covid19.current_relation())
-The :py:mod:`vDataFrame` memorizes all of our operations on the data to dynamically generate the correct SQL statement and passes computation and aggregation to Vertica.
+The :py:func:`~verticapy.vDataFrame` memorizes all of our operations on the data to dynamically generate the correct SQL statement and passes computation and aggregation to Vertica.

Let's see the correlation between the number of deaths and the other variables.

2 changes: 1 addition & 1 deletion docs/source/user_guide_data_ingestion.rst
@@ -148,7 +148,7 @@ In the following example, we will use :py:func:`~verticapy.read_csv` to ingest a
titanic = load_titanic()
-To convert a subset of the dataset to a CSV file, select the desired rows in the dataset and use the :py:func:`~verticapy.to_csv` :py:mod:`vDataFrame` method:
+To convert a subset of the dataset to a CSV file, select the desired rows in the dataset and use the :py:func:`~verticapy.to_csv` :py:func:`~verticapy.vDataFrame` method:

.. ipython:: python
2 changes: 1 addition & 1 deletion docs/source/user_guide_data_preparation_decomposition.rst
@@ -89,7 +89,7 @@ Notice that the predictors are now independant and combined together and they ha
model.explained_variance_
-Most of the information is in the first two components with more than 97.7% of explained variance. We can export this result to a :py:mod:`vDataFrame`.
+Most of the information is in the first two components with more than 97.7% of explained variance. We can export this result to a :py:func:`~verticapy.vDataFrame`.

.. code-block::
@@ -10,7 +10,7 @@ Features engineering makes use of many techniques - too many to go over in this
Customized Features Engineering
--------------------------------

-To build a customized feature, you can use the :py:func:`~verticapy.vDataFrame.eval` method of the :py:mod:`vDataFrame`. Let's look at an example with the well-known 'Titanic' dataset.
+To build a customized feature, you can use the :py:func:`~verticapy.vDataFrame.eval` method of the :py:func:`~verticapy.vDataFrame`. Let's look at an example with the well-known 'Titanic' dataset.

.. code-block:: python
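Taken together, these diffs apply one consistent substitution across the docs. A minimal RST sketch of the pattern (role targets taken from the diffs above; whether `vDataFrame` resolves as a module or a Python object depends on how the project registers it with Sphinx):

```rst
.. Before: a module role; this fails to resolve if ``vDataFrame``
.. is registered as a class or function rather than a module.
:py:mod:`vDataFrame`

.. After: an object role with the full dotted path. The leading ``~``
.. renders only the final segment, so the link text stays "vDataFrame".
:py:func:`~verticapy.vDataFrame`
```

Using the fully qualified `~verticapy.vDataFrame` target also keeps the reference unambiguous when several documents define similarly named objects.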