From 94df5edd3485b0dd325feaa753aca19cfb1714e6 Mon Sep 17 00:00:00 2001
From: Mamta Wardhani
Date: Mon, 16 Dec 2024 14:16:20 +0530
Subject: [PATCH 01/40] [Term Entry] PyTorch Tensor Operations .index_reduce() (#5818)

* [Term Entry] PyTorch Tensor Operations .index_reduce()

* Update content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md

Co-authored-by: Pragati Verma

* Update content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md

Co-authored-by: Pragati Verma

* Update content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md

Co-authored-by: Pragati Verma

* Update content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md

Co-authored-by: Pragati Verma

* Update content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md

Co-authored-by: Pragati Verma

* Update content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md

Co-authored-by: Pragati Verma

* Update index-reduce.md

* Update index-reduce.md

---------

Co-authored-by: Pragati Verma
---
 .../terms/index-reduce/index-reduce.md | 64 +++++++++++++++++++
 1 file changed, 64 insertions(+)
 create mode 100644 content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md

diff --git a/content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md b/content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md
new file mode 100644
index 00000000000..a10127a73e8
--- /dev/null
+++ b/content/pytorch/concepts/tensor-operations/terms/index-reduce/index-reduce.md
@@ -0,0 +1,64 @@
---
Title: '.index_reduce_()'
Description: 'Reduces a tensor along a specified dimension using indices to map input elements to positions in the output tensor, applying reduction operations such as product, mean, maximum, or minimum.'
Subjects:
  - 'Computer Science'
  - 'Data Science'
Tags:
  - 'Data Structures'
  - 'Functions'
  - 'Index'
  - 'Values'
CatalogContent:
  - 'intro-to-py-torch-and-neural-networks'
  - 'paths/computer-science'
---

In PyTorch, **`.index_reduce_()`** performs an in-place reduction operation (such as product, mean, maximum, or minimum) on a [tensor](https://www.codecademy.com/resources/docs/pytorch/tensors) along a specified dimension. It uses an index tensor to map input elements to positions in the output tensor, effectively aggregating values with the same index.

## Syntax

```pseudo
Tensor.index_reduce_(dim, index, source, reduce, *, include_self=True)
```

- `dim`: The axis of the tensor along which the reduction is performed.
- `index`: A 1D tensor containing indices that map the elements in the `source` tensor to specific positions in the current tensor.
- `source`: The tensor whose values are reduced and accumulated into the current tensor at the positions specified by `index`.
- `reduce`: Specifies the reduction operation to apply. Possible values include:
  - `'prod'`: Product of elements with the same index.
  - `'mean'`: Mean of elements with the same index.
  - `'amax'`: Maximum of elements with the same index.
  - `'amin'`: Minimum of elements with the same index.
- `include_self` (Optional): Determines whether the existing values in the current tensor are included in the reduction operation.
  - If `True` (the default), the values already present in the tensor are included in the reduction.
  - If `False`, only the `source` tensor values contribute to the reduction, as shown in the sketch below.
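As a quick illustration of how `include_self` changes the result, here is a minimal sketch (the target, source, and index values below are arbitrary demonstration values):

```py
import torch

# Target tensor with non-zero starting values
target = torch.full((2,), 10.0)

# Source values and the indices that map them to positions in the target
source = torch.tensor([1.0, 2.0, 3.0, 4.0])
index = torch.tensor([0, 1, 0, 1], dtype=torch.long)

# include_self=True (default): the existing target values join the reduction
with_self = target.clone().index_reduce_(0, index, source, 'mean', include_self=True)

# include_self=False: only the source values are averaged
without_self = target.clone().index_reduce_(0, index, source, 'mean', include_self=False)

print(with_self)     # mean over {10, 1, 3} and {10, 2, 4}
print(without_self)  # mean over {1, 3} and {2, 4}
```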
+ +## Example + +The following example demonstrates the usage of the `.index_reduce()` method: + +```py +import torch + +# Define the target tensor +target = torch.zeros(2) + +# Source tensor +source = torch.tensor([1.0, 2.0, 3.0, 4.0]) + +# Indices mapping source to target +index = torch.tensor([0, 1, 0, 1], dtype=torch.long) # Ensure index tensor is of type 'long' + +# Perform in-place reduction using 'mean' along the 0th dimension (rows) +target.index_reduce_(dim=0, index=index, source=source, reduce='mean') +print(target) +``` + +The above code produces the following output: + +```shell +tensor([1.3333, 2.0000]) +``` + +This code reduces the `source` tensor along dimension 0 by averaging (`'mean'` reduce) the values mapped to the same indices in the `index` tensor, updating the `target` tensor in place. From ab6246a9036fe80ecad10793da5e04f2b0b552ba Mon Sep 17 00:00:00 2001 From: Mamta Wardhani Date: Mon, 16 Dec 2024 14:18:49 +0530 Subject: [PATCH 02/40] [Concept Entry] Sklearn multioutput-regression (#5820) * [Concept Entry] Sklearn multioutput-regression * Update content/sklearn/concepts/multioutput-regression/multioutput-regression.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/multioutput-regression/multioutput-regression.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/multioutput-regression/multioutput-regression.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/multioutput-regression/multioutput-regression.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/multioutput-regression/multioutput-regression.md Co-authored-by: Pragati Verma --------- --- .../multioutput-regression.md | 166 ++++++++++++++++++ 1 file changed, 166 insertions(+) create mode 100644 content/sklearn/concepts/multioutput-regression/multioutput-regression.md diff --git a/content/sklearn/concepts/multioutput-regression/multioutput-regression.md b/content/sklearn/concepts/multioutput-regression/multioutput-regression.md new file mode 100644 index 00000000000..e1764d5b1c8 --- /dev/null +++ b/content/sklearn/concepts/multioutput-regression/multioutput-regression.md @@ -0,0 +1,166 @@ +--- +Title: 'Multioutput Regression' +Description: 'Multioutput regression is a type of regression task where the model predicts multiple dependent variables (outputs) simultaneously for each input.' +Subjects: + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Classification' + - 'Multitask Learning' + - 'MultiTaskLasso' + - 'Scikit-learn' +CatalogContent: + - 'learn-python-3' + - 'paths/data-science' +--- + +In [sklearn](https://www.codecademy.com/resources/docs/sklearn), **Multioutput Regression** is a type of regression task where the model predicts multiple dependent variables (outputs) simultaneously for each input, allowing for the modeling of relationships between multiple target variables and the features, which can improve prediction accuracy when outputs are correlated. + +This can be achieved using the `MultiOutputRegressor` class, which wraps a single-output regressor (like [`LinearRegression`](https://www.codecademy.com/resources/docs/sklearn/linear-regression-analysis) or [`DecisionTreeRegressor`](https://www.codecademy.com/resources/docs/sklearn/decision-trees)) and fits a separate model for each target variable. The model then predicts all outputs at once for each input, treating each target independently. 
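Because a separate copy of the base regressor is fitted for each target, the individual fitted models remain accessible after training through the `estimators_` attribute. The snippet below is a minimal sketch of that idea (the dataset sizes here are arbitrary and separate from the example further down):

```py
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.multioutput import MultiOutputRegressor

# Small synthetic dataset with 3 target columns
X, y = make_regression(n_samples=50, n_features=5, n_targets=3, random_state=0)

# One LinearRegression is fitted per target column
model = MultiOutputRegressor(LinearRegression()).fit(X, y)

# The wrapper stores one fitted estimator per target
print(len(model.estimators_))      # 3
print(model.predict(X[:2]).shape)  # (2, 3) -> one prediction per target
```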
+ +## Syntax + +```pseudo +from sklearn.multioutput import MultiOutputRegressor + +multi_output_regressor = MultiOutputRegressor(estimator, n_jobs=None) +``` + +- `estimator`: The base regressor that is used to fit each target independently. This can be any regression model that supports single-output regression (e.g., `LinearRegression`, `DecisionTreeRegressor`, etc.). +- `n_jobs`: The number of jobs to run in parallel for fitting the models. + - If `None`, it defaults to 1 (single-threaded). + - If `-1`, it uses all available processors. + - If `int > 1`, it uses that many processors for parallel computation. + +## Example + +In the following example, a multi-output regression model is trained using `MultiOutputRegressor` with `LinearRegression` as the base `estimator` to predict two target variables from a dataset with 100 samples and 10 features: + +```py +from sklearn.datasets import make_regression +from sklearn.linear_model import LinearRegression +from sklearn.multioutput import MultiOutputRegressor + +# Generate a dataset with multiple targets +X, y = make_regression(n_samples=100, n_features=10, n_targets=2, random_state=42) + +# Create the base regressor +base_regressor = LinearRegression() + +# Initialize MultiOutputRegressor with the base regressor +multi_output_regressor = MultiOutputRegressor(base_regressor) + +# Fit the model +multi_output_regressor.fit(X, y) + +# Make predictions +predictions = multi_output_regressor.predict(X) +print(predictions) +``` + +The code above generates output as follows: + +```shell +[[ 120.68784134 275.71483026] + [ 253.98296996 563.67307766] + [ 37.84961654 83.88732044] + [-116.11399733 -517.34439275] + [ 292.0729889 342.1211352 ] + [ 126.5187794 394.60780464] + [ 74.14550766 117.86120484] + [ 34.74603745 293.23646551] + [ -1.57480398 -146.33293402] + [ 287.3570598 248.60804028] + [ 24.46227084 20.53247664] + [ 57.5037778 -52.07222977] + [ 33.46389769 59.9293089 ] + [-184.35231748 -145.97759938] + [ -18.0696738 -233.21065317] + [ 97.75493413 216.27609409] + [-224.33987424 -283.50617896] + [ -44.21983413 116.33800462] + [ 37.40886282 177.30394333] + [ 245.13484874 296.85999202] + [ -87.59651931 -39.75259675] + [-202.99155718 -222.10609199] + [ 41.24869185 181.88917186] + [ -17.87045638 -20.97891509] + [ 48.61661067 -165.6237776 ] + [-295.61268808 -528.01153829] + [ 58.07439548 173.69529786] + [ -71.14511833 -132.69257743] + [ -56.87043841 -190.48556695] + [ 49.51678317 137.10430708] + [ 26.66526388 83.57299169] + [ 1.14129753 36.65874573] + [ -7.74468723 6.85375096] + [ 3.73294889 261.10555969] + [ 56.44376756 40.51403006] + [ -1.99224336 151.40524829] + [-131.39863716 -331.62729808] + [ 109.99484706 384.60778547] + [ -7.74961445 107.97786082] + [ 193.82103464 316.71111332] + [ -7.79813083 7.15370226] + [ -52.15779501 -96.20796676] + [ 152.86738501 104.18711697] + [ 191.36728076 288.45882916] + [ 20.20018313 27.74645933] + [ 146.58558363 -117.63456814] + [-354.50728717 -533.4900471 ] + [ 14.97567883 -95.0910446 ] + [ -35.43101502 -118.48757456] + [ 5.35705289 42.88613639] + [-161.09291025 -117.90429652] + [ 172.2775084 396.90747784] + [ 162.61929411 209.92836958] + [-182.68456133 -163.2811691 ] + [ 89.07535864 -21.14848815] + [ -46.75916029 -110.53894603] + [ 231.09730211 319.15982778] + [ -40.108541 -84.98166962] + [-166.45390997 -265.05555636] + [ 0.97586946 -214.40604796] + [ 97.63593301 501.80797772] + [ 3.7398609 -72.64375758] + [ 130.65561152 66.64815668] + [ -85.31407057 -168.81530534] + [ -7.2468998 73.28377393] + [ 22.33697872 
145.21764028] + [-120.51168929 -342.963189 ] + [ 121.12613888 65.01661617] + [ 124.10868505 354.92584718] + [-147.66348249 -294.81859794] + [ 61.14523063 60.52117341] + [-126.37893383 -334.70135616] + [-111.77099591 -81.93814188] + [-109.83747752 -237.97526597] + [ 8.00415806 91.38676316] + [ -26.37947013 36.09839868] + [ 106.36699275 130.83993429] + [ 69.06778835 125.59665375] + [ 134.03028548 319.28586998] + [ 130.75716498 15.34231243] + [ -86.46672131 -139.61281879] + [ -7.33734137 -226.69848199] + [ 199.71269604 357.97063185] + [ 100.94948846 -32.96835461] + [-257.05342439 -386.6851282 ] + [ -99.42556327 -108.57915827] + [ 224.41784227 425.50742575] + [-269.92957188 -202.28685621] + [-109.21584421 -225.03205094] + [-118.45089966 -420.99745962] + [ -29.83876402 19.58063146] + [ 95.06986687 70.5609531 ] + [ 41.32888453 4.51642366] + [ -10.61243193 289.0884239 ] + [ 73.11234969 158.84947994] + [ 10.45019796 260.51876186] + [-226.04884764 -372.71451196] + [ -17.30979575 -146.3735002 ] + [ -13.07113033 -42.21748842] + [ -59.54942557 -102.03957313]] +``` + +> **Note:** The output will vary each time the code is run unless a fixed `random_state` is set in `make_regression()`, ensuring reproducibility as shown in the example. From c0de186449a1570fd7841b28952a5d4a1d696ffc Mon Sep 17 00:00:00 2001 From: arisdelacruz <115809819+arisdelacruz@users.noreply.github.com> Date: Mon, 16 Dec 2024 17:16:49 +0800 Subject: [PATCH 03/40] Masked select entry for pytorch (#5743) * Create masked select entry for pytorch * Update masked-select.md * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md Co-authored-by: Mamta Wardhani * Update masked-select.md minor fixes * Update content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md * Update masked-select.md --------- --- .../terms/masked-select/masked-select.md | 57 +++++++++++++++++++ 1 file changed, 57 insertions(+) create mode 100644 content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md diff --git a/content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md b/content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md new file mode 100644 index 00000000000..7310c4060c5 --- /dev/null +++ b/content/pytorch/concepts/tensor-operations/terms/masked-select/masked-select.md @@ -0,0 +1,57 @@ +--- +Title: '.masked_select()' +Description: 'Selects elements from a tensor, based on a 
boolean mask, and returns them as a 1D tensor.' +Subjects: + - 'Computer Science' + - 'Data Science' +Tags: + - 'Data Structures' + - 'Functions' + - 'Index' + - 'Values' +CatalogContent: + - 'intro-to-py-torch-and-neural-networks' + - 'paths/computer-science' +--- + +In PyTorch, **`.masked_select()`** is a function that selects elements from an input tensor based on a boolean mask of the same shape. It returns a new 1D tensor containing the elements where the corresponding mask value is `True`. + +## Syntax + +```pseudo +torch.masked_select(input, mask, *, out=None) +``` + +- `input`: The input tensor from which elements will be selected. +- `mask`: A boolean tensor of the same shape as input, where `True` indicates the elements to be selected. +- `out` (Optional): A tensor to store the result. If provided, the selected elements will be written to this tensor instead of creating a new one. + +## Example + +Here's an example of using `.masked_select()` in PyTorch: + +```py +import torch + +# Create an input tensor +input_tensor = torch.tensor([1, 2, 3, 4, 5]) + +# Create a mask tensor with boolean values +mask = torch.tensor([True, False, True, False, True]) + +# Use masked_select to extract elements from the input tensor where the mask is True +selected_elements = torch.masked_select(input_tensor, mask) + +# Print the selected elements +print(selected_elements) +``` + +The code above generates the output as follows: + +```shell +tensor([1, 3, 5]) +``` + +In this example, the `input_tensor` contains elements `[1, 2, 3, 4, 5]`, and the `mask` tensor contains boolean values `[True, False, True, False, True]`. The `masked_select()` function selects elements from the `input_tensor` where the corresponding mask value is `True`, resulting in the tensor `[1, 3, 5]`. + +The `.masked_select()` function is useful for filtering elements from a tensor based on conditions specified by the mask tensor. It can be applied in various scenarios, such as selecting specific elements for further processing, analysis, or model training. From 80c55b7d9b0c1f35ec63402605750aec9d0adfb3 Mon Sep 17 00:00:00 2001 From: Goodluck Somadina Chukwuemeka <105865699+Good-Soma@users.noreply.github.com> Date: Mon, 16 Dec 2024 10:32:12 +0100 Subject: [PATCH 04/40] Create deg2rad.md (#5726) * Create deg2rad.md * Update and rename deg2rad.md to deg2rad.md format * Update deg2rad.md * Update deg2rad.md * Create mixedlm.md * Update mixedlm.md * Delete content/python/concepts/statsmodels/terms/mixedlm/mixedlm.md * Update deg2rad.md minor fixes * Update deg2rad.md * Update deg2rad.md * Update deg2rad.md * Update deg2rad.md * Fix lint errors --------- --- .../math-methods/terms/deg2rad/deg2rad.md | 65 +++++++++++++++++++ 1 file changed, 65 insertions(+) create mode 100644 content/numpy/concepts/math-methods/terms/deg2rad/deg2rad.md diff --git a/content/numpy/concepts/math-methods/terms/deg2rad/deg2rad.md b/content/numpy/concepts/math-methods/terms/deg2rad/deg2rad.md new file mode 100644 index 00000000000..2432eca0f88 --- /dev/null +++ b/content/numpy/concepts/math-methods/terms/deg2rad/deg2rad.md @@ -0,0 +1,65 @@ +--- +Title: '.deg2rad()' +Description: 'Converts angles from degrees to radians.' +Subjects: + - 'Computer Science' + - 'Data Science' + - 'Web Development' +Tags: + - 'Math' + - 'NumPy' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +In NumPy, the **`.deg2rad()`** function converts an angle from degrees to radians. + +> **Note:** In NumPy, the default unit for angles is radians. 
Therefore, the `.deg2rad()` function is used to convert angle values from degrees to radians. + +## Syntax + +```pseudo +numpy.deg2rad(x, out=None) +``` + +- `x`: The input array (or scalar) containing angles in degrees that need to be converted to radians. +- `out` (Optional): A location where the result is stored. If not specified, a new array is returned. + +## Example + +In this example, the code converts an angle measured in degrees to radians using the `numpy.deg2rad()` function: + +```py +import numpy as np + +# Angle of the board on a table measured in degrees +angle_degrees = 45 + +# Convert the angle to radians +angle_radians = np.deg2rad(angle_degrees) + +# Output the result +print(f"Angle in degrees: {angle_degrees}") +print(f"Angle in radians: {angle_radians}") +``` + +The code above produces the following output: + +```shell +Angle in degrees: 45 +Angle in radians: 0.7853981633974483 +``` + +## Codebyte Example + +Run the codebyte example below to understand how the `.deg2rad()` function works: + +```codebyte/python +import numpy as np + +degrees = 170 +radians = np.deg2rad(degrees) + +print(f"{degrees} degrees is {radians} radians.") +``` From 07ec82a28a0d5ceae2c7550cff89411a28c70b46 Mon Sep 17 00:00:00 2001 From: SaiTeja-002 <95877599+SaiTeja-002@users.noreply.github.com> Date: Mon, 16 Dec 2024 18:02:16 +0530 Subject: [PATCH 05/40] Feat: [Term Entry] PyTorch Tensor Operations .select() (#5783) * [Term Entry] PyTorch Tensor Operations .select() * Fix Linting Issues * Update select.md minor fixes --------- --- .../tensor-operations/terms/select/select.md | 59 +++++++++++++++++++ 1 file changed, 59 insertions(+) create mode 100644 content/pytorch/concepts/tensor-operations/terms/select/select.md diff --git a/content/pytorch/concepts/tensor-operations/terms/select/select.md b/content/pytorch/concepts/tensor-operations/terms/select/select.md new file mode 100644 index 00000000000..947c581f81c --- /dev/null +++ b/content/pytorch/concepts/tensor-operations/terms/select/select.md @@ -0,0 +1,59 @@ +--- +Title: '.select()' +Description: 'Selects a specific slice along the given dimension in a tensor.' +Subjects: + - 'Computer Science' + - 'Machine Learning' +Tags: + - 'Functions' + - 'Machine Learning' + - 'Methods' + - 'Python' +CatalogContent: + - 'intro-to-py-torch-and-neural-networks' + - 'paths/computer-science' +--- + +The **`.select()`** method in PyTorch returns a specific slice of a [tensor](https://www.codecademy.com/resources/docs/pytorch/tensors) along a specified dimension, reducing the dimensionality of the output tensor by one compared to the input tensor. + +## Syntax + +```pseudo +torch.select(input, dim, index) +``` + +- `input`: The input tensor. +- `dim`: The dimension along which to select. +- `index`: The index of the slice to select along the specified dimension. 
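Selecting with `.select()` is equivalent to plain integer indexing along the chosen dimension, which can help when reasoning about the result's shape. A minimal sketch of that equivalence (using an arbitrary sample tensor):

```py
import torch

t = torch.arange(12).reshape(3, 4)

# Selecting along dim 0 is the same as integer indexing on the first axis
print(torch.equal(torch.select(t, 0, 1), t[1]))     # True

# Selecting along dim 1 is the same as indexing on the second axis
print(torch.equal(torch.select(t, 1, 2), t[:, 2]))  # True
```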
+ +## Example + +The following example illustrates the usage of `.select()` method: + +```py +import torch + +# 2D tensor +tensor = torch.tensor([[10, 20], [30, 40], [50, 60]]) +print("Input Tensor: ", tensor) + +# Select a row (dim=0) +row = torch.select(tensor, 0, 1) +print("\nSelected Row (dim=0, index=1):", row) + +# Select a column (dim=1) +col = torch.select(tensor, 1, 0) +print("\nSelected Column (dim=1, index=0):", col) +``` + +The above code gives the following output: + +```shell +Input Tensor: tensor([[10, 20], + [30, 40], + [50, 60]]) + +Selected Row (dim=0, index=1): tensor([30, 40]) + +Selected Column (dim=1, index=0): tensor([10, 30, 50]) +``` From e86127fc0086893a5db8021d67e400284462d8de Mon Sep 17 00:00:00 2001 From: gustavo-crespo Date: Mon, 16 Dec 2024 09:22:44 -0500 Subject: [PATCH 06/40] Add example and usecase using Partition by (#5677) * Add example using partition by * Addressed admin feedback: Tables format and headers * Update content/sql/concepts/window-functions/terms/lag/lag.md * Fix formatting of SQL queries * Update lag.md * Update lag.md * Update lag.md * Update lag.md * Update lag.md * Update lag.md * Fix lint errors --------- --- .../window-functions/terms/lag/lag.md | 56 +++++++++++++++++-- 1 file changed, 50 insertions(+), 6 deletions(-) diff --git a/content/sql/concepts/window-functions/terms/lag/lag.md b/content/sql/concepts/window-functions/terms/lag/lag.md index a1ebf767cba..50c0e56f1fd 100644 --- a/content/sql/concepts/window-functions/terms/lag/lag.md +++ b/content/sql/concepts/window-functions/terms/lag/lag.md @@ -43,19 +43,63 @@ Users Table | kyle | xy | 60 | ```sql -SELECT *, +SELECT + first_name, + last_name, + age, LAG(age, 1) OVER ( - ORDER BY age ASC) AS previous_age + ORDER BY age DESC + ) AS previous_age FROM Users; ``` -The output is a table that features a new column `previous_age`, which holds the values from the previous records. The first record is null because a default was not specified and the previous row would be out of range. - -Output +The output of the above code is a table that features a new column `previous_age`, which holds the values from the previous records. The first record is null because a default was not specified and the previous row would be out of range. | first_name | last_name | age | previous_age | | ---------- | --------- | --- | ------------ | -| kyle | xy | 60 | null | +| kyle | xy | 60 | NULL | | jenna | black | 35 | 60 | | chris | smith | 30 | 35 | | dave | james | 19 | 30 | + +### Using `PARTITION BY` Clause + +This example demonstrates how to use the `LAG()` function to create a new column, `previous_position`. + +The `PARTITION BY employee_id` clause ensures that the `LAG()` function operates within each group of rows that share the same `employee_id`. The `ORDER BY promotion_date` ensures the rows are processed in chronological order. + +`Promotions` Table + +| employee_id | promotion_date | new_position | +| ----------- | -------------- | ------------ | +| 1 | 2020-01-01 | Junior Dev | +| 1 | 2021-06-01 | Mid Dev | +| 1 | 2024-03-01 | Senior Dev | +| 2 | 2019-05-01 | Intern | +| 2 | 2022-11-01 | Analyst | +| 2 | 2024-11-20 | Data Analyst | + +```sql +SELECT + employee_id, + promotion_date, + new_position, + LAG(new_position) OVER ( + PARTITION BY employee_id + ORDER BY promotion_date + ) AS previous_position +FROM Promotions; +``` + +Within each group defined by `employee_id`, the `previous_position` column holds the value from the previous row based on `promotion_date`. 
The first record in each group is `NULL` because there is no preceding row. + +The above code generates the following output: + +| employee_id | promotion_date | new_position | previous_position | +| ----------- | -------------- | ------------ | ----------------- | +| 1 | 2020-01-01 | Junior Dev | NULL | +| 1 | 2021-06-01 | Mid Dev | Junior Dev | +| 1 | 2024-03-01 | Senior Dev | Mid Dev | +| 2 | 2019-05-01 | Intern | NULL | +| 2 | 2022-11-01 | Analyst | Intern | +| 2 | 2024-11-20 | Data Analyst | Analyst | From 8f8429d383a8205d08cacfb5199ac12f804a3de0 Mon Sep 17 00:00:00 2001 From: ralfze <147318493+ralfze@users.noreply.github.com> Date: Tue, 17 Dec 2024 16:56:18 +0100 Subject: [PATCH 07/40] [Term Entry] Python SQL Connectors: SQLite3 * Create sqlite3 term for python sql connectors * Fix formatting and ran scripts * Added requested edits * Update content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md * Update content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md * Update content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md * Update content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md * Update content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md * Update content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md * Update content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md * Update content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md * Correct prettier format * Minor changes --------- --- .../sql-connectors/terms/sqlite3/sqlite3.md | 147 ++++++++++++++++++ 1 file changed, 147 insertions(+) create mode 100644 content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md diff --git a/content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md b/content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md new file mode 100644 index 00000000000..b7309b108dd --- /dev/null +++ b/content/python/concepts/sql-connectors/terms/sqlite3/sqlite3.md @@ -0,0 +1,147 @@ +--- +Title: 'SQLite3' +Description: 'SQLite3 is a library used to connect to SQLite databases.' +Subjects: + - 'Computer Science' + - 'Data Science' +Tags: + - 'SQLite' + - 'Documentation' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +The **`sqlite3`** library is used to connect to SQLite databases and provides functions to interact with them. It can also be used for prototyping while developing an application. + +## Syntax + +```pseudo +import sqlite3 +``` + +The `sqlite3` library handles the communication with the databases. 
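For prototyping, SQLite can also run entirely in memory by passing the special name `:memory:` to `.connect()`, so nothing is written to disk and the database disappears when the connection is closed. A minimal sketch:

```py
import sqlite3

# An in-memory database: useful for tests and quick prototypes
con = sqlite3.connect(":memory:")
curs = con.cursor()

curs.execute("CREATE TABLE demo(value INTEGER)")
curs.execute("INSERT INTO demo VALUES (42)")
print(curs.execute("SELECT * FROM demo").fetchall())  # [(42,)]

con.close()
```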
## Create a Connection

To work with a database, it first needs to be connected to using the **`.connect()`** function:

```py
import sqlite3

con = sqlite3.connect("mydb_db.db")
```

## Create a Cursor

A cursor is required to execute SQL statements, and the **`.cursor()`** function creates one from the connection:

```py
curs = con.cursor()
```

## Create a Table

The **`.execute()`** function can be used to create a table:

```py
curs.execute('''CREATE TABLE persons(
  name TEXT,
  age INTEGER,
  gender TEXT)
  ''')
```

## Insert a Value Into the Table

To insert values into the table, the SQL statement is executed with the `.execute()` function:

```py
curs.execute('''INSERT INTO persons VALUES(
  'Alice', 21, 'female')''')
```

## Insert Multiple Values Into the Table

To insert multiple values into the table, the SQL statement is executed using the **`.executemany()`** function with a list of values:

```py
new_persons = [('Bob', 26, 'male'),
               ('Charlie', 19, 'male'),
               ('Daisy', 18, 'female')
               ]

curs.executemany('''INSERT INTO persons VALUES(?, ?, ?)''', new_persons)
```

## Commit the Transaction

The **`.commit()`** function saves the inserted values to the database permanently:

```py
con.commit()
```

## Check the Inserted Rows

To check all the inserted rows, the **`.fetchall()`** function can be used on the result of a query:

```py
result = curs.execute("SELECT * FROM persons")

result.fetchall()
```

## Close the Connection

After completing all the transactions, the connection can be closed with **`.close()`**:

```py
con.close()
```

## Codebyte Example

Here's a codebyte example showing how to connect to an SQLite database, create a table, insert/query data, and close the connection:

```codebyte/python
import sqlite3

# Create a connection to the database
con = sqlite3.connect("mydb_db.db")

# Create a cursor to execute SQL statements
curs = con.cursor()

# Ensure a fresh table is created
curs.execute('''DROP TABLE IF EXISTS persons''')

# Create a new table
curs.execute('''CREATE TABLE persons(
  name TEXT,
  age INTEGER,
  gender TEXT)
  ''')

# Insert a value into the table
curs.execute('''INSERT INTO persons VALUES(
'Alice', 21, 'female')''')

# Insert multiple values into the table
new_persons = [('Bob', 26, 'male'),
               ('Charlie', 19, 'male'),
               ('Daisy', 18, 'female')
               ]

curs.executemany('''INSERT INTO persons VALUES(?, ?, ?)''', new_persons)

# Commit the transaction to the database
con.commit()

# Check the inserted rows
result = curs.execute("SELECT * FROM persons")
printout = result.fetchall()
print(printout)

# Close the connection
con.close()
```
From 9cbf3a8c617a0e856110c44855e3385c41fdf90a Mon Sep 17 00:00:00 2001
From: arisdelacruz <115809819+arisdelacruz@users.noreply.github.com>
Date: Wed, 18 Dec 2024 00:20:32 +0800
Subject: [PATCH 08/40] [Concept Entry] TensorFlow: Math

* Create general math ops for TensorFlow

* Update math.md

* Update content/tensorflow/concepts/math/math.md

Co-authored-by: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com>

* Update content/tensorflow/concepts/math/math.md

Co-authored-by: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com>

* Update content/tensorflow/concepts/math/math.md

Co-authored-by: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com>

* Update content/tensorflow/concepts/math/math.md

Co-authored-by: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com>

* Update
content/tensorflow/concepts/math/math.md Co-authored-by: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com> * Update math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Update content/tensorflow/concepts/math/math.md * Fix markdownlint issues * Minor changes --------- --- content/tensorflow/concepts/math/math.md | 144 +++++++++++++++++++++++ documentation/catalog-content.md | 81 +++++++------ 2 files changed, 188 insertions(+), 37 deletions(-) create mode 100644 content/tensorflow/concepts/math/math.md diff --git a/content/tensorflow/concepts/math/math.md b/content/tensorflow/concepts/math/math.md new file mode 100644 index 00000000000..9f4823a51e1 --- /dev/null +++ b/content/tensorflow/concepts/math/math.md @@ -0,0 +1,144 @@ +--- +Title: 'Math' +Description: 'Mathematical computations on tensors using TensorFlow.' +Subjects: + - 'AI' + - 'Data Science' +Tags: + - 'Arithmetic' + - 'Arrays' + - 'Deep Learning' + - 'TensorFlow' +CatalogContent: + - 'intro-to-tensorflow' + - 'tensorflow-for-deep-learning' +--- + +In TensorFlow, **math operations** are fundamental for performing various mathematical computations on tensors. Tensors are multi-dimensional arrays that can be manipulated using various operations. + +TensorFlow offers a rich set of mathematical operations under the `tf.math` module. These operations include arithmetic, trigonometric and exponential functions, and more. + +Some of the key mathematical operations available in TensorFlow are listed below. + +## Arithmetic Operations + +TensorFlow provides a wide range of arithmetic operations that can be performed on tensors, including addition, subtraction, multiplication, division, and more. Here are some examples of arithmetic operations in TensorFlow: + +```py +import tensorflow as tf + +a = tf.constant([1, 2, 3]) +b = tf.constant([4, 5, 6]) + +# Arithmetic operations + +tf.math.add(a, b) # Element-wise addition +tf.math.subtract(a, b) # Element-wise subtraction +tf.math.multiply(a, b) # Element-wise multiplication +tf.math.divide(a, b) # Element-wise division +``` + +## Element-wise Operations + +Element-wise operations are operations applied to each element of a tensor individually. These operations include computing each element's power, calculating each element's square root, and returning the absolute value of each component. Here are some examples of element-wise operations in TensorFlow: + +```py +import tensorflow as tf + +a = tf.constant([1, 2, 3], dtype=tf.float32) + +# Element-wise operations + +tf.math.pow(a, 2) # Element-wise power +tf.math.sqrt(a) # Element-wise square root +tf.math.abs(a) # Element-wise absolute value +``` + +## Trigonometric Functions + +TensorFlow supports trigonometric functions such as sine, cosine, tangent, and their inverses, which have domain constraints. These functions are useful for various mathematical computations. 
Here are some examples of trigonometric functions in TensorFlow: + +```py +import tensorflow as tf + +a = tf.constant([0.0, 1.0, 2.0]) + +# Trigonometric functions + +tf.math.sin(a) # Element-wise sine +tf.math.cos(a) # Element-wise cosine +tf.math.tan(a) # Element-wise tangent +tf.math.asin(a) # Element-wise arcsine +tf.math.acos(a) # Element-wise arccosine +tf.math.atan(a) # Element-wise arctangent +``` + +## Exponential and Logarithmic Functions + +TensorFlow offers functions to compute exponentials and logarithms of tensor elements, widely used in mathematical and scientific computations. Here are some examples of exponential and logarithmic functions in TensorFlow: + +```py +import tensorflow as tf + +a = tf.constant([1.0, 2.0, 3.0]) + +# Exponential and logarithmic functions + +tf.math.exp(a) # Element-wise exponential +tf.math.log(a) # Element-wise natural logarithm +tf.math.log10(a) # Element-wise base-10 logarithm +tf.math.log1p(a) # Element-wise natural logarithm of (1 + x) +``` + +## Reduction Operations + +Reduction operations compute a single result from multiple tensor elements. These operations include sum, mean, maximum, minimum, and more. Here are some examples of reduction operations in TensorFlow: + +```py +import tensorflow as tf + +a = tf.constant([[1, 2, 3], [4, 5, 6]]) + +# Reduction operations + +tf.math.reduce_sum(a) # Sum of all elements +tf.math.reduce_mean(a) # Mean of all elements +tf.math.reduce_max(a) # Maximum value +tf.math.reduce_min(a) # Minimum value +``` + +## Comparison Operations + +TensorFlow supports comparison operations that compare tensor elements and return boolean values based on the comparison results. Here are some examples of comparison operations in TensorFlow: + +```py +import tensorflow as tf + +a = tf.constant([1, 2, 3]) +b = tf.constant([3, 2, 1]) + +# Comparison operations + +tf.math.equal(a, b) # Element-wise equality +tf.math.less(a, b) # Element-wise less than +tf.math.greater(a, b) # Element-wise greater than +tf.math.not_equal(a, b) # Element-wise inequality +``` + +## Special Functions + +TensorFlow offers a variety of special mathematical functions such as `Bessel` functions, `error` functions, and `gamma` functions. These functions are useful for advanced mathematical computations. Here are some examples of special functions in TensorFlow: + +```py +import tensorflow as tf + +a = tf.constant([1.0, 2.0, 3.0]) + +# Special functions + +tf.math.erf(a) # Element-wise error function +tf.math.lgamma(a) # Element-wise natural logarithm of the absolute value of the gamma function of x +tf.math.bessel_i0(a) # Element-wise modified Bessel function of the first kind of order 0 +``` + +By leveraging these mathematical operations, a wide range of computations on tensors can be performed in TensorFlow, making it a powerful tool for scientific computing, machine learning, and deep learning applications. diff --git a/documentation/catalog-content.md b/documentation/catalog-content.md index 44c6da9ba57..955d04c350f 100644 --- a/documentation/catalog-content.md +++ b/documentation/catalog-content.md @@ -9,63 +9,63 @@ These slugs may vary for different topics. Feel free to add suggestions for new slugs to the lists as part of your PR! Be sure to insert them alphabetically. 
-### C +## C ``` - 'learn-c' - 'paths/computer-science' ``` -### C++ +## C++ ``` - 'learn-c-plus-plus' - 'paths/computer-science' ``` -### Cloud Computing +## Cloud Computing ``` - 'foundations-of-cloud-computing' - 'paths/back-end-engineer-career-path' ``` -### Command Line +## Command Line ``` - 'learn-the-command-line' - 'paths/computer-science' ``` -### CSS +## CSS ``` - 'learn-css' - 'paths/front-end-engineer-career-path' ``` -### Cybersecurity +## Cybersecurity ``` - 'introduction-to-cybersecurity' - 'paths/fundamentals-of-cybersecurity' ``` -### Dart +## Dart ``` - 'learn-dart' - 'paths/computer-science' ``` -### Emojicode +## Emojicode ``` - 'learn-emojicode' - 'paths/computer-science' ``` -### Git +## Git ``` - 'learn-git' @@ -73,7 +73,7 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/computer-science' ``` -### Go +## Go ``` - 'learn-go' @@ -81,77 +81,77 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/computer-science' ``` -### HTML +## HTML ``` - 'learn-html' - 'paths/front-end-engineer-career-path' ``` -### Java +## Java ``` - 'learn-java' - 'paths/computer-science' ``` -### JavaScript +## JavaScript ``` - 'introduction-to-javascript' - 'paths/front-end-engineer-career-path' ``` -### JavaScript:D3 +## JavaScript:D3 ``` - 'learn-d3' - 'paths/data-science' ``` -### Kotlin +## Kotlin ``` - 'learn-kotlin' - 'paths/computer-science' ``` -### Markdown +## Markdown ``` - 'learn-html' - 'paths/front-end-engineer-career-path' ``` -### Open Source +## Open Source ``` - 'introduction-to-open-source' - 'paths/code-foundations' ``` -### PHP +## PHP ``` - 'learn-php' - 'paths/computer-science' ``` -### PowerShell +## PowerShell ``` - 'learn-powershell' - 'paths/computer-science' ``` -### Python +## Python ``` - 'learn-python-3' - 'paths/computer-science' ``` -### Python:Matplotlib +## Python:Matplotlib ``` - 'learn-python-3' @@ -160,7 +160,7 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/data-science-foundations' ``` -### Python:Numpy +## Python:Numpy ``` - 'learn-python-3' @@ -169,7 +169,7 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/data-science-foundations' ``` -### Python:Pandas +## Python:Pandas ``` - 'learn-python-3' @@ -178,7 +178,7 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/data-science-foundations' ``` -### Python:Pillow +## Python:Pillow ``` - 'learn-python-3' @@ -187,7 +187,7 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/data-science-foundations' ``` -### Python:Plotly +## Python:Plotly ``` - 'learn-python-3' @@ -196,7 +196,7 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/data-science-foundations' ``` -### Python:PyTorch +## Python:PyTorch ``` - 'intro-to-py-torch-and-neural-networks' @@ -208,7 +208,7 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/machine-learning' ``` -### Python:Seaborn +## Python:Seaborn ``` - 'learn-python-3' @@ -217,13 +217,20 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! 
Be s - 'paths/data-science-foundations' ``` -### Python:Sklearn +## Python:Sklearn ``` - 'paths/intermediate-machine-learning-skill-path' ``` -### R +## Python:TensorFlow + +``` +- 'intro-to-tensorflow' +- 'tensorflow-for-deep-learning' +``` + +## R ``` - 'learn-r' @@ -232,14 +239,14 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/computer-science' ``` -### React +## React ``` - 'react-101' - 'paths/front-end-engineer-career-path' ``` -### Ruby +## Ruby ``` - 'learn-rails' @@ -247,14 +254,14 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/full-stack-engineer-career-path' ``` -### Rust +## Rust ``` - 'rust-for-programmers' - 'paths/computer-science' ``` -### SQL +## SQL ``` - 'learn-sql' @@ -263,28 +270,28 @@ Feel free to add suggestions for new slugs to the lists as part of your PR! Be s - 'paths/data-science-foundations' ``` -### Swift +## Swift ``` - 'learn-swift' - 'paths/build-ios-apps-with-swiftui' ``` -### SwiftUI +## SwiftUI ``` - 'learn-swift' - 'paths/build-ios-apps-with-swiftui' ``` -### TypeScript +## TypeScript ``` - 'learn-typescript' - 'paths/full-stack-engineer-career-path' ``` -### UI/UX +## UI/UX ``` - 'intro-to-ui-ux' From 7eb32b34f9e306cb1891380465df14193a2d62f5 Mon Sep 17 00:00:00 2001 From: Pragati Verma Date: Wed, 18 Dec 2024 18:31:06 +0530 Subject: [PATCH 09/40] [Term Entry] Neural Networks ai gradient descent (#5785) * Add concept entry for ai gradient descent * Review Fixes * Fix lint errors * Fix lint errors * Fix lint errors * Fix lint errors * Update gradient-descent.md --------- --- .../gradient-descent/gradient-descent.md | 112 ++++++++++++++++++ 1 file changed, 112 insertions(+) create mode 100644 content/ai/concepts/neural-networks/terms/gradient-descent/gradient-descent.md diff --git a/content/ai/concepts/neural-networks/terms/gradient-descent/gradient-descent.md b/content/ai/concepts/neural-networks/terms/gradient-descent/gradient-descent.md new file mode 100644 index 00000000000..ef4f66b5957 --- /dev/null +++ b/content/ai/concepts/neural-networks/terms/gradient-descent/gradient-descent.md @@ -0,0 +1,112 @@ +--- +Title: 'Gradient Descent' +Description: 'Gradient Descent is an optimization algorithm that minimizes a cost function by iteratively adjusting parameters in the direction of its gradient.' +Subjects: + - 'Machine Learning' + - 'Data Science' + - 'Computer Science' +Tags: + - 'AI' + - 'Machine Learning' + - 'Neural Networks' + - 'Functions' +CatalogContent: + - 'paths/data-science' + - 'paths/machine-learning' +--- + +**Gradient Descent** is an optimization algorithm commonly used in machine learning and neural networks to minimize a cost function. Its goal is to iteratively find the optimal parameters (weights) that minimize the error or loss. + +In neural networks, gradient descent computes the gradient (derivative) of the cost function with respect to each parameter. It then updates the parameters in the direction of the negative gradient, effectively reducing the cost with each step. + +## Types of Gradient Descent + +There are three main types of gradient descent: + +| Type | Description | +| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| **Batch Gradient Descent** | Uses the entire dataset to compute the gradient and update the weights. 
Typically slower but more accurate for large datasets. | +| **Stochastic Gradient Descent (SGD)** | Uses a single sample to compute the gradient and update the weights. It is faster, but the updates are noisy and can cause fluctuations in the convergence path. | +| **Mini-batch Gradient Descent** | A compromise between batch and stochastic gradient descent, using a small batch of samples to compute the gradient. It balances the speed and accuracy of the learning process. | + +## Gradient Descent Update Rule + +The basic update rule for gradient descent is: + +```pseudo +theta = theta - learning_rate * gradient_of_cost_function +``` + +- `theta`: The parameter (weight) of the model that is being optimized. +- `learning_rate`: A hyperparameter that controls the step size. +- `gradient_of_cost_function`: The gradient (derivative) of the cost function with respect to the parameters, indicating the direction and magnitude of the change needed. + +## Syntax + +Here's a basic syntax for Gradient Descent in the context of machine learning, specifically for updating the model parameters (weights) in order to minimize the cost function: + +```pseudo +# Initialize parameters (weights) and learning rate +theta = initial_value # Model Parameters (weights) +learning_rate = value # Learning rate (step size) +iterations = number_of_iterations # Number of iterations + +# Repeat until convergence +for i in range(iterations): + # Calculate the gradient of the cost function + gradient = compute_gradient(X, y, theta) # Gradient calculation + + # Update the parameters (weights) + theta = theta - learning_rate * gradient # Update rule + + # Optionally, compute and store the cost (for monitoring convergence) + cost = compute_cost(X, y, theta) + store(cost) +``` + +## Example + +In the following example, we implement simple gradient descent to minimize the cost function of a linear regression problem: + +```py +import numpy as np + +# Sample data (X: inputs, y: actual outputs) +X = np.array([1, 2, 3, 4, 5]) +y = np.array([1, 2, 1.3, 3.75, 2.25]) + +# Parameters initialization +theta = 0.0 # Initial weight +learning_rate = 0.01 # Step size +iterations = 1000 # Number of iterations + +# Cost function (Mean Squared Error) +def compute_cost(X, y, theta): + m = len(y) + cost = (1/(2*m)) * np.sum((X*theta - y)**2) # The cost function for linear regression + return cost + +# Gradient Descent function +def gradient_descent(X, y, theta, learning_rate, iterations): + m = len(y) + cost_history = [] + + for i in range(iterations): + gradient = (1/m) * np.sum(X * (X*theta - y)) # Derivative of cost function + theta = theta - learning_rate * gradient # Update theta + cost_history.append(compute_cost(X, y, theta)) # Track cost + return theta, cost_history + +# Run Gradient Descent +theta_optimal, cost_history = gradient_descent(X, y, theta, learning_rate, iterations) + +print(f"Optimal Theta: {theta_optimal}") +``` + +The output for the above code will be something like this: + +```shell +Optimal Theta: 0.6390909090909086 +``` + +> **Note**: The optimal `theta` value will be an approximation, as the gradient descent approach iteratively updates the weight to reduce the cost function. From 9bce36b821d7b76da40d8f24ed94fdd2257dc66b Mon Sep 17 00:00:00 2001 From: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com> Date: Wed, 18 Dec 2024 20:48:25 +0530 Subject: [PATCH 10/40] [Term Entry] Plotly Graph Objects: .Histogram2dContour() * New file has been added. 
* Update user-input.md

* Update user-input.md

* The entry has been added successfully.

* Minor changes

---------
---
 .../histogram2dContour/histogram2dContour.md | 64 ++++++++++++++++++
 media/histogram2dcontour-example.png         | Bin 0 -> 30235 bytes
 2 files changed, 64 insertions(+)
 create mode 100644 content/plotly/concepts/graph-objects/terms/histogram2dContour/histogram2dContour.md
 create mode 100644 media/histogram2dcontour-example.png

diff --git a/content/plotly/concepts/graph-objects/terms/histogram2dContour/histogram2dContour.md b/content/plotly/concepts/graph-objects/terms/histogram2dContour/histogram2dContour.md
new file mode 100644
index 00000000000..1205fe0c484
--- /dev/null
+++ b/content/plotly/concepts/graph-objects/terms/histogram2dContour/histogram2dContour.md
@@ -0,0 +1,64 @@
---
Title: '.Histogram2dContour()'
Description: 'Creates 2D histograms with contours for visualizing density distributions in data.'
Subjects:
  - 'Data Science'
  - 'Data Visualization'
Tags:
  - 'Data'
  - 'Data Structures'
  - 'Plotly'
CatalogContent:
  - 'learn-python-3'
  - 'paths/data-science'
---

The **`.Histogram2dContour()`** method in Plotly's `graph_objects` module creates a 2D histogram with contour lines to visualize the joint distribution of two variables. It uses a grid where color intensity represents the count or aggregated values within each cell, while the contour lines indicate regions of equal density. This method helps visualize relationships and density in bivariate data, uncovering patterns and trends.

## Syntax

```pseudo
plotly.graph_objects.Histogram2dContour(x=None, y=None, nbinsx=None, nbinsy=None, colorscale=None, contours=None, ...)
```

- `x`: Input data for the x-axis.
- `y`: Input data for the y-axis.
- `nbinsx` (Optional): The number of bins (intervals) used to divide the x-axis range. If not specified (`None`), Plotly automatically calculates an appropriate number of bins based on the data.
- `nbinsy` (Optional): The number of bins (intervals) used to divide the y-axis range. If not specified (`None`), Plotly automatically calculates an appropriate number of bins based on the data.
- `colorscale` (Optional): Defines the color scale for the heatmap.
- `contours` (Optional): Configuration for contour lines (e.g., `levels`, `start`, `end`, `size`).

> **Note**: To personalize the 2D histogram contour plot, there are more possible options than those mentioned above, as indicated by the ellipsis in the syntax (...).

## Example

The following example showcases the use of `.Histogram2dContour()`:

```py
import plotly.graph_objects as go

# Sample data
x = [1, 2, 2, 3, 3, 3, 4, 4, 4, 4]
y = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]

# Create the histogram with contours
fig = go.Figure(
    go.Histogram2dContour(
        x=x,
        y=y,
        nbinsx=5,
        nbinsy=5,
        colorscale='Viridis',
        contours=dict(start=0, end=4, size=1)
    )
)

# Show the figure
fig.show()
```

The example demonstrates how to use `.Histogram2dContour()` to create a two-dimensional histogram with contour lines that visualize the joint distribution between two variables.
The above code generates the following output:

![Histogram2dContour in Plotly](https://raw.githubusercontent.com/Codecademy/docs/main/media/histogram2dcontour-example.png)
diff --git a/media/histogram2dcontour-example.png b/media/histogram2dcontour-example.png
new file mode 100644
index 0000000000000000000000000000000000000000..ed21540c92736284f30471b58799600319f69d94
GIT binary patch
literal 30235
(binary image data for histogram2dcontour-example.png omitted)
znNY!0)T;Jvnq&dd|GD!o&z{1ynIQeI1F(AbUNYOth!#*wD{^4un202WFR_8yuH{`jWZwY61TWqmAfNWWiuJ6u*TTx-dp z@@$vBdL%e=E-D-eq6YOuxdZBc?3&l#AMHj+d7t`4eTK^&n82y5)#wk{$jW+?U?<(M ze_u0s8U*#qCit}BZ9+IL$aC?O84y<|_2osAY%_7Zql|}TD{y21y+w@u(>|Og&z1JuhWc$EK=N|*9Ekb?#rc>C)5AXuS!N~ zjgF^Bi=zL4#_g^MsXFfSRof#-Z4OU)tfnP%x1p6+ze}1(_bJ^~=5~0pw&)1RF9-k_ zvOsE(=?k$GDgD(#Mdm2eA3&X;CurHrQ?Iz%Y*EH%lPG>1m8D(NH|FTQaF(1#11mSE zr&tAS?9ynWJ}Y1H4gEXF2@3k+yEprc!RF7}J-^*|$K;&c^BqAWYK}uYbK}C~tDv0! z*#=~hD9_(|#DD7?1bv(A{2SBt#7mX*Q(arQ%z8vp-b-J_(C!C{{+%WdSSP=*(ImgW zWR`Nf_+Mrj$e5238wh3>Q#|CXBxK8Y&<6Xm^XvCA?wAHVw-_-25lQa05AQPY?BMkIclF(uc=kn%zrlPAw|oI&=|sX+14jKRRXDP1$A&3;_B@0Jl!G@lLA$Q%0%1 zWd$)b(iYVua7p8Y@3~00JvZ7q)%KN|2G#(!sxcg>Ux%+slp8_5sYfOH1_dd_jcMB3 zA5pFucSZn#MXsUe?j<|zZ~D4|RdJL);tGRaCE?O`E-IHUZb{oXw{$yc{<12bFt})I z)BLgV+d`9*yFBduElt*x6_qe~i^q&?A#zpt`PuZ<8!f>o2Vdnefki=E*uB!_K#JO2 z1jcVf7&+VV5v11j(zd6tnBzMi*XWXqV3fxWCEK8XycsNOF6v1)i3saZ%O4!r2h?c+ zh@JnqT0os%n+gxW8fhZk&<48^t!2frACmc7;cYRRfs%(mCs9cKvDjVlTy(H9q`tDl z|DbDaqw+yy)FzNy8UY7DUXl)*k-Gh>eADbn*}(v6UgsN6=>+Zbj#|znxrQ4Zy`O~* z+Msoh_i~iZy$kM%4So*vFb9&pi6X$E{quAoNnew&D(cL|&G3GUQQ+hR!bQ@hn&oDG zT>JxQtwTcRHYELZeethh@wh(IK{>l81kICR1iu_`=-(w1!4rZUe|-~hddR~(VGhh< z1e~|aDPVoiw!-Im%LX;bX`-kUY^q?G>!5nP?zu=kI?aI+j z%+R%3t;9bT-1iryni`{dgbo4QS_44i-{884zl$QppsGw~x2<=rJzi*MzWBv!lBhk{ zm=o}zP_^cc^L=hzc+tEYrtleF@QYbDwPtAr%xS3ofhB=EMCY|q>+wF~7n8}*?0~Ef z@^!0U$^a;3>5Q=>(X(5~Yo->p6pzo1R@ z-AZ6g-&<|(!c`)Hbo5f@cadf^(&Wuje7H^Ya-Ynhvfh%ReaC`@g4W-3fSImYk+)ujKLuQpvKEX`cfcK6$ZDd;6Y!L~nu zph1`e0I_1y)b|>%yT8fCk8L;mjb2kyAmj%XPbnWE-I1Tp&esyBZi4LDx1VWZBhyb- z=`V1tK4BTzdpZ?yZQBT77TTO!c!Un5v2ZJso9SbJKxC^?$w9=`kzb)`>HLk|?-;)T z{m%Ig@X=_Y`$?B}aK5w5{*u1J>|uJKMrTxO;RO!>GmEYfnN@?a1GzV43=q9--^;!9 z?TMlnu3h>Rzn9O0&&1=hGQW%TC{)Jl+Xw0{EFjsF?XgFe$@f@KPe#s1l(#IqhJy>NYv%kxClP+lpfn$kF8Ty(M}qx z0tj!6Sgi&3=8?J=&x_{c^JswG3ECoH`t6nFcGdv?k<>w ztu|Vyb^YGkUjssMB$~yAJa*Gn`e-MTBPB=Q|gyoQy7l9auv$K7+H~t`RTe zWP9$sNX2;O2BBc{VV07hXJNkEX*ll`Bgj|t6%+GkeY)A*3}6L~A;eI}^845E@CcJ{ z{P5pRo=6=!N>@%P$|r_IQYK353q@*#(-e4>)%zpfwux{#*%746w2?BU)CD?x4Ps(torX@g8PH(Lc=>qzCJ-&jc4B{vJ!t21)zrE1k2Yo?OnBXQ@o@yO9lNiej&HtKe*CK92;TKlP*~bvCC*%01wUGrFo=| z(7No!HrT-hbRKqQ&(9mli|$E{-U>CRVPTIkP9DTeh*tZ$%Aq`Ez?F>G0dyb3ektS} z+^AzV1pAsl`EGGoJNETvBp=b@UDGdHznhfYBGIxwe3U>~cm?p`|N6@fu9&^MHDboY zd6Kt2xBBmwDOaO^FnrP`jDS{KYn;n}CzW1>D;1LgG>{TtsQ-=Q0TB%2IwM>>b?K!>_g zr3(iv)$lae(p`YqTOxf44~Z$8u~X{%31`l}r*IwZ7w@ixaR|^oOm5tcII#vXa+6mF z@B`q-{L+nF;^j7qXV<~1-`<-{thh)a8ow?t|g zJGfx06-ez5um$0}Tz_0=q_tt^Qn_ApKQS>-|9FKof-@B4KYBg?crCT8saDOFQ9<*(iWKLMTHBkGEBm@x4k^Q5lhXl zolIf!uME1YY~mU{^eU>|d#~RBZQM!%5+%*Q!+1zY&cf>F-+^|%;a6!HP$ghn;sj61 zr}<0OQl6?K?RDl!GW)(+Yvp638*jQht!pntE6B?9XU&4xm@lRzFOFCc-p>lQbck*n zAPF&Ki%~P~$ISB(#e*UiIY3S``cEA+01_bYviTEFsb; z;P{5>+g%Jfd6BaVQ%1l2wu?%xb!jC9Gl7` zTz%(ndY*Wg2i#bly57w*Vx)!6O9Lr)_};goZH&979Dg@1Sth$hC*gK|jJ+TgdeeoS zX7t#?&`RiHp8*o+PdbE<5n$%|R8O>3l|UHsKWHL879qTI=Fq+`{0BTF0u7RmICbY- z6+fdu2^<)|rpOzTUVxrV0A-AS4eAR1UHtmj8ig%QTGHa4iq~i&A`21T+MNLpa*YMK z)6#YQUTNj#*0t4#oII3X-SJpFPY9F$V*2GGRQy8;5s%;ap(|uaz8FissE=>HH4`Qe zdo>31>-#CWY-XwcB-+nwq!m0U>ZGnzTp0)I996j-|G9lULPK_NBkuk}A2$o{&mnp( z7Z>Ai)9gf{1SR7p-Tu~9AG+?|<3qTpjCdHr)Dlj*0$E%$%sA@8+LWrDrsFT=P?WYc za*d00M?*>dCi$9!q=wSJVlvVTxI46ojrY>y#(F8cv|9ioqgZ={gZ`_ z)z6(vrpM8JTc$}$xi55bjdf~tPPaZ6EoA8wa%V(!o34CrU5;y=`P4eEtv^y0;y1i- zoEhKxiwKAsKm|oe7JwgD@CYe^AAMpaRyckhc?F;N!a#`Yrf2}!yFD_3c~sH(OcAL5#{&SzkOev8Wqv>?o8!CmnbLI1U3aq0cY;9>7{ z>@zZo15s3XTL)44--T0s(qk53CtJhkl33h^KF;u`&Rf|JY7Yd4O^?-wvA)~O3gP@7 zIf~NwqFD0RBGeWfCs;)Eb|2L5O4-C)@HpEG$6X`^4uG%z`yrB;J#YO6W*|9G_sGs+ 
Date: Wed, 18 Dec 2024 21:15:14 +0530
Subject: [PATCH 11/40] [Term Entry] Python SQL Connectors: pyodbc

* Add pyodbc entry

* Minor changes

---------
---
 .../sql-connectors/terms/pyodbc/pyodbc.md     | 108 ++++++++++++++++++
 1 file changed, 108 insertions(+)
 create mode 100644 content/python/concepts/sql-connectors/terms/pyodbc/pyodbc.md

diff --git a/content/python/concepts/sql-connectors/terms/pyodbc/pyodbc.md b/content/python/concepts/sql-connectors/terms/pyodbc/pyodbc.md
new file mode 100644
index 00000000000..3596bad0f66
--- /dev/null
+++ b/content/python/concepts/sql-connectors/terms/pyodbc/pyodbc.md
@@ -0,0 +1,108 @@
---
Title: 'pyodbc'
Description: 'pyodbc is a library in Python that provides a bridge between Python applications and ODBC-compliant databases, allowing efficient database operations.'
Subjects:
  - 'Data Science'
  - 'Web Development'
  - 'Developer Tools'
Tags:
  - 'Database'
  - 'SQL'
  - 'Python'
CatalogContent:
  - 'learn-python-3'
  - 'paths/data-science'
---

**`pyodbc`** is a Python library that enables Python programs to interact with databases through **ODBC (Open Database Connectivity)**, a standard API for accessing database management systems (DBMS). It provides a powerful and efficient way to execute SQL queries, retrieve results, and perform other database operations.

## Installation

To install `pyodbc`, `pip` can be used:

```bash
pip install pyodbc
```

## Syntax

A basic connection to an ODBC database and query execution with `pyodbc` follows this structure:

```pseudo
import pyodbc

# Connect to the database
connection = pyodbc.connect("Driver={Driver_Name};"
                            "Server=server_name;"
                            "Database=database_name;"
                            "UID=user_id;"
                            "PWD=password;")

# Create a cursor object
cursor = connection.cursor()

# Execute a query
cursor.execute("SQL QUERY")

# Fetch results
rows = cursor.fetchall()

# Process results
for row in rows:
    print(row)

# Close the connection
connection.close()
```

## Key Parameters

- `Driver`: Specifies the ODBC driver to use for the connection.
- `Server`: The database server's address or name.
- `Database`: The name of the database to connect to.
- `UID` and `PWD`: The username and password for authentication. They are case-sensitive in most databases.

> **Note**: Connection string formats depend on the database type. Refer to [connectionstrings.com](https://www.connectionstrings.com/) for specific examples.
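Values can also be passed to `cursor.execute()` separately from the SQL string, which keeps queries safe from injection. The short sketch below is an illustration rather than part of the original entry: it reuses the `connection` object from the structure above, and the table and column names are placeholders. `pyodbc` uses the `qmark` parameter style, so placeholders are written as `?`:

```py
# Parameterized query: values are bound to "?" placeholders by the driver
cursor = connection.cursor()
cursor.execute("SELECT * FROM employees WHERE department = ?", ("Sales",))

for row in cursor.fetchall():
    print(row)
```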
## Example

The following example demonstrates connecting to a Microsoft SQL Server, querying a table, and printing the results:

```py
import pyodbc

# Define connection string
connection_string = ("Driver={ODBC Driver 17 for SQL Server};"
                     "Server=localhost;"
                     "Database=TestDB;"
                     "UID=sa;"
                     "PWD=your_password;")

try:
    # Establish connection
    conn = pyodbc.connect(connection_string)
    cursor = conn.cursor()

    # Execute a SQL query
    cursor.execute("SELECT * FROM Employees")

    # Fetch and print results
    for row in cursor:
        print(row)

except pyodbc.Error as ex:
    print("An error occurred:", ex)

finally:
    # Close the connection
    if 'conn' in locals():
        conn.close()
```

## Use Cases

Here are some use cases for `pyodbc`:

- Connecting to a variety of databases (e.g., SQL Server, MySQL, PostgreSQL) via ODBC
- Executing dynamic SQL queries
- Efficiently handling large datasets
From 079265992fbbdcc9891a075136cf9abe23698182 Mon Sep 17 00:00:00 2001
From: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com>
Date: Thu, 19 Dec 2024 16:48:04 +0530
Subject: [PATCH 12/40] [Term Entry] Python statsmodels: OLS (#5739)

* New file has been added.
* Update user-input.md
* Update user-input.md
* File has been modified.
* Update content/python/concepts/statsmodels/terms/ols/ols.md
Co-authored-by: Daksha Deep
* Update content/python/concepts/statsmodels/terms/ols/ols.md
Co-authored-by: Daksha Deep
* Added the changes.
---------
---
 .../concepts/statsmodels/terms/ols/ols.md     | 80 ++++++++++++++++++
 media/ols-model-example.png                   | Bin 0 -> 32189 bytes
 2 files changed, 80 insertions(+)
 create mode 100644 content/python/concepts/statsmodels/terms/ols/ols.md
 create mode 100644 media/ols-model-example.png

diff --git a/content/python/concepts/statsmodels/terms/ols/ols.md b/content/python/concepts/statsmodels/terms/ols/ols.md
new file mode 100644
index 00000000000..6dcb36866cd
--- /dev/null
+++ b/content/python/concepts/statsmodels/terms/ols/ols.md
@@ -0,0 +1,80 @@
---
Title: 'Ordinary Least Squares'
Description: 'Uses Ordinary Least Squares (OLS) to perform linear regression in order to reduce prediction errors and evaluate associations between variables.'
Subjects:
  - 'Computer Science'
  - 'Data Science'
  - 'Data Visualization'
  - 'Machine Learning'
Tags:
  - 'Data'
  - 'Linear Regression'
  - 'Machine Learning'
CatalogContent:
  - 'learn-python-3'
  - 'paths/data-science-foundations'
---

**Ordinary least squares** (OLS) is a statistical method that minimizes the sum of squared residuals to assess the relationship between independent and dependent variables. In linear regression, it is widely used to predict values and analyze correlations between variables.

## Syntax

Here's the syntax to implement Ordinary Least Squares in Python:

```pseudo
import statsmodels.api as sm # Import the statsmodels library

# Add a constant to the independent variable(s) for the intercept
X = sm.add_constant(X) # Method to add a constant to X

# Fit the OLS model
model = sm.OLS(y, X).fit() # `OLS` function applied to y (dependent variable) and X (independent variables)

# Access the model summary
model.summary() # Method to get summary statistics
```

- `sm.add_constant(X)`: Adds an intercept (constant term) to the independent variables X.
- `sm.OLS(y, X)`: Creates the OLS model with y as the dependent variable and X as the independent variables.
- `model.summary()`: Displays the model's results, including coefficients and `R-squared` values.
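For reference, the quantity that `sm.OLS(y, X).fit()` minimizes, and the closed-form estimate it produces, can be written out explicitly. This is standard OLS algebra rather than anything specific to statsmodels, and it assumes the design matrix X has full column rank:

```tex
\hat{\beta} = \arg\min_{\beta} \lVert y - X\beta \rVert^{2} = (X^{\top} X)^{-1} X^{\top} y
```

The fitted values returned by `model.predict()` are then just the design matrix multiplied by this estimate.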
## Example

Here's an example predicting `test_scores` based on `hours_studied`:

```py
import statsmodels.api as sm
import matplotlib.pyplot as plt
import numpy as np

# Hours studied and corresponding test scores
hours_studied = [1, 2, 3, 4, 5]
test_scores = [50, 55, 60, 65, 70]

# Add a constant to the independent variable
hours_with_constant = sm.add_constant(hours_studied)

# Fit the OLS model
model = sm.OLS(test_scores, hours_with_constant).fit()

# Display the summary of the model
print(model.summary())

# Predict the test scores using OLS model
predicted_scores = model.predict(hours_with_constant)

# Plot the data and line
plt.scatter(hours_studied, test_scores, color='blue', label='Observed data')
plt.plot(hours_studied, predicted_scores, color='red', label='Fitted line')

# Displaying the plot
plt.xlabel('Hours Studied')
plt.ylabel('Test Scores')
plt.title('OLS Regression: Test Scores vs Hours Studied')
plt.legend()

# Show the plot
plt.show()
```

![Regression plot](https://raw.githubusercontent.com/Codecademy/docs/main/media/ols-model-example.png)
diff --git a/media/ols-model-example.png b/media/ols-model-example.png
new file mode 100644
index 0000000000000000000000000000000000000000..7de8a60a43c3c02717e0af85310a6f5d406d87d8
Binary files /dev/null and b/media/ols-model-example.png differ
zkFd#P8H0v^_u2}h*sBUD?=|UTS3kZss(3_{#FwWZcFj0$qNF4VSa(VP%C-nMR{L2> zD|Z%qv^ZVG@7fn6_Ez%+xzB2Mzoh5V@f~|b`;jN-iq*n+)Ko61*?t zDg%zKv8G%~6T)r=OL(p1M#y;=P0h`TzI*p>0EuQU3%f7+^zcQYmUR7`^oI{03TIoi zh~D`ABs1L|dmLC}B8<)dkd758K!S=Vs}4D!I66W>PQKmX8NdGc&K5tq180$QYChlH z%+^f1N2n|}H}^RfZjVvm9svWNsmV!!9IH;1X%-Hb-()j?Uj=tfdAiJZI@c6?UvwV( zb%WH9a+%-P4htdbvxtmEf30+Bc4Ta%Hde+0xwPg;dO>f7_lXle)LXXX8Xe79?hn&o z?aG;)wCU^Zy)L`(o5I)5eSTa-!hYaGdUu}v8?tJYq;Z!O{BPuVXFl(dXZ>DXR-K=2 z+_`h-EUuI1GWD|bV{Weaavh>n@N@v9;9y-`PI`NxdroD@0pFqKv?x#AmDQofq_EP= z^UH;~y4S8nAG(zK92q&SbSck%P_kp@^uyhqv(6%iP=m!v*!1H(ru_?7N^Vo^Fc7tD zHPCk+`pM0}$Y_iVqh@KDDmdU>+`hi%DO~8BZw5V=o=v`X2rqRZb-^=g25lXv zd8BV~a_ekWU#){5JyM^`M6&pf^%`n;L(9akXZ+!A~gsOB#2 z?tK#zby^xSr?Ke(8q`2?n^SAki&y9NA}tO6Oi+!Ma(;y3V!O8D8YyP2Zf5p&H7OH7 zODpBlvE&O$*X6tjh=vco9HiO0^(B%j%?K)tVximYLCtud)oX*bF|Y7lIJH?`t8S!% zf&$dM0D1f7G5oPL$C~5Cix-t{{3XLK+Xc+(_xaAI7ff$GaVPQ<)~W7=Q(b%eM#^27 zH1xIW>FHU|n==I5vvIKW@N3(5#JP)X;eYa$XJFX#4PP1u>J=53%@DUSz09h3&^+rN6VFV9a*WUM1TwT2F7 z@o=ALyXJA7A#Mvdcb@;xF%M7AJ#t>s=p`+RHMMXVvlh1 zLf&g01jwk3ks7Qwc0^SP6f9VsYez^(6?A#d>ymI}x0U>OrFaOP!yS9Bk2I;2f1R*b zNA-)OKj%drzu9Z^j!DnuHi6@j_z9r)zD~9Z_saHYw#$8jZqFWNfZ(JHN%ZMOONZU( zM)UfXZU6l}XV?ENWSlV_1mITW+6VFCaPSgQg zk4>m)YVN#Cwc|R<2P5CL=ZG%g?OdrjSLH|j9$mf565O*cVg}x{RZowE8(h7=b(gsF zIP2wi*KD=Dmrc24x*pQ7N!pD45^>khsS4+x!QR%R7`Y2wF81>L_3>5}fdqkB({!@0 z|N3x~dg(D2Hvr=Cn?JceAIX$Gj^0OYTION57e9jQS^V8e@4R=c5_a|J`f^UAt)G!W z7N#mVZnkA{6UPRiuiy=Hm>nJz_Zw#5d3~Z`43E@?Zsbm#23CZuqS!*hu3xEMAv)UL z*mRiUNsX(UTkpt-dkcT3jQgBss5gWuP!=fz6MQDwh|kXg-seK?d}IH&V+ zxoq9`@cGwU0L}mxb{)O?F%T_;;H_WBfgH;N_Xv|x;!Xmt%dIUOC|+ASqLp~I2ld$$ z`NP@MI#f#h<->=nFGs?zQiRjn+S;~SiatAb_1Rz7^;PC2(>TvF!R#`lEnUcWXCARC zD=GOEI!)-UhS#+hep-DW%06)#5e__UjPgySBP2(E^i@Az$ZvMv6ID|Vm`+Vi-SZ~S zi?})I<-X?-}= zb48~PJ#7@}c!+{<_+oMy{)(YT z_DGISH?9mNAT%0t#)*;M-YEx#E32G1LGqz);yZ}j31 z{fTs0w9vcl<*QeKOf+b#HPp_u4&X|iQ2B@~?!c?5czF6(~<=YZ}yc4n|n$}-=khum}N z-+rQ{SQ<__Bsq@AW;o73vTXktgimza&srh{ygm1d%BVcB2bcm)#yv0}0=B(n3Sr6K z!6QEaW4l%fDY|I7tLasEaH}EawE*eyd z)|4S@{xE)hS(R`;&kFMuMRr%plC=+43KrJ6yGFt|lBSqf(BAq#e7LRDmsUfw)HpCx z-`c_9!>VV%<)ri6NM=1YW2ky+liXoXfKAvvCQkyJFbkWUy<=2SF*-7$3Pko?(CC4D z1U4v4-+Q&mi*}qVyZhm50s%jN&TLZs{4>%fLtv2ORPR^H!Qcss@D7uVez*P*#nz9v z_FnyHnVFuCMiBs%6SW70k#<3!o~2u#CN(mwz2Z+^n0w(n)Gyv&OhZ`kRK3DqFI**L zE7eAYwb8`-E|Br4MLvnA`WvFizMqTh-mUhJZ7r)f*Og0A4G0S~K9vCf43k*hOZ zNav)Ym1$L9p`7)gor=k1^hEAKcKG`8fvKu2=GdAwt^yEx9eXpu@xOl(=3eZChPpP-xXbT&>lM%Hsl0KCa?bDAn* z`)_By7@Gfn@C}x2S@+TDWw_5(BRHbK{*=L>8f?ePh$}<==4A!);7_^>5D|Aj%Vp+2U|Y`rGpf z0fhqtxazBucc}<^0H-s{bJ>ZYmLBpKJ?~#6(CsJZiQ3o7VSa&uuf0~6h&V)FQUA%& zf<@HQMPuToxipnN7PrfL-SYvm0rCi-91nU9)P{Qilv5q{b%HLF242f!VtO7^r_pIB zBKzk#j_ejTtp$F$|M6D)ui(qz3((AeMXH*bnK^Lqpx^J`zZL3YI_j=IVwGSKVH*3nr6rs=)}G6AZ4B)ivWxeSe$NAmp&h*Gy}hfULHLUOk%T$-kS93Nj{#X` z&>+$?F!a^0Y!X0QS+urfff^^DbUq>1VTc~Nu)MVNz!m>%D%*Zq@PdNiry}eMI_+u+kA|7u3c)5v!YI}wp701q9o} zG~jLpL0$-<^H9?iT@!46tydAuAqL1Ih!9J5o&3C64JcO)6j~7e33!s2#lB#wDF9n~-P z8UXMtXEvhYkhP5UoV%ihL(n%cV1kG-!_hE9F#oZMMwGToWBbcHJ3C3v9$c~saXIKt zBSBzB;($gG{T%Xd6xR3qi|XWyDQB?(f%*CQM2m@t&s_vd%KE*m8!Xf!?~mN`BZawP zV%;7Mq(q0IpVS{$QDzgt$|NEils7ds5u6i2O_+f2aF(%JU7A$`gkb1&0@$Ps{=C;{ zm0fnWQLDVFstO!j1eWJH)|aPM?KAic8OPoE+I$|J?PTuM{X75u39&G4{CFa3z5 zByG7=)Hv-V!|903uI}*f-bd ze!-e3F);!!Ok(YuP2VG#l-JbMFdn&79mp*B1k zr=GDy;^qVt!ZwA-d9PjA6QGX({TVCx0F+VB;Gijoi|`>8A9~y2#zSgCywFFK+2p)d z-h~}LLZ+gC?;{)8QP9#>cN+KMd0J5sy{h#(docs|34cH9x#`JCO+q&bt>k*M93Xk} z;r~Z7A>@{~wDsOZYQPr6ustx|pJ(^y=e<-;&Y=_3^)dqGHcS6*avnGk!xmtCQcq8B zVPU~+`(e6}~jjv9SLN9 z45H^8z7B9fvHjyM&?MDt!8ZeYVw}5B8<;?tqcJ;WW@hGgP5l`iGxIk9k7%|t$$8-; 
zLuh(Yq13}%AS%7Oziq$3P8QL7wJ~Yl6clN*wZ9uF{{jJG7rw0N8JoCnqO_CJ-~uPG%8$|51j30wkmu=|Z-f_MCDVJ|u4Y#dftatl7!iAxPeEZJ zvT|neGd^7v6sx!T#j@wlox@>yiYnfKf{v6?4#t##!~6H|_h+wTF2%DF2-&PDc^5j| z0mSehgy~Ve0&!3NalY$DhGRMlJ)1|mTU$TKbc63s(kqliLrWAXR!IjxY6hMRw^?)Z zrsS_Ux<5sIr2K+{xcT^MZw@kHml*bLqoDE;oU+&+oW;1;cFvJvbK>g!?=hjXynn$3 zPJ;WN8hu}P{e4?&rqO0G>n?x?o;e{OUenN(I* zwv&x*Y*IzF_AS+DOF9Qq{s2K9V5grPJa-56ILo~0FagnkvxYR+P}CiU8~u?$&!bTU z+%xSq2{dfYFvO?HFJHdAva<3QS;EsD^f`FAMD5J$#P0n5Z2%hIq8q5o0gbmd$R4m4 z9N;I;MhI;Z6U(8zt4hNx-~$P%EnLj1LkV@t#@V?7bUd1Hv_89*=iZyvO)vBXqo}B( zC4X;H^J&DL0mm!;PZ>q5>cYJR6ylS{rg8CEMzkd=OASTu9Vh{9&Zu3#oV%^pE90QxIWb&9TAlD z>Np%8D|D9pgUi%)oLM584R(k=M|2V7&i7r%be<3h*e!5j7T`;5PL>4jr>vo&VQqST zzIc9H-Zs+<6neQfw}E0Nzii`gt%&d+?CX;!Iv!!uN2TFwym201C}^ciP^9-j9v~2h z*f~D!3?G8pMJq?o!t%ZM>q8J4p`P7@zyeZPztfSH+)Q-E;Ctxl=}$!n-EcvB0TB38cRbQhsd*=z;p)}N$0k&;OaK7|&g^=F|w+(aeg`D(trqf>T{{{Dm*O&<34Q(F#tItQtjwwK>P4rS=09 zmP&i%s!oN|r`z-i5aol~rJ^|p^gDn@5o&p-t2&<01_Jl&RByTVqScr18k4}{;){GD z_wU_fE;*|6;oy%k=*^-&SgSx_9mqKiFHdjt6=IWg5WtcjFK3>CXcYlwBogP2OS~Bw zfvCI^uG6nem#{WbFXVk$i~=J>Z$*+-qa9@ZzMWa1)6E&1d0NM4ub$79MFl!IPbxohJRm=yEIAbAkVQmi=uLug=Z`@hl6O>}Dj785{a^*M5#hUUWsYh-E^`kkz-mipnsrs|Mt z$2IwmAAbT&elE*c8RXWM?b|7TBTqVeaMVRh$Yb#cEjBhb*2TpI1j(QLabC_)ZiUn5 zysxf=ll}DZumPNVtwcxAW6PiL&>=@I^Ib~U zyQS~^yMzpa`Lo|u5_-s0kX~t)6%-ZUJB~DS%`0nErgf=~nU~;97`<;BrLG<$o&yBr zvJMKA&aM35iJ<7*!Zu zo1-=d9i1WIjSO@#9Bg zrCooL2%Jw-Q47M8kl@FvDju=rTS&u<5OVeyUoDV8VCB=QrSsdi2=H)~9o2rEvDdG- zSa93!*Ih|kKPd4|Use=_eNge~T_&|z4z+LohJJ`$+H5NmpCxdjDab2R#E1!sx5;?% zx)*5FJ@!yg+}n`N;6xyd8aHTI#SO|Iu~}TZcC8u_5+rzuT=>R!#`n$>dJHI-U2Y@C zi1rPuVv1A+$jl=kP)}46R#}f$#7dI~0JsUHm$}t26(3N{I}8qAerE|PU?}4jLWpR+ z{*AQMjdtWUuu8ks0|>Pe|E-sZ0-1;a;NjvbgCU@^G3{zgZhn5ZV9x>+y|ZZhYS0Q* zqwK5UV5$L#k5Owv_aYjPpoD;#H+-61FKYeuA+ve4_-an-dcXI2;jFc}clV8C%7X_~ zQYgPMA8E*Lqvts}HF92{!j^k&cAvGW%QOXel3EmB#gW#0%$49;W2sJ(_KN~p6kCKiW|+^xtI5dKvX+% zqJ)j%5l(F+w0>qg#du58ir=M)??GY4D1fd=xzTp?+()yM4HF=E8$Zk+{po%~3SG>cMP|9y6 zXSceQn=t(N@dG*{{jOb4*3S0uC81Z)d|46W<4_`%BKgX5QW@JO1%e=&Fi7 zW{DG|3vpmz+WbnfwEl(7$nGacsqF{96tgQS^E)rC6~*w#JKo`qy9d>|_MQ!}#C-pX z>)$RmueCNEt8R84HF6D&!SbiuReYk$zHVCc&M<(#t=hEj%-_aJ^I99n>`1{_?=|mW zryDkVdIeo@sw%m9dky-PM!ctZ)lal#WjG%BHxza=ltXg|;fdpFX?s8UuVHc|z1c3D zPrysMziy%T1`tUQIRF6He5F|Gbg*;>vJ=o zu3UIaYX5k$E;d4Rk*sOMOm|*A?qg)0*K&={_z#NmNANktuv{|V=pQ+$({wiJ=6{J0 ze#{%()bi%4n5p7Z^F~S@JnXvt$nJtw(c=~-+;a2X)@qiK_i>}-TqnRqGmG08e2TF~ z;zQ!(&XbXrW<_rdxIxm+cRB{?=G(V#4*nXj_er|`93BBH(|lH~(EaMF=iv)BPR2>< z)S99|F0_`SSMRg{v1o(SJa8s3BYX!062gSG+uC#Jq1ACM3Vy=#8>pzFB@bG#;>044B8&I8o}fd zZVH^9S4cb9DDhD-WVyo^-ugsE?FC{OM3E{A&rwt08Ue^>`5_^@=-RWXHH*87S1z6< zzR_^|wZh1?!J07MrV0aMk!YnSNp&*wtcC9LRPitqf~I`n+~w*^v6+VS7)(BiU|R`C zA*yf?oR5IXNN2kVlLjFa!(D|GSxKlnou5Qd&4U>>6((L-o;%Nn;!Z@!ic8g*gGHGdp?~GKVS6Qh2;jwZAukAeADu`_?M-iQznV~S4{QVh{Ycz7{6!`Y(uT1*&*l25=k~xM^#{#@Av}3V{Sk^Bo=;F>L8-9GuKv@4eH4h~$eR7k zWjm0w(fAcX5+Hc}{QV7qcCHn;gg~!ISR}&jUWPzQr9}meh3llj!lZm|+HM)CpGK3R zCD?UE@GBsrzaR@_P#qYB0|cE@32YRgRhfMdM?YY@CA?+eUljp63D*%}VI#rJVvMaM z=uYX$5{i|jk@SMu=Bqcoc7O{Y)c3P6Fd(|lp+un-1qYxR8Fy8IOa8eSExSj(ir3cN z_Yzcl!Ze<&tjIf(=wmka_V(xRg+3h;=HXngoE;G` zdo1F5=tPVp$e7;~NclfM#}pM?zhQwTIwZCC_sO?cX1puw>R)6y-TJ8HbzWtIYRtNG z<^Au)!JhBCCg&bR2)}T(NA5OEqJnXnOE}+cm)q=}ce-i)Z@fHPvsEkWbT{nRm8o+R zIqVj3l1(h3e`1^snhr0l|5)U{X-=N6I4s4tLCPsV+asJ>shX~L|K;g4o{aLK7`BUv zFIeIk12}FkBn$J%J+<7w_TF*Xtc8=)r>VNBFkd7DKD$dl-&=%-#xb7Xp}93LH}^Mc zf1LASaEPtvUC%gG#=?+`ECU*)_s^e%n^Fo~FACjXc)BkvTYn3octD#I`|oG>Lq;uf z2ijg!g6z51vSIK}zMETfTR8&N$p*a4M zQmmDy0BI5;Ip}Pm?*No@OOuZ2l;m{l1b?cMv3DbPnC@35Jl=q&<%m_Bxa07iy+Wr+ 
zQWoX+L`Omhu>MwiMJ<`3csgt2G)t^OQn(P*`St5*{Y&6got(z9A`4E*rMqwA^sr~|9QkSWPopI&(Tcrpz9;ww6(QrBk{ktY!gE6RR*M*a2Z9k6Fu&- zU2U{_-7G#3TEYBWfK1P;EwZN2#nysqUeYm|?aH&B##96NYZjK5H_-T}cHWeNLjE$s zR*0_s(3Pxyw%{1FnX|8U0($Ts-s(30kwV1Zj=bKRbHm6pkKY zawRk;;M23Q(h`-^&@6k<;S*edvy79O*+DNaFQQ%M)BR+FgwIt~{$GX;&V6O)_MEH2 zxph&SCG5YL#(fUuR`L4jg9pb-345)$5Hcwq>u-Aw|LXF*DGU|<&*B7)sG!#jR`SLb z^x59Mdp839bCey^m4zvS55&sbBHt)f1+z-fqZeAP(hn>5_fv8cqAFUUH*a}R>brj* zN!03tiDQfAk(*ykqQyroiB28tCAR!y`bc-N9OM=$$i&K;C zy=>Rj)rFN;ZORtZ{TeK8FnEZ&G1?72(2%>&q<7jka zkAkiRlfZ%se#?64ah{EpTMOD){%Tfge6leo{|5<; zfu8;gtfk)oq!tjY5yx-RgNwF5$OJJ1wHBtC%P%X?xCx@DWb@OqOFst_K*!Eri~5O_ zxC6Q*Sk*&n>52*p{qJhye0gJiKHu$WK4#WJv$jBye9_*?xGg{{s)XI`@BZD#bjs>q z$h%Gy&TrYe^*X>BXtb{{I?y=_mUBgv+`E6@cDzGkc^a(Y-^L^c5Z@KZzi0tKz1_(< zQ^QC{XZq^{wWQNM2HujUUFCtd2jecM1v7|eGdP@i8mfK)*wc9z(`F9u?bd~xGBY(3 zw^zq(yO=C4m35T6!Qsws!074P*!xzU24A;YXJF1gYt25rTt!Uv(GFj+$Rbuw#TcJ)7YpfSbZOmPd zMT1_2BYQ%(Yp5CEu@6BX2P!#sRj0C!g0|!ln~IkZ-3Ew%T{hL%+$tL-dZo&9`ODuQ z&Ej&QH#ai6^1B4pv_~~ed_KOL&&F;aYnPHboVOWd03BUk)LyaFwssEh4i&HDi|DNu+$a$J>R(FJ_H^V}Wj|Y<&*2eubaXV=p6>q71wr(}tFvRi(wD7h&jk&6|bzF8`C}OyLVGL4wr=M>r|x|kna^eVGyrX8rqn! z@vmHF2Cf{CP*jxvd`G~DN_y`3`&XgI<2S0?NK4Osrqnl|UcO3WiPz1?>KPJqL0s-q zE<4Se?@jfk{n;;XP{T4g?`(A~G;yOdX=|D9E}f|>{ZplxD^MLxeN0DO>ag-=V%0-N z2Zt~I?cDpxAu}}2m*eBce(e3v72AqA?%Xx}&Y02mdo0Ir*8Je5>b4MN$@2`)p%iim zn`pMOOiauY^j?dnD}%*1+C$?V(?r2MI*<3I(-lw`P!Wa;D1G z%nskKddJqxmFP<=UmJ~%=+ZGwn%U{cuO(I+Y*8J7{8eBtgZT00e1Zx=kQ5cd0|>91 z@8yTa+b_z(v;rqEI`aeK;?KbkB>cGn?UChb?oM)w7l!hGJNB@M9^ZasWml$%#nnoU z6FyJb3a0}?Ir5IYX(s6c#=)>qpJ6DE68s=1had#uJ1co{4Os6vIBU~4N&G|170_!bw-fl{HD zf3Q7oWbOYA+!V9wj&O1&@ryPmFG|>c(Rugzl1W3zzAt$v`Skv1?cU|>f1pPc92rZp z_!X6+;usypT;oc5VlE4^$U(RbfqZWOSWmq)=r79w)g}9tdqGkw z`Z`=>Z+K4l(sG!frebd60W_X^_$`;_C+Y$h@^msHi;KYJfEccUz^K|AK-Ixx62vAm_WbBt#b-DAS;1r*g zVDKB4$8xPF9ewRMEB`dgo`8jqm|nto#ZDh^Ow#}qL@Ndl5zPrulVg95z`+OZhp-F6 z@t8oEe(5VvL-rgudrVYh9p%SpIrd7OUw)3XjrzALC6JR47IV1ICMWF&fl%ihj> z$Ny1-H^9j};13tn$^l%;--;h?2r)b%VP9^ziavHy`FgJuBtbA>tXRcfV_RE<`vuQD zS#I&#lixMNcJkDRd-KYP?p;{E8eH~}D;!V;!D$2X+}Ck4Ejz%&zDiH^Xz|b3swp+j zrn(p@F=%*308(>~cxYJ7BGI`dsQ<<$eS{FgQf>2B6BY>8^mhN@QG?*L0C~ao^}&-y z#h#3XmG>raSPpBqOP<#eO?8WTJ3zc2kMXVhi2GhnX zEp^wI?yN8#8PNXVV*6Z;)TyhyZ#bpChH z;V_qpxo+J zoH!4>II|)rpf0)UvMnz4Hsrm3 zx+1+IF#0Q@Ur~sg00wr>-s`tynq^&uS}iA=uV%Q3AQ(r#DUACQ#jd}$Xm$6}-2Ifx zFU-y@j7ypQENeUGzrwq*R%N<**Mt6c6y`r&Ic{!@26-{$=m`Qpaec6kYw8#Al!cR|Ea&t?3Ilg4wQ1RlbI*(^lax&SDaH;8KXE4Hp zxQjNXjo<-(KEY~*j2@O`ahrdTXP@>5Dxstd6W?fKhQiOg#KUF1Vw3v)d2?Dnuc`V% zKZACb0(NaL@o1Imfwwb_3?~h8Z83h7%99=`62lgu^yxe-J)yb^XHM0{X4t&Q?nvHs zHi~M~R&kM|bf&~TDmi7`+Iu0*oLer>?(=iO46~3>j>?Zj@EG4HFZugJ^9{{CUwO@) zmcJ+|sIOD=how+U&ZI9 zU9#(}OaIQL1%F$&H@_qRO950#Lelos)zo~vYKl$~Lxw{cb*P;f8lgg!c?AkqnOzDE z?Sk1)S-MKA-rnC08a+W``$TtR28`V?{rp$lvGGCmTjW>a|3FXl1}+?eJ3!46g7PIb z4!`968k5qhs*y*3ASn@R0$)5*@O?B++*d?-cphM2%{LrWH2lMptwxheBbTNdlEOr~ zfnj8R((uWmLX0Dhf#xxcKWNT%10)s$Oeh;@u zc`!#E#?gowSCD2ZA6CH7!W#0S7|*TsX9=08W@+})%lDRh(A2zEZZ6PUoRN|e3fW$1 z5-3H2?ZiER(MW1ERF4>V(7=Ev9=;@dHVhlUg3@(RP{rgz2!1qI>5X$^n%>ebZz zJTHX!(}TzZ&Y774c3cJjP2x@c7n4GXbL9LVaKtmR+@9p=1J*NdqG*}5`64EDFjXX; zn(sQpLrhQM^2G4pA$1j4R1RMKc$9DfI^;vLhwJ9hHa&&1Hl+1;y8G8|2cwhg5dRil zmuKs5Opg6Y(RxDzaneTHdGhy3_@_pzrYbc+PGJ0x@Yx)atpjyVy=_}*OUt}m#Zk`C z{VQ`(P5s&CyTHW;{gpO)fZZ^*`~Pc>DN_3flitK$2e()qDe@2~2vw{GyFr*tSH%I# zOu&E4{lP3C;rK^IZiCqh+URF^mm%dREQB zTSkrXvODlxtjxn0N@BEa?ZHJ#cT_~83 z75QRc7u(Fpl;g;~GbCTtmiI3(f*=?g7(dH^y9tm9E>;5kEH}=}(ZD!keg;)#Ubz1p zrbf~)dE=M23tSC*6&BXlj|TX0t4W>Ykn@TLFHDgE2NDra_H{CfpfE6g@>yonQBF?f zT*oT&X%>l}$+ND%$wm&PXWbxDTH>=_9=M^Vi3vKA*XrzLSlPD{+6Q5U>HLc%`JRC{ 
z-cTMi?vcfzhRq)>&rKf)-Fp5SMI+46V24fva7nmK?hBTl-a^7=9S{%@$nImHtBmzp z{OXpvy5edffKm2%Pj@tIp*LRVQeo`awlVqQLpW1>iP2^b)$U9q-`FgAy2W&>u6r$d zcaf=4=sE7mt!d)RzDG;aJDf>SujlM#R1DS;&h@>g{h#7|y=H)q)A?!W#DqJ`W;4)j z!~`vrR(teGBony(fypNRpdFd+3)aj|e!Ak~_T|o|$v@#LUJ5^z0vO`)^~97h>=A@x zdeq+R^y$-tj*;l*15xT%&|U$XeIKlLF!6tpr~O_#-13aLRtv{24;qrU*fP7P`YXQ4=2db>`-b3%mrl+Qq9WKox+|OqjS6-l2QQ)fj z0S{wDi(8Y z>AvzSRq4ev7A>dz>D70ra!IM!GPGij+3(@QS3v#*2aFqJ1~P}3R*kWSC%Xj${EWTc zo0i(weNIqCPw%mO?1SYHz)DscV-N)+Om@(O-oluSEZqaJPe=`z0|`c8CESP}Qt5-B zlbq*NGkh%}1X5)6-g2zw~=a=s;f*-Tl?!C#S z^|%d$3DkUyjIop&f^u$dD^@&1+d3JL4ZPp8xM3dKDZ1R#ZIgq>&zgK=3Y6x0u0%wToq zGY;3X$ZsEP48etP9$H#cw?I*C&WK9^apcSY8Z_YTbQoz40*WGg3?~-R;}bI1Fj;_~c+JOKm%=W)i1&Zr z1kk^_e09dnpGc0fC#K%%6{=w9kc0*0gjkZq@?1EH_OX&EZgh{4%ulZw9?DHuVq!Tq z0M|FR*F#G6u$Uow&SM4z=*?K)8!v*f8>mjYcTzzpXB*S+2vdVfdW*bUSDy=QDkz9% zdFJ=}9pgONaBJoPs4wanUO)>D5Lz$Zp{KJAICeE&W!I6#%9y?7L3dqeh}b@fh{C%U zRFLu*Qmrvzh-XrJ8V?k5@fxegiXa@?6~XdDKX=lT^B2Ug$#Pz~5D-;@mnn?K#I28k zRK5li0C9EYZU41(YtKAwPT9=IRChARD#J43;#(s{izlWcgAa{wA`2x0!V&Km$nXN+ zgPEjbU&-*oolw{b&cp7F^hwOow&z&qSc{cI$DjB;GV&01bG69h`;Q<0_pX9X%!(BX zB#a((tgO|Tq&d$T1@r$lrc(y{j$3Ytk-ED6alW%8+wNj&e?`p>9pkP0Yaz8ukArD1eiTA39*Fn+u~r{f~nrE zGQlSy1@m=MxjQSki|zXLMVSs?cCw1+4E3s(X~kOrhC1@z6Cmqsws{jqD5VLr)$`sy zq>B+^LPgms81FiO$8P|mdYS1s<;Q`r29RTw?23s|57>*_!i|iKUc5KToYA63%*^!#@4ivXyB=PhcD44=1%s z#WMWI3Eq`!dl#d^;-iG;2cG@Vep!If|D;eZ3s9Bn(HUM=B`i+o?wh3cA0Gc_z|y(+ zjDmM2)9|i+A7jYBr=ye*mahV~il<6pZWIdCV0Me15r(f0$u2^}CWgS_95RHe{BNv1 z9D1@a2Ea(bcsowO$i_B@_gua8Y#&2%QM9zQ44A{haKV8&7hYWK9gq%@Ini&+x06RF zgaVt6mxniu_Q@u{kJ=o8gk$n8#41de?I#f;IQ-Pch;q1*W$uUDO<;QNJECN?3b0z`jzX57PX}G< z>uo{W6eSm8|8N)evOId#vb-dDK$6b(nQ;pV2-Ke#oeL-YS#UZc9B;sfv7P0(A130s zPV>&=4lvzkuZ=0;P0N-u2n?T#4gahG^fImUbzyh}bEW)47BK?5sd#|`*Ea3EP#yjY zFX$|C&+2^=3e!95sN`iSIys-i?#~o5dZ3NE9u_UOrA318!gr%+P@lke5<}eq>Uxko zVdndtF^*zLi5H8&XhFPi2fg1%YrKwu@OCz;T<00UYH4BI&({-g!_vk?TIjN$(r&_H zG`st_^`o)TE1VREwPM&jtVf15k9q}`3L2Gs*r8_hJw%wVAFF4Fg-RRkvl)={lih4=K{%+W zFT=E4p?f+CsI7~rr&f1zc}FdGUNO;cxLve>Lt$p;o**4m?c!5c4#`iWZ4&T_If4#-yalwdXyR( z3j=!{|I$^KpPyd!)Hr}yCh;afW3!zf%x`eZ(aKOtIUio63o+ILI}%Y9azHbO&tokw z8=L<>(3h_A!g=Yv1%7@FN3U(ElihaioMt=;jF*OV4?rXr13}{$d$goqjfrai2<)$u zZP3EUEyrnBcRrf*6mmC)ta-IL@XiKgGA-_dbaeUhzVS}C{5VGcWw#h~N*+A-1-P>#`l8XyDaeNUBCS|4A&-e#R8ut2cuB zS8BvAg|%cDX4oZQxK3FM_<`r_b03H*q$8K#8KZ|ITwG{9r!j&|bU&e|_fRw>o}3-N zMBm>>k#ZUDz`FDB;rmw%AxHVXsu{e|i2KNU0{v+Mj^uHR=0Lo5@6O%3S~4f-D(~>= zKVZ`Hi4acvlc<)qCVM(ST#S#dQWQPoh4>R>k2vILO5mYF)uASpKUb*O43Lvy8hmj? 
z*q#T3A%_R4D*OoI$>C`)OFYJ2LQw+SeP5=Qo{3!Nua^As<5B4`vFoqo6C&?1)&2Ccn z@!8dnAI>;7+puu3!M!(Ye-SBl#C6q z{lvJwiv=YR6530=;sPxGFDY4fS649n&l?{hv%UJJl+rx-m8qwE2j0V!!t?ghF``c) z?~o_|hbeVI*J_#p8VP&em)`UJY+DbY&`?Kk@UkpK7pQ`mRM<-Gszmm;A$!f9xbmQiUP4avAs zG$^B?K?_kDnp#GZlx{S%Qwk;RrJ_{QBrOR^LtAOD-}Snk@p*h7pU?UIaUSP94&A-q z@B8(Bj_Y|{*E6%}OPBi6WtZjck`6~AwT7&2qFcOpaVlm(jD19`2_QoYeQL!y@e`hn z$?cuNU^$%EpX^}6J4!t%l)t2>VB(VrxPT5>*f8#r{+y)R}DzX!-;x7XVh;%XT6{4+ zriPf2Cum$45zNUe4mR37^y%yfd!U-oUgkp}lB^Qkl?_*jVvl7i6sbg9PR|CXZeE{T z*K9k17QeX%XvIgxH(r#U8<&owbY{&>AJZciwZ-+CfW!4F!RVd*F}z@h*2DAAYr9`KSo+IolQjkR#PAy*ALJ9W zmH>xU`O+~vWqy+lQh9?3N5u$e6q6TDe{4SB4x1wG&Z$rfv<)f`gZsIoGw#b2!oVa+ zFSFN!zB6Mt1ZYH78Pg9%;3SBcr(4oGc>`KvX>?t4BL^^LN`3aGdDSPx9ORfB*k(D^ zV$uF21>-4{2Z8;JqL}HOU-UY36|_H$eekbhPcq6ly%Ghe$Y+CTUPzaPUDQ^pdLMlP zM6X}>byySV#y{(8`~hVgjd$AQ_x^WoZsGxoP#lMd!lUT0<*hA*8ODI1D!9+tfaXg; zV4T^vC}rXcu;pYANTk1 zSqc_@4a%e`Vk{A*T(qZ!c%Xp4Jc5Rx?MDb2FJc)8Sqb!jWoxaU?*g{ooMv{jq9 zO&|z$!fLGR-f=_Kw~}o9JJcW;)d3O;NJk?>4KhgF%XlYXYaq#>t&BfA$7SDVerqsX zxkxtFuH2vYph~r56_dIP%u%uW5xG2ix6(ypzzM`r+p25%SOVctef=vLuhxCl7qYW`91Z^M;6cD$6_ zo0e%(9=(fUa?->y*GRefIoF%JHW)Ptd@p^(OhZ;Msz!V~_vty1{|{gA_QJKG-wb|6 zuV8@!!XZdaqVQwF5OYxyV=uy%5bQmWo=K7gR4fX1AGFV!!Itb5!|u~SR8!gcG&L~= zE%S%wlNJ}Ov}D-r&nnt{WdxY{7214QKoL(N%xHPex)e@zt%Y}n2F9Q4N@ZXl`1txR zO4{E&(ceN0LEw(@s24AF8F`zmh zxuf2_jA8E30H#T%ijzi#XnK!dnBYtS~+sdAIrai`KP;Gd1_$8i2eQhmx+U7j*9w z&8DXqiA2-)e@ez-5ac4Cx6fm4G3Y|?AE8LdA1Z!)G+KTLKeQK|r_k!3-y6ThA5Xyy zuYs{KF);lBF8s(520^OsiNY0a`6be|Iu)9Cgd4RD0W49p)R=Fpm_Kt&k#+u#O3R;T zcz)*Dsl}_sdI5z|q=Yc|G&L=Iad%@~k@ATmR(E$y+J}crfU810f(nv)*dWhT>C=1( zTcd4d&{h-gvBZ2IpIH%%4l8^d2Cp*lXV928vdeiCs^!_`5)u~sgaoK! ziOg!RxOVSG0w}$?-qC8Zck`1e+IN-*Q3f@VHt&itR07u!Nvk@ny_6$aRedi-&;UOu zSP-skUEPUxq0#m_>((>uX#G$%1)d|*;O{3p_9beS9&kTl0La<$_8xMqG-5Dp7b`7& z`8p5)P`DhuS!Ho!GNua1Q!8z_@totp0k8Otn_vX{&@p1gpz z&C)RCz&+!^6QV|+TH1YmUh~RjvVnU<%%qXy7N);qe6j2D0KAuSJ0x#OJ|C@7guCZg zD7?{{EoEU*?>iW{1#)1^$amf>$%WM{Yj>+J+?gHSY2vitwt32VP`+=aBY9X{k@c3I zn~o*D#wfABQr=F%`#mZq_`oU6*dpN%Iz0zrHw3y8Dr|I!5`nmrLe)fkK0pRP*!D$8 zUnl1`VD4{hzRFt2OK*rQQ~b)at!*UG@`Gtl^IPX#IB&jkO7z5&8ZEu5RqY*-THfr1 zFiCGXred;g{r2(E&2)t613C|pYJX3Z5LJz7*5Tax#iK$~!)kayl^3pU866)*}kMn`jlZwPf4-D)wt_b1%|;taI1OT$TyBu*C& zVcmU{t7AhZv#;!34*4vnwC~bDCK%UFF3LDE7Od+)3bG5Uexpyki8sAguz!LH8ZlS`tc?jGl4c`J0?Q_GPr+=a#f#ga zhsIYi&}<~R zN5e6x?vb_#%B=vf06|Aa?#3h&w3MP6yhLyiTY)d@B@7sVIE~yL{^uJAmT%+AHECyUoKlv*9) z!8RmpEE`4+@0vC7^E+EgpF7t*;h^l_6SJF;Q>PPYQDrq5B6!0-hB^qQ4EoLS0W>9l zr^(+zHBL*p!W3h!&Z+!Kk56HBV^XTZ1elOPT|XafrG$T-8ES_%W5nzqZH5p^8;a~s zYQc$uWd*Jn_mVv6)$k2m8H%7ja!E`1{M>ImE3_Z#T$y}6%9^CN*xj8ZF3P*m`!57t zLnKRj`!9`cS}k@~)=T#6H$WSdSE@))x~=y%*A*!K>^UiW!kqH(0M;6hj7;^*~9yB$0k8~52+B$eqglH)PhI^c;JnmzOMPn1iZ zzl$Ytud}u$BN+iMUtC6CGw7*D(@ByA=j74{ssudM%>m38^)* zUaDQG7dQV(yLKhPMi|MG=OI{nmrk|J!mptS-y+G8L07B8lD$|-gNOzt^foTVMA75Z z{c_9iT9&zPt|2na+m#|PJn=EL0(INZvR?SFBsuG3d5NFg_Xz(+{E3Ko4R>j-yPKTN zqBlXA(+fYpKR{cxlJJr4_(Am)t-sB((ZN&PAnAS-A*u)CBhi!zcIafYDmZ#;C%kBD zKju{hd?JAb61Zv6EU$QP#=L3%*7-&!W!g=!fR+~C_w!88hc(AivVvTH zdZwDJw>AXl3j<|Hvuq+8o+R(sVNtU)a{Do(D66(`-h23M47ie?9BY>k6)sl~1Gju` z0u>BEHJ~5><5Z0jG<10*nv&ClTqnFG3ZE$`!A~d-Rzp;G%sUgJviL662G?@G&D9nm z2$Dbxc~}FmQrL5S_s!o?`^PW)Rr^!I_u+tN`SWM9N2rG-b#fR&Aal`=orL-k>5PT00&;gs4P=|NA9UXizpKC+Q^%tsc&Bk4M;{_Hpbr@8;WMaS4k z9PCv{pXosw>O`~~r)-o2hJJ%j``s=lIs5+8>W0f;^W(>DMpy#^)Ln7~tL*=apk^Np z^|2}c#$^0&o#!_`U@J$T=v4#+b8OLLG$|7$-`0w98r^3KDJJLAloTEu4$NiDI|wCV z9*YS^KL0T@0nuL~ge2rb;Z~(G@;{r;<>;+%3`+mieY#yL7IIvQ*-cA`w~` zLgOd2dZB;hFtYS{1_-;byY%Bzx=)0TP5bxnuLoa)_~s!kyoO3VEx;M3#Ym;4h~|Wn z5o?C|Ak=&|a#2M^MeNbvk_tk&4mRAa{#&O 
z#I%3>BA78m#7d0t@vnGMtXcTQ{B@A)8u%-ps!CNwS<5;PG}A9$Ui>b8xI%>X#*|bZ zVy^p}F2wOnDi{44@nu4eyWFa@eTN%JHAq9&$`9i%_cg^BDMR!U`yfVx7lDq??L9bS zU>x2AH~4>1fm-fF0p#3@B^AiY@-R&YdI`~Gqq`r_0b)Ax;y|>a+t^X`luOyzEKuse zV8Rz%jr9<-&>N3W852jYgfSZ~2+uPWjB+fb^l2?Eippt@*4Hx9q zbvn@GUg!Uk+x_Lw#I);1Ji^0nuE&tx1VVb#Qk-eo=(OXGh<@c*U&+MhD^XFTlUz9! ztk%k`Cly_^04HV)QO&P7e)2l{1gH;SpRS*Q(LxL}h5hu2T(^Na$22J>Ldw|lD1T6=hvQE<@T8C#?$st0PgeiT(fzn-4k$le~%U3n##%W z`4R(6FE6@`kESaqG^GrY4v(T0!M@JxU~p35j&yU~N-;g1IgjTBw3w>GeR!3QL$>R- zBPmuw%HrZ+dk>G+(hk8vZHyhvK;vA>z@(R+%OPP^{lzfBAAB~1X}bfSX%Bi%Lp9P> zlNrSCTGjjWMB;67>{qHvoLy9D<l3k+Hr|fge~~nMi}~5!(!xqW)2ht zJP=8tox1lVx4LAE9RX7SRs><&^ci?RoU8FidHL$md&ZzO203&}E-y5dbt?xx=%(ko zx+s7e3VP;l*`2!8UDb!;eZgPN1NL{!f|i#B9B^;GIdYt`%lO2K5`aBZcF3GNEu`F8Su#|l-!Vrb)XLlq4RmJvkbj`0I(zRwl~lM+wcX5z%tu?T z>*AM(?_hBrd4V%e2)#anurZLj*{tIyIkQ4W)0Ix29{zI zy^G;)K~qGj_->?s)8jtZ&^rXo?KX|mtwchvz=WE*F;_#f{%}`VUFAh z;RjpwjPtEQuuc8=BczOz5bgo10Tv{z1z0CP9E<;5DJ-nknaSRC)^pAB?1@IM(HHYDO3TvGkC7B%C-`t4(P$1ZT}{Ox)Rh1bPkev#)D&F818d-*tf zGC0ZYtU>_)JAXcijSW$_xQQqoZgI;lGKK*wadn5v0oKc*#{4EL$UGxC`%^5Y<8+tc zG=m23E)1riE-%k02%)?xEB2%}_RE|VTt4Y-@k8+b=;}Bo>E?n_N&g(rgL$NQqbkW4 zED@EFS*@(Bj2#@R$QNM#aMF=oEg)SGi%Y{Skj03l91*{`B*7ky_#a+(gCh^tGHKZI z6L$xi;O3E8iT6Uotlt;0--lO6Qqltzj-2E;=`mN~@!&daG_>BV+9v%w|4ks7bSNl1<02QWQ%pWpODGaV> zZ0+ss9}Dd&EL@mz-WZ_mHOCY8k++hp&-!5NvZ}er+1dFp?9&05l7T#I?XWB8T@3uz zEZ7I68Jn9v&&M@KjMHP$OKC|-N&7v$-i9k*OJAC&2U%_+Iia z`0gA|7O} zDsOTgQ3b^7yT(es?ld2!8gVvf1V?q8ZtIx@|Jne{Gcvv0uW0j~sE`@~ufg)SY2 z;WQ5KEhPD2*cuaSe>A=DoshTEh*X`c5Q#bZgtG7Qv1$OTDtYN#tCV-&dUey^zpTFg zPGVx>kpVWc;OvYD2IpLl*+{rkaRHo#|MxFT&v4NG$G$Y1-Xok`M`yn@gE6Avk9bU> zZk~5ll)rOAl>Wdglmm3b6BCZMqq^~WH@vLK)jHHsTtHv<>u+}y{W$l z-!Sm+(6x<}G%)H4Zb&)Y6g*`%X8C@sP0?fp`Dyg_Jo1AdEBDLpyn!l%32&xmy&b3{ zrWI@s+yCzmh2=NQzS0&sjWF{ctK38*a`W;=;KuQ_x3{tA zclzD+zZ^`7CiFjlQU$k1TLSm%c^uda<{&YdFR!g-0|^x{*HB(9$Opuw7_^FbaP{G? ztyJCd{P@O#Ld-|tNXh;jw4aC_^a*vie&?Q;>sS^->`h|;9Pr>9ema9GK>%ts82Ro4 z3vUv>pYPUxaJ7?Yl0*N%I8j$!Efyi55lVRdxwd9Abe2F}PGL;v_)$cY-~9Jd5>4Q&uym|i#9 zM&$kqukyjB5v)`gnCDVm<%#$6EpH`p!hDD+AZwq3eUCA+N2GUPy#KcopRo~cg)#gG zL&PQHvoI6meu0}2uaLqxZyTOpVo4t0xlz#O6D~SE5|D3_g9RC!KtkPj#izkaAp}Nw zf_Y-;pNpIv^UA7R7C^l+h3|j~)F5Zb<`~FJZkx7LIq_@Tx`ht#fe4k~<;BFO;^1u2 zrrba$|MnOC{YFV1@&d8O7ukFFk6%=K3HV%@p*^f&6=^{aN(ZKcG1W|JMoFzkcF_sU zvj`|TH#el$dYiB>rC@%E;JllJKCwG3Lq@hwS4wry;e7JrMowRzxfAm1GHy=|p8PL; zphDmL8O((g&@Z5;xu|m-{2v6)TMib}^CDg3SHVV_{A;=wL6O`FnSbg`kg?TaG{F`Fv@V1)Z4ircJ)Y+6Qk5JRJ*V zqGTowi5(D>i)4I%gLgyd-Jbf?<{xH(`;|Jgrjd$Xg!p^M>icsQZ9>LkthB6`ms{ls* z2O2{SY*SKk-dHo2<4q;bU~1uVfml8lcQIrmzEOD)3io1D9N0Mta3`mo6vp+^2P$OE zvt}G;CvAwB2*+O9e*b6JixN%2ErTcjJIrf;$`l(gwQ2msUx52}ogamJIpFlM91q z1eOGOtgZpoZT%ka5U|=l7|vsj44H3tVG1jY4L#ey#FPfQV2+eslXq3|!1pmX$VJVu z;sLHG?{WT`h%G5tNF`tjywMH5updkxSx_0wfr*bY4oWN@YK2<2h^;}pBp$!gqJ%-?Dii+e#CQViPKhRxw)q=;+%vf2NqT|zz`dLZj zBOd9b#!exO*KZqMofB+{ycG@GqN}t|2bI+sY||>D51D@{Qi4rHmKbrUf#!WHJ3AZ3 zAE`=`TZKnIE1PJ!-wORcFVs2CxPJ4;a*zDhF|#b!W@vdk`uZ$wyKBxPqMMVLjiS-? 
From 1484279cc921db2c2949d3f8aec6574ff230750b Mon Sep 17 00:00:00 2001
From: Mamta Wardhani
Date: Thu, 19 Dec 2024 16:49:32 +0530
Subject: [PATCH 13/40] [Term Entry] PyTorch Tensors `.argwhere()` (#5742)

Co-authored-by: Daksha Deep
---
 .../tensors/terms/argwhere/argwhere.md        | 65 +++++++++++++++++++
 1 file changed, 65 insertions(+)
 create mode 100644 content/pytorch/concepts/tensors/terms/argwhere/argwhere.md

diff --git a/content/pytorch/concepts/tensors/terms/argwhere/argwhere.md b/content/pytorch/concepts/tensors/terms/argwhere/argwhere.md
new file mode 100644
index 00000000000..3f3871fa26c
--- /dev/null
+++ b/content/pytorch/concepts/tensors/terms/argwhere/argwhere.md
@@ -0,0 +1,65 @@
---
Title: '.argwhere()'
Description: 'Returns the indices of elements in a tensor that satisfy a specified condition, arranged in a 2D tensor.'
Subjects:
  - 'AI'
  - 'Data Science'
Tags:
  - 'AI'
  - 'Deep Learning'
  - 'Functions'
  - 'Machine Learning'
CatalogContent:
  - 'intro-to-py-torch-and-neural-networks'
  - 'py-torch-for-classification'
---

In PyTorch, **`.argwhere()`** returns the indices of elements in a tensor that satisfy a specified condition. It is useful for finding the positions of elements in a tensor that meet specific conditions, such as values greater than a threshold.

## Syntax

```pseudo
torch.argwhere(input)
```

- `input`: The tensor to inspect. `.argwhere()` returns the indices of its non-zero elements, so a condition is usually applied first (e.g., `tensor > 0`) and the resulting boolean tensor is passed as `input`.

It returns a 2D tensor containing the indices of the elements in the input tensor that satisfy the specified condition. Each row in the resulting tensor represents the indices of an element that meets the condition.
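As a side note that is not part of the original entry, `.argwhere()` gives the same result as `torch.nonzero()` with its default (non-tuple) output, and the returned index rows can be used to read the matching values back out of the tensor. A minimal sketch:

```py
import torch

t = torch.tensor([[0, 1], [2, 0]])

# Indices of the non-zero entries, one row per match
idx = torch.argwhere(t)
print(idx)                                 # tensor([[0, 1], [1, 0]])

# Same result as torch.nonzero() with its default (non-tuple) output
print(torch.equal(idx, torch.nonzero(t)))  # True

# Use the index columns to gather the matching values
print(t[idx[:, 0], idx[:, 1]])             # tensor([1, 2])
```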
+ +## Example + +In this example, `.argwhere()` is used to find the indices of elements in the tensor that are greater than _0_, equal to _0_, and less than _2_: + +```py +import torch + +# Define a tensor +tensor = torch.tensor([[0, 1], [2, 0], [-1, 3]]) + +# Case 1: Use argwhere to find indices of elements greater than 0 +indices_case_1 = torch.argwhere(tensor > 0) + +# Case 2: Use argwhere to find indices of elements equal to 0 +indices_case_2 = torch.argwhere(tensor == 0) + +# Case 3: Use argwhere to find indices of elements less than 2 +indices_case_3 = torch.argwhere(tensor < 2) + +print("Case 1 (elements > 0):", indices_case_1) +print("Case 2 (elements == 0):", indices_case_2) +print("Case 3 (elements < 2):", indices_case_3) +``` + +Here is the output for the above example: + +```shell +Case 1 (elements > 0): tensor([[0, 1], + [1, 0], + [2, 1]]) +Case 2 (elements == 0): tensor([[0, 0], + [1, 1]]) +Case 3 (elements < 2): tensor([[0, 0], + [0, 1], + [1, 1], + [2, 0]]) +``` From a69ef01b800355c11c288842c0aa1de7ec59f5be Mon Sep 17 00:00:00 2001 From: Mamta Wardhani Date: Thu, 19 Dec 2024 16:51:30 +0530 Subject: [PATCH 14/40] Edit Entry: Shortened description for CNNs (#5744) --- .../convolutional-neural-networks.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/ai/concepts/neural-networks/terms/convolutional-neural-networks/convolutional-neural-networks.md b/content/ai/concepts/neural-networks/terms/convolutional-neural-networks/convolutional-neural-networks.md index 40479a40768..2925ed6fa5e 100644 --- a/content/ai/concepts/neural-networks/terms/convolutional-neural-networks/convolutional-neural-networks.md +++ b/content/ai/concepts/neural-networks/terms/convolutional-neural-networks/convolutional-neural-networks.md @@ -1,6 +1,6 @@ --- Title: 'Convolutional Neural Networks' -Description: 'Convolutional Neural Networks are a type of neural network that are primarily used for computer vision tasks, such as image classification, object detection, and semantic segmentation.' +Description: 'Convolutional Neural Networks (CNNs) are neural networks primarily used for computer vision tasks like image classification, object detection, and segmentation.' Subjects: - 'Machine Learning' - 'Computer Science' From ada81f2e226983ac38b143d9735fc8ddfe4880c8 Mon Sep 17 00:00:00 2001 From: Sudharshanan <71761488+Maverick073@users.noreply.github.com> Date: Thu, 19 Dec 2024 21:15:10 +0530 Subject: [PATCH 15/40] [Term Entry] NumPy Built-In Functions: .sort() * changes made * Update sort.md minor fixes * Update sort.md * Minor changes --------- --- .../built-in-functions/terms/sort/sort.md | 79 +++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100644 content/numpy/concepts/built-in-functions/terms/sort/sort.md diff --git a/content/numpy/concepts/built-in-functions/terms/sort/sort.md b/content/numpy/concepts/built-in-functions/terms/sort/sort.md new file mode 100644 index 00000000000..87e8b86680d --- /dev/null +++ b/content/numpy/concepts/built-in-functions/terms/sort/sort.md @@ -0,0 +1,79 @@ +--- +Title: '.sort()' +Description: 'Sorts an array in ascending order along the specified axis and returns a sorted copy of the input array.' +Subjects: + - 'Computer Science' + - 'Data Science' +Tags: + - 'Arrays' + - 'Functions' + - 'NumPy' +CatalogContent: + - 'learn-python-3' + - 'paths/data-science' +--- + +In NumPy, the **`.sort()`** function sorts the elements of an array or matrix along a specified axis. 
It returns a new array with elements sorted in ascending order, leaving the original array unchanged. Sorting can be performed along different axes (such as rows or columns in a 2D array), with the default being along the last axis (`axis=-1`). + +## Syntax + +```pseudo +numpy.sort(a, axis=-1, kind=None, order=None) +``` + +- `a`: The array of elements to be sorted. +- `axis`: The axis along which to sort. If set to `None`, the array is flattened before sorting. The default is `-1`, which sorts along the last axis. +- `kind`: The sorting algorithm to use. The options are: + - [`'quicksort'`](https://www.codecademy.com/resources/docs/general/algorithm/quick-sort): Default algorithm, a fast, comparison-based algorithm. + - [`'mergesort'`](https://www.codecademy.com/resources/docs/general/algorithm/merge-sort): Stable sort using a divide-and-conquer algorithm. + - [`'heapsort'`](https://www.codecademy.com/resources/docs/general/algorithm/heap-sort): A comparison-based sort using a heap. + - `'stable'`: A stable sorting algorithm, typically mergesort. +- `order`: If `a` is a structured array, this specifies the field(s) to sort by. If not provided, sorting will be done based on the order of the fields in `a`. + +## Example + +The following example demonstrates how to use the `.sort()` function with various parameters: + +```py +import numpy as np + +arr = np.array([[3, 1, 2], [6, 4, 5]]) + +print(np.sort(arr)) +print(np.sort(arr, axis=0)) +print(np.sort(arr, axis=None)) +``` + +This example results in the following output: + +```shell +[[1 2 3] + [4 5 6]] +[[3 1 2] + [6 4 5]] +[1 2 3 4 5 6] +``` + +## Codebyte Example + +Run the following codebyte example to better understand the `.sort()` function: + +```codebyte/python +import numpy as np + +arr = np.array([[23, 54, 19], [45, 34, 12]]) + +print("Original array:") +print(arr) + +# Sort along axis 0 (sort by columns) +print("\nSorted array along axis 0 (columns):") +print(np.sort(arr, axis=0)) + +# Sort along axis 1 (sort by rows) +print("\nSorted array along axis 1 (rows):") +print(np.sort(arr, axis=1)) + +print("\nSorted array (flattened):") +print(np.sort(arr, axis=None)) +``` From 69d76683e479e191b341e081758b25b4ad20960a Mon Sep 17 00:00:00 2001 From: Dani Tellini Date: Thu, 19 Dec 2024 12:49:58 -0300 Subject: [PATCH 16/40] [Edit] Python: Loops * added pass keyword explanation and example * implemented comment in review 1 --------- --- content/python/concepts/loops/loops.md | 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/content/python/concepts/loops/loops.md b/content/python/concepts/loops/loops.md index 8110b647798..4b0c1771205 100644 --- a/content/python/concepts/loops/loops.md +++ b/content/python/concepts/loops/loops.md @@ -118,6 +118,21 @@ for i in big_number_list: print(i) ``` +## Pass Keyword + +The `pass` keyword is used as a placeholder statement to allow empty loops, functions or classes to be included in an executable code block without throwing an error. This is common when structuring future implementations. + +```py +# Nested loop with a placeholder for incomplete logic +for i in range(3): + for j in range(3): + if i == j: + # Placeholder for future implementations + pass + else: + print(f"i: {i}, j:{j}") +``` + ## Video Walkthrough In this video, you will learn how to use the for and while loops in a Python script. 
From 198cdc7f85c7dbbf34f6ea444578367c76861a12 Mon Sep 17 00:00:00 2001 From: Dani Tellini Date: Thu, 19 Dec 2024 13:13:25 -0300 Subject: [PATCH 17/40] [Concept Entry] Python: Type Hints * intro/syntax complete * drafted type-hints.md - ready for pr * check fail fix * implemented comments in review 1 * check fail fix * Minor changes --------- --- .../python/concepts/type-hints/type-hints.md | 117 ++++++++++++++++++ 1 file changed, 117 insertions(+) create mode 100644 content/python/concepts/type-hints/type-hints.md diff --git a/content/python/concepts/type-hints/type-hints.md b/content/python/concepts/type-hints/type-hints.md new file mode 100644 index 00000000000..a360f8ffeea --- /dev/null +++ b/content/python/concepts/type-hints/type-hints.md @@ -0,0 +1,117 @@ +--- +Title: 'Type Hints' +Description: 'Specify expected data types for variables, function arguments, and return values, improving code readability and aiding static analysis.' +Subjects: + - 'Code Foundations' + - 'Computer Science' +Tags: + - 'Python' + - 'Types' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +**Type hints** in Python are a feature that enables developers to specify the expected data types of variables, function arguments, and return values. It was introduced in Python 3.5. + +> **Note**: Type hints are part of the **`typing` module**, which provides a comprehensive set of tools for type annotations. + +Type hints help developers write more robust code by allowing tools like linters and IDEs to catch type-related errors before runtime. + +## Syntax + +This is the general syntax for type hints in function annotations: + +```pseudo +from typing import List, Dict, Union + +def function_name(parameter_name: parameter_type) -> return_type: + # Function body +``` + +- `parameter_name`: This represents the name of the parameter that the function accepts. +- `parameter_type`: This indicates the expected data type of `parameter_name`. +- `return_type`: This specifies the data type of the value that the function will return. + +### Commonly Used Type Hints + +- `int`, `float`, `str`, `bool`: These are the basic data types. +- `List[ElementType]`: This is a list containing elements of `ElementType`. +- `Dict[KeyType, ValueType]`: This is a dictionary with keys of `KeyType` and values of `ValueType`. +- `Union[Type1, Type2]`: This is a value that can be of either `Type1` or `Type2`. +- `Optional[Type]`: This indicates that a value can be of `Type` or `None`. + +> **Note**: Starting with Python 3.7, PEP 563 allows type annotations to be stored as strings and evaluated only when needed, optimizing runtime performance. From Python 3.10 onwards, PEP 604 introduces the `|` operator as a concise alternative to `Union`, simplifying syntax for type annotations. + +## Example + +This is an example of a function using type hints: + +```py +from typing import List, Dict, Union + +def process_data(data: List[Dict[str, Union[int,str]]]) -> List[str]: + """ + Processes a list of dictionaries to extract string values. + + Args: + (data: List[Dict[str, Union[int,str]]]): A list of dictionaries including string keys and integer or string values. + + PEP 604 Args: + (data: List[Dict[str, int | str]]): A list of dictionaries including string keys and integer or string values as per PEP 604. + + Returns: + List[str]: A list of string values extracted from the dictionaries. 
+ """ + + result = [] + + for item in data: + for key, value in item.items(): + if isinstance(value, str): + result.append(value) + return result + +# Example usage +data = [ + {"name": "Alice", "age": 25}, + {"name": "Bob", "city": "New York"} +] + +output = process_data(data) + +print(output) +``` + +The above example would output the following: + +```shell +['Alice', 'Bob', 'New York'] +``` + +## Codebyte Example + +Here is a codebyte example demonstrating the usage of type hints: + +```codebyte/python +from typing import List, Optional + +def greet(name: Optional[str] = None) -> str: + """ + Args: + name (Optional[str]): Name of the person to greet. Defaults to None. + + Returns: + str: A greeting message. + """ + + if name: + return f"Hello, {name}!" + return "Hello, World!" + +# Test the function +print(greet("Dani")) +print(greet()) +``` + +> **Note**: While type hints enhance code clarity and facilitate static analysis during development, they do not affect how Python executes the code. From b6cebe10505fce39cb67fae881c27acc0bf14bd9 Mon Sep 17 00:00:00 2001 From: NeemaJoju Date: Fri, 20 Dec 2024 00:07:58 +0530 Subject: [PATCH 18/40] Added file on movedim (#5734) * Added file on movedim * The correct file * some basic edits * Added edits * Update content/pytorch/concepts/tensor-operations/terms/movedim/movedim.md * Update content/pytorch/concepts/tensor-operations/terms/movedim/movedim.md * Update movedim.md Fixed formating issue --------- --- .../terms/movedim/movedim.md | 131 ++++++++++++++++++ 1 file changed, 131 insertions(+) create mode 100644 content/pytorch/concepts/tensor-operations/terms/movedim/movedim.md diff --git a/content/pytorch/concepts/tensor-operations/terms/movedim/movedim.md b/content/pytorch/concepts/tensor-operations/terms/movedim/movedim.md new file mode 100644 index 00000000000..33689588f81 --- /dev/null +++ b/content/pytorch/concepts/tensor-operations/terms/movedim/movedim.md @@ -0,0 +1,131 @@ +--- +Title: '.movedim()' +Description: 'Returns a tensor with the dimensions moved from the positions specified in source to the positions specified in destination.' +Subjects: + - 'AI' + - 'Data Science' +Tags: + - 'AI' + - 'Arrays' + - 'Data Structures' + - 'Deep Learning' +CatalogContent: + - 'intro-to-py-torch-and-neural-networks' + - 'paths/computer-science' +--- + +In Pytorch, **`.movedim()`** is used to move specific dimensions of the input tensor to a specified positions, while the other dimensions that are not explicitly mentioned remain in their original order. + +## Syntax + +```pseudo +torch.movedim(input, source, destination) +``` + +- `input`: The input tensor whose dimensions are to be rearranged. +- `source`: The dimensions to be moved. Can be a single integer or a tuple of integers. +- `destination`: The target positions for the dimensions specified in `source`. It should have the same length as `source`. 
+ +## Example + +The following example demonstrates the use of `.movedim()`: + +```py +import torch + +# Define a 1D tensor +a = torch.tensor([[1, 2, 3, -8]]) + +# Define a 2D tensor +b = torch.tensor([[1, 2, 3, -8], + [4, 3, 8, 0], + [-1, 7, 6, 3], + [5, 6, 9, 0]]) + +# Define a 3D tensor +c = torch.randn(2, 2, 3) + +# Define a 4D tensor +d = torch.randn(2, 3, 2, 3) + +# Move dimension 0 to dimension 1 for 1D tensor +a1 = torch.movedim(a, 0, 1) +print("One Dimensional tensor:") +print(a1) +print("\n") + +# Move dimension 0 to dimension 1 for 2D tensor +b1 = torch.movedim(b, 0, 1) +print("Two Dimensional tensor:") +print(b1) +print("\n") + +# Move dimension 0 to dimension 1 for 3D tensor +c1 = torch.movedim(c, 0, 1) +print("Three Dimensional tensor (Dim 1):") +print(c1) +print("\n") + +# Move dimension 0 to dimension 2 for 3D tensor +c2 = torch.movedim(c, 0, 2) +print("Three Dimensional tensor (Dim 2):") +print(c2) +print("\n") + +# Move dimensions [0, 1] to positions [2, 3] for 4D tensor +d1 = torch.movedim(d, [0, 1], [2, 3]) +print("Four Dimensional tensor:") +print(d1) +``` + +This example will generate the following output: + +```shell +One Dimensional tensor: +tensor([[ 1], + [ 2], + [ 3], + [-8]]) + +Two Dimensional tensor: +tensor([[ 1, 4, -1, 5], + [ 2, 3, 7, 6], + [ 3, 8, 6, 9], + [-8, 0, 3, 0]]) + +Three Dimensional tensor (Dim 1): +tensor([[[ 1.0064, -1.2284, -1.1452], + [-0.9374, 1.2943, -1.7862]], + + [[ 0.4316, 3.1050, -0.4264], + [-0.9219, 1.6863, -0.3411]]]) + +Three Dimensional tensor (Dim 2): +tensor([[[ 1.0064, -0.9374], + [-1.2284, 1.2943], + [-1.1452, -1.7862]], + + [[ 0.4316, -0.9219], + [ 3.1050, 1.6863], + [-0.4264, -0.3411]]]) + +Four Dimensional tensor: +tensor([[[[ 0.0753, 1.5373, 0.0765], + [-3.1675, 0.2926, 0.5799]], + + [[-0.1520, -0.4855, 1.9026], + [-1.6107, 0.5367, -0.3401]], + + [[-0.9148, -0.6213, 0.5939], + [-0.6407, -1.0397, -0.7044]]], + + + [[[ 0.3897, 0.6399, 1.0818], + [ 0.7111, -1.3950, -1.3415]], + + [[-0.3749, 2.3008, -0.2464], + [ 1.4121, -0.3554, -0.5184]], + + [[-0.3224, -0.9296, 0.1633], + [-0.2641, 0.8230, 0.1766]]]]) +``` From 915e13d05037163d799cf49521a6ff9682c3531d Mon Sep 17 00:00:00 2001 From: Dani Tellini Date: Thu, 19 Dec 2024 16:22:51 -0300 Subject: [PATCH 19/40] Concept Entry - Python: Enum (#5736) * title table added * drafted enum entry, ready for pr * check fail fix attempt 1 * check fail fix attempt 2 * implemented comments * Update enum.md minor fixes * Update enum.md * Update content/python/concepts/enum/enum.md * Update content/python/concepts/enum/enum.md * Update content/python/concepts/enum/enum.md --------- --- content/python/concepts/enum/enum.md | 104 +++++++++++++++++++++++++++ 1 file changed, 104 insertions(+) create mode 100644 content/python/concepts/enum/enum.md diff --git a/content/python/concepts/enum/enum.md b/content/python/concepts/enum/enum.md new file mode 100644 index 00000000000..dcf8a0c759f --- /dev/null +++ b/content/python/concepts/enum/enum.md @@ -0,0 +1,104 @@ +--- +Title: 'enum' +Description: 'A class that defines a set of named values, providing a structured way to represent constant values in a readable manner.' +Subjects: + - 'Code Foundations' + - 'Computer Science' +Tags: + - 'Data Types' + - 'Enum' + - 'Python' + - 'Variables' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +**`Enum`** (short for _enumeration_) is a class in Python used to define a set of named, immutable constants. 
Enumerations improve code readability and maintainability by replacing magic numbers or strings with meaningful names. Enums are part of Python's built-in `enum` module, introduced in Python 3.4. + +> **Note:** Magic numbers are unclear, hardcoded values in code. For example, `80` in a speed-checking program might be confusing. Replacing it with an enum constant, like `SpeedLimit.HIGHWAY`, makes the code easier to read and maintain. + +## Syntax + +```pseudo +from enum import Enum + +class EnumName(Enum): + MEMBER1 = value1 + MEMBER2 = value2 +``` + +- `EnumName`: The name of the enum class. +- `MEMBER1`, `MEMBER2`: Names of the constants. +- `value1`, `value2`: Values assigned to the constants (e.g. numbers or strings). + +## `enum` Module + +The `enum` module provides the `Enum` class for creating enumerations. It also includes: + +- `IntEnum`: Ensures that the values of the enuemration are integers. +- `Flag`: Allows combining constants with bitwise operations. +- `Auto`: Automatically assigns values to the enumeration members. + +Enums also provide methods like: + +- `.name`: Returns the name of the enum member (as a string). +- `.value`: Returns the value assigned to the enum member. + +## Example + +This example demonstrates how to create an enum for days of the week with integer values: + +```py +from enum import Enum + +class Weekday(Enum): + MONDAY = 1 + TUESDAY = 2 + WEDNESDAY = 3 + +# Accessing members +print(Weekday.MONDAY) +print(Weekday.MONDAY.name) +print(Weekday.MONDAY.value) + +# Iterating through members +for day in Weekday: + print(day) +``` + +This example results in the following output: + +```shell +Weekday.MONDAY +MONDAY +1 +Weekday.MONDAY +Weekday.TUESDAY +Weekday.WEDNESDAY +``` + +## Codebyte + +This example demonstrates how enums can represent traffic light states and associate actions with each state: + +```codebyte/python +from enum import Enum + +class TrafficLight(Enum): + RED = 'Stop' + YELLOW = 'Caution' + GREEN = 'Go' + +def traffic_action(light): + if light == TrafficLight.RED: + return "Stop your car." + elif light == TrafficLight.YELLOW: + return "Prepare to stop." + elif light == TrafficLight.GREEN: + return "You can go." + +# Example usage +current_light = TrafficLight.RED +print(traffic_action(current_light)) +``` From a198871179728b14ff50b0d218e80e64ba3c838d Mon Sep 17 00:00:00 2001 From: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com> Date: Fri, 20 Dec 2024 16:59:38 +0530 Subject: [PATCH 20/40] [Concept Entry] Sklearn multilabel-classification (#5817) * New file has been added. * Update user-input.md * Update user-input.md * File has been modified. * Update multilabel-classification.md fixes * Update multilabel-classification.md --------- --- .../multilabel-classification.md | 121 ++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 content/sklearn/concepts/multilabel-classification/multilabel-classification.md diff --git a/content/sklearn/concepts/multilabel-classification/multilabel-classification.md b/content/sklearn/concepts/multilabel-classification/multilabel-classification.md new file mode 100644 index 00000000000..1a517c24ce3 --- /dev/null +++ b/content/sklearn/concepts/multilabel-classification/multilabel-classification.md @@ -0,0 +1,121 @@ +--- +Title: 'Multilabel Classification' +Description: 'Multilabel classification is a machine learning task where each instance can be assigned multiple labels or categories simultaneously.' 
+Subjects: + - 'Computer Science' + - 'Data Science' + - 'Data Visualization' + - 'Machine Learning' +Tags: + - 'AI' + - 'Classification' + - 'Natural Language Processing' + - 'Scikit-learn' +CatalogContent: + - 'learn-python-3' + - 'paths/intermediate-machine-learning-skill-path' +--- + +In sklearn, **Multilabel Classification** assigns multiple labels to a single instance, allowing models to predict multiple outputs simultaneously. This method differs from traditional classification, where each instance belongs to only one class. + +Scikit-learn offers tools like `OneVsRestClassifier`, `ClassifierChain`, and `MultiOutputClassifier` to handle multilabel classification and enable efficient model training and evaluation. + +## Syntax + +Here's the syntax for using multiabel classification in sklearn: + +```pseudo +from sklearn.multioutput import MultiOutputClassifier +from sklearn.ensemble import RandomForestClassifier +from sklearn.model_selection import train_test_split + +# Step 1: Initialize the base classifier +base_model = RandomForestClassifier(random_state=42) + +# Step 2: Create a MultiOutputClassifier wrapper for multilabel classification +multi_label_model = MultiOutputClassifier(base_model) + +# Step 3: Train the model using the training dataset +multi_label_model.fit(X_train, y_train) + +# Step 4: Make predictions on the test dataset +predicted_labels = multi_label_model.predict(X_test) + +# Step 5: Evaluate predictions or use the results +print(predicted_labels) +``` + +- `RandomForestClassifier`: The base classifier for multilabel classification. +- `MultiOutputClassifier`: A wrapper to extend the base classifier for multilabel tasks. +- `Training and testing`: The model is trained with `fit()` and predictions are made using `predict()`. + +## Example + +This code demonstrates multilabel classification using scikit-learn by training a model to assign multiple labels: + +```py +from sklearn.datasets import make_multilabel_classification +from sklearn.ensemble import RandomForestClassifier +from sklearn.multioutput import MultiOutputClassifier +from sklearn.metrics import classification_report + +# Generate synthetic multilabel data +X, y = make_multilabel_classification(n_samples=100, n_features=10, n_classes=3, n_labels=2, random_state=42) + +# Initialize a base classifier +base_classifier = RandomForestClassifier() + +# Wrap the base classifier for multilabel classification +model = MultiOutputClassifier(base_classifier) + +# Train the model +model.fit(X, y) + +# Predict labels for new data +predictions = model.predict(X[:5]) + +# Display predictions +print("Predicted Labels for First 5 Samples:") +print(predictions) +``` + +The code results the following output: + +```shell +Predicted Labels for First 5 Samples: +[[1 1 0] + [1 1 0] + [0 0 1] + [1 1 1] + [0 1 0]] +``` + +## Codebyte Example + +The following codebyte example trains a Random Forest classifier for multilabel classification on dataset and predicts multiple categories for new samples: + +```codebyte/python +# This code demonstrates multilabel classification using scikit-learn. 
+from sklearn.datasets import make_multilabel_classification +from sklearn.ensemble import RandomForestClassifier +from sklearn.multioutput import MultiOutputClassifier + +# Generate synthetic multilabel data +X, y = make_multilabel_classification(n_samples=100, n_features=10, n_classes=3, n_labels=2, random_state=42) + +# Initialize a Random Forest classifier +classifier = RandomForestClassifier() + +# Wrap the classifier for multilabel classification +multi_label_model = MultiOutputClassifier(classifier) + +# Train the model on the dataset +multi_label_model.fit(X, y) + +# Predict labels for the first 4 samples +predictions = multi_label_model.predict(X[:4]) + +# Display the predictions +print("Predicted labels for the first 4 samples:") +print(predictions) +``` From 8c84a29f20ed164ba923fabf8819141c31fc9a1a Mon Sep 17 00:00:00 2001 From: Daksha Deep Date: Fri, 20 Dec 2024 17:05:49 +0530 Subject: [PATCH 21/40] Created the concept file `scipy.md` (#5876) * Created the concept file * Update scipy.md * Update scipy.md minor fix --------- --- content/scipy/scipy.md | 10 ++++++++++ 1 file changed, 10 insertions(+) create mode 100644 content/scipy/scipy.md diff --git a/content/scipy/scipy.md b/content/scipy/scipy.md new file mode 100644 index 00000000000..0187279f9d9 --- /dev/null +++ b/content/scipy/scipy.md @@ -0,0 +1,10 @@ +--- +Title: 'SciPy' +Description: 'SciPy is a Python-based library that builds on NumPy’s array operations to provide a wide range of mathematical, scientific, and engineering tools.' +Codecademy Hub Page: 'https://www.codecademy.com/catalog/subject/data-science' +CatalogContent: + - 'learn-data-science' + - 'paths/data-science-foundations' +--- + +**`SciPy`** is a widely used open-source [Python](https://www.codecademy.com/enrolled/courses/learn-python-3) library that provides various scientific and numerical computing tools. Built on top of [NumPy’s](https://www.codecademy.com/resources/docs/numpy) robust array manipulation capabilities, SciPy extends Python with specialized modules for tasks such as optimization, signal processing, integration, statistics, image processing, and more. Its goal is to combine a consistent collection of high-level mathematical functions and algorithms so scientists, engineers, and data analysts can perform advanced computations efficiently, often without switching to lower-level languages like [C](https://www.codecademy.com/resources/docs/c). From a36c38a85e4a2321c4e11eed8c19357098745d21 Mon Sep 17 00:00:00 2001 From: Max Reilly <104339073+cmaxreilly@users.noreply.github.com> Date: Fri, 20 Dec 2024 07:39:16 -0700 Subject: [PATCH 22/40] Numpy square (#5740) * Adding/fixing C printf format specifiers. * Added files for square function in numpy module. Added metadata. * First draft of .square information page. * Revert printf.md * Update content/numpy/concepts/math-methods/terms/square/square.md Changed line lengths to match. * Fixed Header /square/square.md * Added links and inline syntax blocks to /square/square.md * Substituted colon for elipsis /square/square.md * Fixed line lengths and colons /square/square.md * Fixed formatting errors in square.md * Fixed syntax labels. 
* minor changes * Update square.md * Update square.md fixed --------- --- .../math-methods/terms/square/square.md | 127 ++++++++++++++++++ 1 file changed, 127 insertions(+) create mode 100644 content/numpy/concepts/math-methods/terms/square/square.md diff --git a/content/numpy/concepts/math-methods/terms/square/square.md b/content/numpy/concepts/math-methods/terms/square/square.md new file mode 100644 index 00000000000..75e6f655f02 --- /dev/null +++ b/content/numpy/concepts/math-methods/terms/square/square.md @@ -0,0 +1,127 @@ +--- +Title: '.square()' +Description: 'Calculates the square of each element in an array.' +Subjects: + - 'Computer Science' + - 'Data Science' + - 'Discrete Math' +Tags: + - 'Arrays' + - 'Functions' + - 'NumPy' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +In NumPy, the **`.square()`** method computes the square of a number or the square of the elements in an array. It is commonly used in mathematical calculations, machine learning, data analysis, engineering, and graphics. + +## Syntax + +```pseudo +numpy.square(x, out = None, where = True, dtype = None) +``` + +- `x`: The input data, which can be a number, an array, or a multidimensional array. +- `out` (Optional): A location where the result is stored. If provided, it must have the same shape as the expected output. +- `where` (Optional): A boolean array specifying which elements to compute. The result is only computed for elements where `where` is `True`. +- `dtype` (Optional): The desired data type for the output array. If not specified, it defaults to the data type of x. + +## Examples + +### Modifying the output array + +The output array for NumPy operations cannot be a Python [list](https://www.codecademy.com/resources/docs/python/built-in-functions/list) because lists are not optimized for numerical computations. NumPy arrays are composed of contiguous blocks of memory, which enhances performance. Therefore, the array passed for the out parameter must be a NumPy array initialized with the `numpy.array` function: + +```py +import numpy as np + +output_array = np.array([0, 0, 0, 0, 0]) +``` + +This array can then be used as the `out` parameter in the `numpy.square()` function: + +```py +import numpy as np + +output_array = np.array([0, 0, 0, 0, 0]) + +array = [1, 2, 3, 4, 5] +np.square(array, out = output_array) +print(output_array) +``` + +This generates the output as follows: + +```shell +[1, 4, 9, 16, 25] +``` + +### Operating conditionally + +Using the `where` parameter, the function will execute conditionally. The `where` parameter specifies where to apply the operation, based on a condition. If the condition is `True` at a particular index, the corresponding element in the array will be squared. If the condition is `False`, the element will remain unchanged. For instance: + +```py +import numpy as np + +array = np.array([1, 2, 3, 4, 5]) +conditions = np.array([False, True, True, False, True]) + +result = np.square(array, where=conditions) +print(result) +``` + +Output: + +```shell +array([1, 4, 9, 4, 25]) +``` + +The `where` parameter takes a boolean array or condition. It determines where the squaring operation will take place: + +- True at an index: The element at that index will be squared. +- False at an index: The element at that index will remain unchanged. + +If the `where` parameter is set to a single boolean value (either `True` or `False`), the entire array is either squared (if `True`) or left unchanged (if `False`). 
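+
+> **Note:** When `where` is used without `out`, positions where the condition is `False` are not guaranteed to keep the input values and may be left uninitialized by NumPy. A safer pattern, sketched below with the same illustrative data, is to pair `where` with an `out` array that already holds the original values:
+
+```py
+import numpy as np
+
+array = np.array([1, 2, 3, 4, 5])
+conditions = np.array([False, True, True, False, True])
+
+# Copy the input so positions where the condition is False keep their original values
+out = array.copy()
+np.square(array, out=out, where=conditions)
+
+print(out)  # [ 1  4  9  4 25]
+```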
+ +### Changing types + +Sometimes, it is important to increase or decrease the size of the datatype of the output array. This can be done by setting the `dtype` parameter to an np datatype, like: + +```py +import numpy as np +array = np.array([1, 2, 3, 4, 5]) # Ensuring it's a numpy array +result = np.square(array, dtype=np.float32) + +# Print the result +print(result) +``` + +Output generated will be as follows: + +```shell +array([ 1., 4., 9., 16., 25.], dtype=float32) +``` + +## Codebyte Example + +Run the following example to understand how the `.square()` method works: + +```codebyte/python +import numpy as np + +# Create a NumPy array +array = np.array([1, 2, 3, 4, 5]) + +# Create an output array initialized with zeros +output_array = np.zeros_like(array) + +# Set the condition for the 'where' parameter (square values where condition is True) +conditions = np.array([False, True, True, False, True]) + +# Use numpy.square() with all parameters +result = np.square(array, out=output_array, where=conditions) + +# Print the result +print("Squared values with conditions:", result) +``` From 61d4e299742c44fc57262b67ed1f96a20eac03a1 Mon Sep 17 00:00:00 2001 From: Mamta Wardhani Date: Fri, 20 Dec 2024 22:31:11 +0530 Subject: [PATCH 23/40] [Concept Entry] Python:SciPy: scipy.integrate (#5874) --- .../scipy-integrate/scipy-integrate.md | 46 +++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 content/scipy/concepts/scipy-integrate/scipy-integrate.md diff --git a/content/scipy/concepts/scipy-integrate/scipy-integrate.md b/content/scipy/concepts/scipy-integrate/scipy-integrate.md new file mode 100644 index 00000000000..04b48e13a97 --- /dev/null +++ b/content/scipy/concepts/scipy-integrate/scipy-integrate.md @@ -0,0 +1,46 @@ +--- +Title: 'scipy.integrate' +Description: 'Provides functions for numerical integration, solving ordinary differential equations, and handling integrals over a range of functions.' +Subjects: + - 'Computer Science' + - 'Data Science' +Tags: + - 'Algorithms' + - 'Data' + - 'Filter' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +**`scipy.integrate`** is a submodule of SciPy that provides tools for numerical integration and solving differential equations. It supports both single and multi-dimensional integrals, offering efficient methods for handling integrals of functions, ordinary differential equations (ODEs), and more. Key features include: + +- **Numerical Integration**: Calculate definite integrals of functions. +- **Ordinary Differential Equations (ODEs)**: Solve initial value problems for ODEs. +- **Quadruple Integration**: Handle higher-dimensional integrals over specified ranges. +- **Integration of Systems of ODEs**: Solve coupled systems of ODEs with multiple variables. + +`scipy.integrate` is a powerful tool for working with integrals and differential equations in scientific computing and engineering applications. 
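+
+As a quick illustration of the numerical-integration feature before the generic syntax below, the following sketch (assuming SciPy and NumPy are installed) evaluates the definite integral of sin(x) over [0, π], whose exact value is 2:
+
+```py
+import numpy as np
+from scipy import integrate
+
+# Definite integral of sin(x) over [0, pi]; the exact answer is 2
+result, error = integrate.quad(np.sin, 0, np.pi)
+
+print(result)  # 2.0 (up to numerical error)
+```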
+ +## Syntax + +Here's a generic syntax outline for using `scipy.integrate`: + +```pseudo +import scipy.integrate + +# Example: Numerical integration (definite integral) +result, error = scipy.integrate.function_name(function, bounds, *args, **kwargs) + +# Example: Solving an ODE +solution = scipy.integrate.function_name(function, time_points, initial_conditions, *args, **kwargs) + +# Example: Multi-dimensional integration +result = scipy.integrate.function_name(function, bounds, *args, **kwargs) +``` + +- `scipy.integrate.function_name`: Replace this with the specific function you want to use (e.g., `quad`, `odeint`, `dblquad`). +- `*args`: Positional arguments specific to the function. +- `**kwargs`: Keyword arguments that can be used to modify the behavior of the function. + +This structure is applicable for most functions in `scipy.integrate`, where an integration or ODE solving task is defined and then applied to the data, with many functions like `quad()`, `odeint()`, `trapz()`, `dblquad()`, and more, making it versatile for various numerical integration and differential equation tasks. From 291c849db418f2aa42a52dae8ce21193de83692c Mon Sep 17 00:00:00 2001 From: Mamta Wardhani Date: Fri, 20 Dec 2024 22:33:16 +0530 Subject: [PATCH 24/40] [Concept Entry] Python:SciPy: scipy.signal (#5873) * [Concept Entry] Python:SciPy: scipy.signal * Update scipy-signal.md fixed formatting --------- --- .../concepts/scipy-signal/scipy-signal.md | 46 +++++++++++++++++++ 1 file changed, 46 insertions(+) create mode 100644 content/scipy/concepts/scipy-signal/scipy-signal.md diff --git a/content/scipy/concepts/scipy-signal/scipy-signal.md b/content/scipy/concepts/scipy-signal/scipy-signal.md new file mode 100644 index 00000000000..21429a9f7b7 --- /dev/null +++ b/content/scipy/concepts/scipy-signal/scipy-signal.md @@ -0,0 +1,46 @@ +--- +Title: 'scipy.signal' +Description: 'Provides functions for signal processing tasks such as filtering, spectral analysis, and signal generation.' +Subjects: + - 'Computer Science' + - 'Data Science' +Tags: + - 'Algorithms' + - 'Data' + - 'Filter' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +**`scipy.signal`** is a submodule of SciPy that provides tools for signal processing, including filter design, spectral analysis, and convolution. It supports both continuous and discrete signals, with applications in areas like audio processing, communications, and data analysis. Key features include: + +- **Filter Design and Application**: Design and apply various types of filters. +- **Fourier Transform**: Analyze frequency components of signals. +- **Convolution and Correlation**: Apply convolution and correlation for signal processing tasks. +- **Signal Generation**: Generate standard test signals like sinusoids and square waves. + +`scipy.signal` is a powerful tool for working with signals in scientific and engineering fields. + +## Syntax + +Here's a generic syntax outline for using `scipy.signal`: + +```pseudo +import scipy.signal + +# Example: Designing a filter +b, a = scipy.signal.function_name(*args, **kwargs) + +# Example: Applying the filter to a signal +y = scipy.signal.function_name(b, a, x) + +# Example: Signal processing task (e.g., convolution, correlation) +result = scipy.signal.function_name(x, y, *args, **kwargs) +``` + +- `scipy.signal.function_name`: Replace this with the specific function you want to use (e.g., `buttap`, `filtfilt`, `convolve`). +- `*args`: Positional arguments specific to the function. 
+- `**kwargs`: Keyword arguments that can be used to modify the behavior of the function. + +This structure is applicable for most functions in `scipy.signal`, where a signal processing task is defined and then applied to the data, with many functions like `lfilter()`, `wiener()`, `correlate()`, `resample()`, `csd()`, `spectrogram()`, and more, making it versatile for various signal processing tasks. From 193437fe2c7a7721891476ac549d507b37a0fbf4 Mon Sep 17 00:00:00 2001 From: Sriparno Roy <89148144+Sriparno08@users.noreply.github.com> Date: Sat, 21 Dec 2024 12:11:09 +0530 Subject: [PATCH 25/40] [Concept Entry] Sklearn: Linear Discriminant Analysis (#5824) * [Concept Entry] Sklearn: Linear Discriminant Analysis * Apply Suggestions --------- --- .../linear-discriminant-analysis.md | 121 ++++++++++++++++++ 1 file changed, 121 insertions(+) create mode 100644 content/sklearn/concepts/linear-discriminant-analysis/linear-discriminant-analysis.md diff --git a/content/sklearn/concepts/linear-discriminant-analysis/linear-discriminant-analysis.md b/content/sklearn/concepts/linear-discriminant-analysis/linear-discriminant-analysis.md new file mode 100644 index 00000000000..d5820599d46 --- /dev/null +++ b/content/sklearn/concepts/linear-discriminant-analysis/linear-discriminant-analysis.md @@ -0,0 +1,121 @@ +--- +Title: 'Linear Discriminant Analysis' +Description: 'Linear Discriminant Analysis aims to project data onto a lower-dimensional space while preserving the information that discriminates between different classes.' +Subjects: + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Machine Learning' + - 'Scikit-learn' + - 'Supervised Learning' + - 'Unsupervised Learning' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +In Sklearn, **Linear Discriminant Analysis (LDA)** is a supervised algorithm that aims to project data onto a lower-dimensional space while preserving the information that discriminates between different classes. LDA finds a set of directions in the original feature space that maximize the separation between the classes. These directions are called discriminant directions. By projecting the data onto these directions, LDA reduces the dimensionality of the data while retaining the information that is most relevant for classification. + +## Syntax + +```pseudo +from sklearn.discriminant_analysis import LinearDiscriminantAnalysis + +# Create an LDA model +model = LinearDiscriminantAnalysis( + solver='svd', + shrinkage=None, + priors=None, + n_components=None, + store_covariance=False, + tol=0.0001, + covariance_estimator=None +) + +# Fit the model to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) +``` + +- `solver`: The solver to be used. Common options include: + - `svd`: Singular Value Decomposition (default). + - `lsqr`: Least Squares Solution. + - `eigen`: Eigenvalue Decomposition. +- `shrinkage`: Controls the amount of shrinkage applied to the covariance matrix. Common options include: + - `None`: No shrinkage (default). + - `auto`: Automatic shrinkage utilizing the Ledoit-Wolf lemma. +- `priors`: Prior probabilities of the classes. The default value is `None`. +- `n_components`: The number of components. The default value is `None`. +- `store_covariance`: If set to `True`, it explicitly calculates the covariance matrix when `solver` is set to `svd`. The default value is `False`. +- `tol`: The tolerance for the eigenvalue calculation. The default value is `0.0001`. 
+- `covariance_estimator`: Estimates the covariance matrices. The default value is `None`. + +## Example + +The following example demonstrates the implementation of LDA: + +```py +from sklearn.discriminant_analysis import LinearDiscriminantAnalysis +from sklearn.datasets import load_iris +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score + +# Load the Iris dataset +iris = load_iris() +X = iris.data +y = iris.target + +# Create training and testing sets by splitting the dataset +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) + +# Create an LDA model +model = LinearDiscriminantAnalysis() + +# Fit the model to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) + +# Evaluate the model +print("Accuracy:", accuracy_score(y_test, y_pred)) +``` + +The above code produces the following output: + +```shell +Accuracy: 1.0 +``` + +## Codebyte Example + +The following codebyte example demonstrates the implementation of LDA: + +```codebyte/python +from sklearn.discriminant_analysis import LinearDiscriminantAnalysis +from sklearn.datasets import load_iris +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score + +# Load the Iris dataset +iris = load_iris() +X = iris.data +y = iris.target + +# Create training and testing sets by splitting the dataset +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=44) + +# Create an LDA model +model = LinearDiscriminantAnalysis() + +# Fit the model to the training data +model.fit(X_train, y_train) + +# Make predictions on the test set +y_pred = model.predict(X_test) + +# Evaluate the model +print("Accuracy:", accuracy_score(y_test, y_pred)) +``` From 3dc1a363149cc9f4f5fcc3440c0ab715978fff1b Mon Sep 17 00:00:00 2001 From: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com> Date: Sat, 21 Dec 2024 12:22:18 +0530 Subject: [PATCH 26/40] [Term Entry] Python Plotly- graph_objects .Candlestick() (#5819) * New file has been added. * Update user-input.md * Update user-input.md * File has been modified. * Image has been added successfully. * Update content/plotly/concepts/graph-objects/terms/candlestick/candlestick.md * Update content/plotly/concepts/graph-objects/terms/candlestick/candlestick.md * Update content/plotly/concepts/graph-objects/terms/candlestick/candlestick.md * Made the changes to the file. * Changes implemeted. * Update candlestick.md fixed syntax --------- --- .../terms/candlestick/candlestick.md | 88 ++++++++++++++++++ media/candlestick-example.png | Bin 0 -> 19673 bytes 2 files changed, 88 insertions(+) create mode 100644 content/plotly/concepts/graph-objects/terms/candlestick/candlestick.md create mode 100644 media/candlestick-example.png diff --git a/content/plotly/concepts/graph-objects/terms/candlestick/candlestick.md b/content/plotly/concepts/graph-objects/terms/candlestick/candlestick.md new file mode 100644 index 00000000000..d7ff9b6d285 --- /dev/null +++ b/content/plotly/concepts/graph-objects/terms/candlestick/candlestick.md @@ -0,0 +1,88 @@ +--- +Title: '.Candlestick()' +Description: 'Creates candlestick charts to visualize financial data, showing open, high, low, and close values over time.' 
+Subjects: + - 'Data Science' + - 'Data Visualization' +Tags: + - 'Data' + - 'Finance' + - 'Plotly' + - 'Graphs' + - 'Data Visualization' +CatalogContent: + - 'learn-python-3' + - 'paths/data-visualization' +--- + +The **`.Candlestick()`** method in Plotly's [`graph_objects`](https://www.codecademy.com/resources/docs/plotly/graph-objects) module is used to create candlestick charts, widely used for visualizing financial data. A candlestick chart displays four key data points for a specific time period: + +1. **Open**: The starting value of the asset. +2. **High**: The highest value achieved during the time period. +3. **Low**: The lowest value during the period. +4. **Close**: The final value of the asset. + +Candlestick charts are commonly used to identify trends and patterns in stock prices and forex, helping analysts and traders visualize market behavior and make informed decisions. + +## Syntax + +```pseudo +import plotly.graph_objects as go + +go.Candlestick(x=None, open=None, high=None, low=None, close=None, increasing=None, ...) +``` + +- `x`: Represents the x-axis values, typically dates or time intervals for the candlestick chart. +- `open`: Represents the opening price of the asset for each time period. +- `high`: Represents the highest price of the asset for each time period. +- `low`: Represents the lowest price of the asset for each time period. +- `close`: Represents the closing price of the asset for each time period. +- `increasing`: Customizes the appearance of candles in cases where the closing price is higher than the opening price. The line color, width, or other styles can be defined. + +> **Note**: The ellipsis (`...`) indicates that additional optional parameters can be specified to customize the candlestick chart further. + +## Example + +The following code example creates a candlestick chart using Plotly's `.candlestick()` method. The x-axis represents dates or time periods, and the y-axis displays the opening, highest, lowest, and closing prices for each time period. + +```py +import plotly.graph_objects as go + +# Sample data +dates = ['2024-12-01', '2024-12-02', '2024-12-03'] +open_prices = [100, 105, 110] +high_prices = [110, 115, 120] +low_prices = [95, 100, 105] +close_prices = [105, 110, 115] + +# Create the figure +fig = go.Figure(data=[go.Candlestick( + # Dates or time periods for the x-axis. + x=dates, + # Opening prices for each date. + open=open_prices, + # Highest prices for each date. + high=high_prices, + # Lowest prices for each date. + low=low_prices, + # Closing prices for each date. + close=close_prices +)]) + +# Customize layout +fig.update_layout( + title='Sample Candlestick Chart', + xaxis_title='Date', + yaxis_title='Price', + xaxis_rangeslider_visible=False +) + +# Display the figure +fig.show() +``` + +This example generates an interactive candlestick chart that displays the price movements over specific dates. 
+ +The above code generates the following output: + +![Candlestick example Plotly](https://raw.githubusercontent.com/Codecademy/docs/main/media/candlestick-example.png) diff --git a/media/candlestick-example.png b/media/candlestick-example.png new file mode 100644 index 0000000000000000000000000000000000000000..d3aabbeac7bfac4395e64aa93a9c2d31602f2b1a GIT binary patch literal 19673 zcmeHvcT`i`x-TL~K!qp*DlLeJic&-=QbIt*3b+*zk+x|{Mji~e* zFhCR}s5DV2AwZNCltAbK5|TTE;5mDr?RjIIG2VFZ-s>MmR#xU*bItjc-~4^wH*c93 z8Lk)DD!|3Xwf@-AL%(rxaihV%^^i5-n=FWj4ft=B&u@nMTzSnBU%9woT*nR}&R(@1 z?Lpl)p9!W+sKelkhxJ-7-g+uScK=H*MRcod!0+#^)JrnoA&oC?Jby~IN39@AM?C4k>#Z*O`_G51+Tc4US3n8rPxYZR z_&HV67Xw=S#+=7|SjQZO7Uz3u{XQ7~%Xqj``D_J8Om~F2+YK1Ran80JeQyJ^3S0#h$i1s zL+}xrv=)j!0AE$kSm~S#@kVo{AIPT**I+sG{4aY9R~`n@Z(iv~@8J+Pk6n{7*O>12 zkwRd4(942927W*c25}AH_M&fC8EV)apOt=~Pb;_(=ca_O;bK;X3Pq=^UKxd66ECDD zDV?i3`1?CT^}s7jtn~Bil{u6A*Hu;?=INYo^sq#dddjG>9i*reqHA)`_c*>V%g5>j z(n&AU;?x``x1Shg;Y(=O_Eu%qi=xJ>B#HSLHIG6*_H@MT=OmJQvt#jdT%p7k&XVfw z<=riUCJHr`3TX$tJL1qbOh)7SB(r9INcrag4hl+u`cd2^-hk$%ivue8)fZ5r*fYtvWos4)}r!@lTBw&8CMj;P=F=?s4nF&|z!a$ni?Kmth!blPYl zR8L{wyhMNJ8xoDRw?;}AQ{-4**$bEZj|a?pGwMUybf+rtRIPlUkB2*&rN^a3Qy11^ zNJ}sH^a*%DuZNh!eA^mAY@MX*#5_xGzhm;bswmzLZU5F%JlBv<5W$zt)s~ntOBagG zTHlYwy=ImvX?O;ZT7>tE{1%jEDdpi`;H0$at?v0AjG2vA;;{(CwP)?A)V!Ll}QPM*IvRKx@d6Uf2fT*QL|)e7<9&;dPEx zcz-GMIFB7Ni&V9HAJimUL3duHW(YIgwK!!qagQA0^LjsYsu)e={XC2`NzOts-$-C@0-3yRe)4<~zVVqsEi` zqs&zj_-l}P8tS+Kip_&;Y;N9h!P%HZWMe}bd3Dw02e3U;Ry7^lIFfsI2V^SuV#w8f zjHw}WHz!p4w~->ak}kT#V@||CF7T$N(3@9}ax@Pnp|~jQo{y!>`{qa0G3MDH-;1M1 zU##0CPps|p-?Uy-#6Sn*RX`aQaWBF+O_`er&5pZ1fJy8;WXbD?yspf6W8Ks1vChec zZ1hRY)AM1!^zeM~J6%yc{M)43QKo4$XMP^?pH+k;(~MZ*6O%--8J#O#-{R_1tMxl2 ze48fjvmZ)d3_O3>BinY@)E)0F3CJ+70x`xq5%g-_Qxm#FtJ=T$-*D_L%iQdoW&Ge1 z{Xz>5T@DpIlLk|j^)rBPI69$xc zE>%yqeubUQgO&w8in|q^Vg2a?^Fjr(&!HeY#eg?sa(6Ei*`hlVS)E;4>wCSx+jqQ4 zOwy5!dT72*o%Vo!Sgy(QWRFD82F6in8AFoLv1A(txZ@x-uX+z=&hxGzCaa1`w>Q#` zW`!5@z+7g!%aO)-WG`~)tt8uxXj)jsuvQ%WU<4JMk$Cqi%|PRQ#MvZTAX1!|X3bs; zC(+I{tubuL*^$ZQFGBzs(0mcUZ^ssvV8oD|=e|{tbd6gol8jdrdWsD3 zwF1i@t{hX-m%phodXdd1&2h@&J~zt(YEHZCa@$ zA{;IBEDLg_YK_UoGu0mnZ5%d!80Z_6U`-x;y$ zPw-szCd+tV4~Q|=@(G)f()7T@Nad^Z&_fRKDkka@=d7h#_dZ>WGEP%&-f>F8SqUFd zGAk;eH(S++F}?{STaDCem!+zT#i^!;BVJ822 zUn@+R5DnWT$*;fu4(t;%F?s_GVlQD)r=$m67zN#Eg_&Fdo$7^|OxYhiat#*g?I42Z zLJVVNA@)>=e(PP>CoX>drkJmrSAm}Hnlib0GRV=e$U<4jVZ;9x55%44{bC@F?JUOG z6r^b-(7(<|9duK8F;7w}9Xpv8racN%Li1^6B>0|${@090lA~jBgJS z5B7dJ=RTAlPvrfaS*%zhp0LL)=#md?Bf9I>QK(DE$$AJBtrDHn2liBpNFKXLNkr#B z`P74+vgZt4OHy#L6|-zg++37m`rBG=DGere;)N~$z|+?HaHd)?2GSV`JA9uek6z8JOg7`57o-+L6a=9H#chf z3ue$>yZGJ4M3UnAMsA$BhKaX3>y{krffCbd*5fR42@j z^MBHXSX%M!`E7EyJ^5DK*d|10-XN@DM$&c&W@Fz}D)3!e8IA7ib^swskp>D< zD2f=SYzeq$zD3A=f2TPVoh1EH1@s;%n?1)1(Z^eP--AtSZ3OB|*p7I*YU5w;OCmPg z8$xeK^JJ{~$r|rGxDmY-_P3(RlQ9e$UE|3Zs=pG+h1lHxW_QS{u=k#ZP(tSh^ds3{ z5Bz(z`8B1#gyt`L`b!Tb5x>;pKTr$vY*^66e$hZAeQmoBWimd%D!YTU?7pcCX0tPr zwBk<|iH&QWI&TExsNG-BM8Q589r{}==CT28iKP{YSkd>t7PZo1--k8odylS;QPahL zdm?VcUjs-dV%JzQho&<+E&nqYk2}v}XEa!)e-wBsw_GzB2&N@Ijs&m+3EQWDG2{Ti zrX^PiMe|~jQu`EjFQi0`>3F@CZ#ir5nfkGw^yP8==$k^}n=s|yxmBMS4FvkXp4RfI zWsfgK;?9Bx? 
zpqp|m_Ky*9WDXMB6)R5^ujk)Fenx-%%-g=c^H!{rym8vucstQm*q8P=Rx2x@1Ky?7 z%({d_&Kl!7>}ldm?Q<-(N% z?;fKkVMbYP*vUutuBj$tB3DB#^fz`+Pn%Pfs%JzPnSNDdQ5#msrM)+@6$%Dz%7~Qe ziOg?Ow%mrZ(U-)CG^O%q9}M3a5Llx!5Pp3&QCRsHbPcQsys<75D+I*}t zq%-g(i%zfhpe3@@7>jh4*lcxxa&7-x~{A-d_B=`)f9?i~N#}Uz+mEGJH4T|9f@8RRk^M-u-*a zSjek@u?xMIV>sc)6V6r_)fFu|X;Z}RE--33=2?X_*Qq%ZrtQ(gr$J z>~{wTnd)kAo+v>TkP+I#Fs4zgiS}4_QkwTP5*=3?+b|~yEpuJZ>PQ~MP2S@Ib z=nh^>EBG4QV)VsC!9Nl=3wLIhP)KpwWz>h&%v4lCxM9HP>VymK&Zo4i?$!G@fAuWn z6!qFSxN|vpfc0*nnE=n3DLy>FaQo> zwXaw3cLUGxtT&cW4=cDw5=NTf7YB`rbfK|Q;}F}-Chd{R0^gA`C}NoG9p-ftHegC% z#kxK{7Us1xqjh3FxGKN13FVt`O4t3--W02*tTic`j+IqifjRd+6Me{nc<EA8O2p4*s>~L_K}($(%~bKyEqYbaiy(qbwRj<&9BzE z9igF}XKEP-`-DUT`Ty;~4v0PSHXw-r`dZXBFrJjC$%tCBAOLurgbx6yjfw)Wmbvc~ zltA6|4+8kT&{06^{swrd(w=RQ1NgH~_-hhVK@WgQ0<7((QHcHq<-6WtCZwHUlX1c^ z2^LA*1xCqf1;}3Ti&er~z}_M1e?eo7ZHUgm;ey=Z7xmR;{a>re;z1LnW=|%OnVrXB zS*OX51&t?@k261mxl_yp>^f=hpC*mdL1{ph)xi;c%gHZvB5gXT-D6=G;|6wMZX>gj zWp*F?%krEvPd(6ni^@N!aSXap$Pc)`9y*d{5KIYUi=TCA%y$W^0EW9V|EgOBCMa?trc0i3kYdIjbFU`C8tYyH7n7ypefq+{@!DE3C z#dayuUu{HtydZFx#(zkg#9dgUpgL}AV%5z*=I-kFdA+KvUkWjo|dcj;vPNoc&fEaX7(rYxW@RL&~R-!TK7g|Nn6j~xuFIyTEZq?N^xMY*@& z%dvx}TPkgDtoT=sEWZ`ehpYW$6_I-&iEt0yFiqOX=oG587u$K$J%J)-k2S^n5=ew2Y*d#QEWI^03Z#2vIgLl zrlnmD$XDnOlg0ki<&hMsq4LTlOOx6)`am`g7R0Lm|(Wqo$D zYI#S9I98-ld8Pe2vj5(7R-keXr|s&WwYC3Y0^>R1o>8YFb8)|X|IRU+VAp;4=Xnp3 zIK;ruCZ-VPHY0xgNUi5y8OWl%^iLKs`YKcrWR#7aE8I?0f41Y_{PO?2+-vNR2fQAt z|ItjVStiZy`5Va~hT-!+f5Tz;YZv21HvP~baL<5X9RT-|Fg&d z`2@GRHlVlJZtDk@`}Ch|%H5@S;QzG7e}V4C)XZSMoH^H-F#zsb;Y(BYJbLY}7qZm*mcv(bwFvh)ey4Wizv!Ioy%57{eOEs$JjcXVyC_MAG_iVcz4mA1Q&kqmY%yc4i7JvWGEQGr<@o|N zha}T0=VgXaJ*cN2vg+j1<|0SC*cUQm3cvBuN8sM6R_cKvZ9>IG^&NrKo59{4Wb@g% z>QXvxc3KGUnvm{OSmjEni_>Wyab{a4?+f|eF(zj9N1h$Vo1KN_Ba2t(OSXwVpP#nO zI+a&(jXl<(3&8{kelbCuo54h){6=EJ#vc`vU1AmHY8{xD>__Kr#UlF-%_x?#KFJxYm~A8nA#08Fs0Cl1k@85=_D+!u?Ej<6 zn~7%ck$UphlR96v*w8@6X2P3AYV5Y!kU`jc)TAS@vP1PHAl>@gjhu%|3~CuSfaN&=Civd~B8gGkN>u>RWqFrr zfdGuVuR)lJDkq^sYMhRC#=y?5dXNCPala<>m#F+wgkM%=$!6x5 zLUl{LBXn>K-jNC)e|Sm4tYkqZcDj?3<(b9yg;}=F0Y&?X{qleT~f$q&LM`qb6T_BdJlbfYTGsbjAdJ77wg; zXHQPj-M_}zICnIq!MoM#7i2+zF&e6^P6C;Lk(x3fUr^I2V{6`T5wPwT!5F|`WEECL zzQ#$3!)IfNl%#x^nGOWyIpRzuG17|xj@qZ?+V{1KxNqG@;rf_T z_2~4Het9Fo+HYO=TEmib7&GbR&7#$*n8*W7-;9LOZ9T~cCAR?Hgam?|AcczL z*LmYq3_Z}=)(5I+6?BU1Baj@f9PwZ`s(bRfF{!xB3k z=GQJ8ksi(W)47D#VC?3Ek{P0Q0;^8jHf}C1v~S!!cg1BQ%@@fm@wPu082)&;9a8gz z0s(p8<@A?dj;9Nb;H!ygfrET;+5}loiBei6GwhUxd!^Mvyma<}iW8r257mEG6^=Z- z1d4c_PNq7TvaQ=`atdcHY(LC-tJ&3PXw5CO?v{2MjM=*s3pZiFBt~0grc*^z9|VlW z=^z>8pf0Dh;I!aLF*D@QkqQx>^AHCiRnx_<(r(?uqMv2@6Bh?%?D6)a;a(;9cbzDg zb!=OBWx9R)+x?uChe|q_zbVo%aVNW0e-!{L7r-R)Bk?@BGUT!1CC~ z=j%gnr~UNy5$GCCu-`k%*?c+VfG3Zy!p8(_aS78Pl(a{+ixj zg7iyeei?`V-1ffm{wL~39Q6VOIQAy0V<*_j@PxmlvQ4Mo0j?+d?K!Z$Y>#bw2lmB- zO`zsLHEjG;=n8_8REXcd%?pG}+S`hIr7rm3nkwAPWFh9hWM@~Lrw7w-3PcOwKXV0N zlDRoCW>Q+vb8k~G5JGNW4(SZp_-6K$JIEk52UgEmg2?2N3-O9_deHd3z6cBgz$nD` zG`C1i|H*-44x;S&kO9hf#QQR_0#NlbT`|jGy|L*t4k?c`__w?XNnsgyHO|vP4 zp@TpFII!6&H|gc;3HvD8;PcVrRngAf=K$OwbQboPRv(e|3|Ol6csOegQ7xnH{sQ9T zqw?h-yKk$|7Il=vD&lf5C`v4js&)0#Qbu0muB-zgP-(dwP1z$RwhI=*ppXQC^61*L z_5pu?eOGps=Svxg!*VYE4lGp@y;5rCGQaKK5NtpOWWpyHpZZ5pN{8t0TUQ}KK0UAY z5d)s9Z#2-YR|PZ3cVTWtt|8@fcbkkmMQD|OCq$RgCNxf%O#-ZR0XE!$^Oew8Der?W70a|E<3NQ>=k2_}g&uY`#oA*ez>E`bxluO=*kr1` zU&y-`o{aU$KExcmiuIzWPJGUYvUCF(t+QIg%drm(`7S7yai6j$Ewo$teq7IqX?(%f zhF#*H200mXxN%plv@yTWMDq2uLrv@Ps!fzHUOh&0p~be2Kv&;+oZ8A4&-=w{M6GK% zy;htrb@A}bOcrv6QG7G-40A0#NA*c(xT}XeMAp{u)m96rN8seMYShgE%5h}JHn#V@ zTm~k+%a%Q%E~eg0Pi>)=IbwU&%p7R9jpyTklOvjH7gg?zHrJ 
zBK$Uct~i-0j8$}J)@tEWmveLHwF=doFEAzLApNIYOMlOXH9l5mkeLFfX2m)`SO!q% z#pr55x7kVVtb`d8b7Z;xx$HAJxmqL%G|?nm`hA^3m#R`?D@}TD6Xl|oIqQ7S5lg;C zv8aQgq0QIL3e74X^)PQ9pYUm-4C`H-uL!KaEyt{N_>^jqiZkG`qvRM4?p?nLT?cO& zyDAzU0iWw7;O`aVVYm)$H+p_`n^F3{XA{Z|^Quo~>iT7q(+h;GRXtx~Quu35`|m#H z_uLffv{sa@9k??;+c}sVH%RpXsXZCqy@PO*DWCm}FRbhJSW^C;0P@57I-%b$k&d*D z>mfRs6rw-&>sl0v9&?wvAr9pKbk5-l=Y5RF<|qX6T?3s1I~a_zs~$1C&yAg|2c{tU zbWF4O)}4@uR(g|jHHl_7$W(d*LcE1rK9YZ1$Pf(kx{`L{T^0C6@sv;PlX`LUKZ-E{ z5$3skT81Q@cq+!b4P~@{;s2*vvE^hG-8N{uJeKw6?eI8S!us+w7!3J0?%|S?ep^(ERvrr997Du$A! zhqa}Km2bWJyk3I0p>#^3CW4F%>6fn0ePkT0`^}a@-}!k^@d_Y|sa%s6t8XE#9lM## zoJQH?2Hhgm7O`}L#Kp{VmNDL;0e)GsMMk1Y-e^47m10WBO>>_4iec4&nOTf_yi2#` zfR?j*2uophLLBw=zU-h~-{&TAxrS24?*1VPzbLzzG&e85po3Qq)SUsQf8ag^>cqa& zyIimDK0M(BWsd?y|yGg97O`_X%BeO89d?P6+J# zaU#fez*6j}4wd+EzQ{vhrJlp4ew-502X>8p;%~nEIQis1>)>2mt!Ic3){K^J8br(O zW0b}D8|N~S@xcQN1f_7w>g5wpt^gig-Ql`AYU+(&H9bH0ZaXu(Dhc)Y*aTL`!E=_e zQW)5ejV)P-+PLMi%0bJjOUmM*%QjRO)Mrr$pP<~i+~XfC#AbQ>ZSh)VZAvaJDAWXM zK?v`XKHu%cu`?HB2KF3UIw}Qr6od)F93HT^>&SbPgLg23>*{VcyuZ)PYv5$P<8!8;==o3wTQ>a25X)X8P%~r$C2NzuiM?69GR;X~(&trr zw%>ub9f&3+m%yjLa%}^dhd(;xc&o}qd&?b!k5kx z;Z&mc0k?AXjM?`E0N3|6!Z=AXf}$%A{r_wbNAw=L&IFK{on;}|p!O|isILxSNhu9Z zaR(o9j&^Z3)^%L=%Ey}~>%he8-3EGd8%I!)z4me!w*?f`W#DaIyZ2PrnqBF!2OOg( zz{Z{cPrD`GRyxELdU_jRChtWJc)(2jLV;*Jpn{ypy|_^i%h|glqXfCN;wPKTg`KUH zYeCS}kz$nY;(tg~$Gqo?L4yTNH@QnNt| z#%`^pw6B3XEBuiU70|>EN9p-C!X7DHKs!z<9@%~P2f^ z(W}YTxe?(IQv zxWglEM{xd3#Zq-}VuTR8OaPp+B1fsM!`wIA6e`X+?}!r+6pmKmfj*7b!wGFc(JOSK zD4P@$;aFM*nfi^q%(+Hk?(zxL;M(0K#&_G7?$_7~B=2DgINYc?BL4_i=xu>@Gmq}2 zi> zIl|z4)w%8L7hyxZHX9JKMm?aIz-4tH}-N)efdoU207EC87;md(-0u!;V_Fe zE7wO&fnxbmG(3JN(sxU=<;S%t(4)50{_-D+1Os0{?m7NATk0&(qqBa|{YSv_mc9%F tng%->zjW#hN6~%_Vr6FjW+vv?z7p^B>UjCU@nl?F#||4E%G0+E`!BGS Date: Sat, 21 Dec 2024 13:10:20 +0530 Subject: [PATCH 27/40] [Concept Entry] Sklearn: Quadratic Discriminant Analysis (#5825) * [Concept Entry] Sklearn: Quadratic Discriminant Analysis * Update quadratic-discriminant-analysis.md minor fixes --------- --- .../quadratic-discriminant-analysis.md | 105 ++++++++++++++++++ 1 file changed, 105 insertions(+) create mode 100644 content/sklearn/concepts/quadratic-discriminant-analysis/quadratic-discriminant-analysis.md diff --git a/content/sklearn/concepts/quadratic-discriminant-analysis/quadratic-discriminant-analysis.md b/content/sklearn/concepts/quadratic-discriminant-analysis/quadratic-discriminant-analysis.md new file mode 100644 index 00000000000..de6d3e83781 --- /dev/null +++ b/content/sklearn/concepts/quadratic-discriminant-analysis/quadratic-discriminant-analysis.md @@ -0,0 +1,105 @@ +--- +Title: 'Quadratic Discriminant Analysis' +Description: 'Quadratic Discriminant Analysis is a technique that models each class with a quadratic decision boundary, assuming different covariance matrices for each class.' +Subjects: + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Machine Learning' + - 'Scikit-learn' + - 'Supervised Learning' + - 'Unsupervised Learning' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +In Sklearn, **Quadratic Discriminant Analysis (QDA)** is a classification technique that assumes that the data points within each class are normally distributed. Unlike **Linear Discriminant Analysis (LDA)**, which assumes a shared covariance matrix for all classes, QDA enables each class to have its own covariance matrix. This flexibility enables QDA to model more complex decision boundaries, making it suitable for datasets with overlapping classes or non-linear relationships between features. 
+ +## Syntax + +```pseudo +from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis + +# Create a QDA model +model = QuadraticDiscriminantAnalysis(priors=None, reg_param=0.0, store_covariance=False, tol=0.0001) + +# Fit the model to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) +``` + +- `priors`: The prior probabilities of the classes. If `None`, the class distribution is estimated from the training data. If specified, it should sum to 1. This allows control over the importance of each class. +- `reg_param`: The regularization parameter. A value greater than 0 applies regularization to the covariance estimates. Regularization can help in cases where the covariance matrices might be singular or near-singular. +- `store_covariance`: Whether to store the covariance matrices for each class. If `True`, the covariance matrix is explicitly computed and stored when `solver='svd'`. If `False`, it will not store the covariance matrices but will use them for prediction during training. +- `tol`: The tolerance value for the eigenvalue decomposition when using `solver='eigen'`. This helps control the precision of the eigenvalue computation. + +## Example + +The following example demonstrates the implementation of QDA: + +```py +from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis +from sklearn.datasets import load_iris +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score + +# Load the Iris dataset +iris = load_iris() +X = iris.data +y = iris.target + +# Create training and testing sets by splitting the dataset +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) + +# Create a QDA model +model = QuadraticDiscriminantAnalysis() + +# Fit the model to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) + +# Evaluate the model +print("Accuracy:", accuracy_score(y_test, y_pred)) +``` + +The above code produces the following output: + +```shell +Accuracy: 1.0 +``` + +## Codebyte Example + +The following codebyte example demonstrates the implementation of QDA: + +```codebyte/python +from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis +from sklearn.datasets import load_iris +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score + +# Load the Iris dataset +iris = load_iris() +X = iris.data +y = iris.target + +# Create training and testing sets by splitting the dataset +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=44) + +# Create a QDA model +model = QuadraticDiscriminantAnalysis() + +# Fit the model to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) + +# Evaluate the model +print("Accuracy:", accuracy_score(y_test, y_pred)) +``` From b58b0f874e94ec4b08f3048e0d6ef2273b6475c9 Mon Sep 17 00:00:00 2001 From: Daksha Deep Date: Sat, 21 Dec 2024 14:10:44 +0530 Subject: [PATCH 28/40] Created the `stats-optimize` concept file (#5878) * Create the stats optimize file * Changes on syntax * lint fix * Update scipy-optimize.md minor fixes --------- --- .../concepts/scipy-optimize/scipy-optimize.md | 67 +++++++++++++++++++ 1 file changed, 67 insertions(+) create mode 100644 content/scipy/concepts/scipy-optimize/scipy-optimize.md diff --git a/content/scipy/concepts/scipy-optimize/scipy-optimize.md 
b/content/scipy/concepts/scipy-optimize/scipy-optimize.md new file mode 100644 index 00000000000..b5bd8b74c75 --- /dev/null +++ b/content/scipy/concepts/scipy-optimize/scipy-optimize.md @@ -0,0 +1,67 @@ +--- +Title: 'scipy.optimize' +Description: 'The Optimize module in SciPy has algorithms for optimization and root-finding, solving tasks like curve fitting, parameter estimation, and resource allocation.' +Subjects: + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Python' + - 'Optimization' + - 'Mathematics' +CatalogContent: + - 'learn-python' + - 'paths/data-science' +--- + +The **`scipy.optimize`** module is part of the [SciPy](https://www.codecademy.com/resources/docs/scipy) library for scientific computing in [Python](https://www.codecademy.com/resources/docs/python). It provides a variety of optimization and root-finding routines designed to solve mathematical problems, such as finding minima or maxima of functions, solving systems of equations, and performing linear or nonlinear optimizations. Whether tuning model parameters, allocating resources, or fitting complex curves, `scipy.optimize` offers a rich toolbox for improving decision-making and model performance. + +## Functions in `scipy.optimize` + +### Minimization + +Minimizes a scalar function (i.e., finds the values that minimize the objective function). It has the following syntax: + +```pseudo +optimize.minimize(fun, x0, method=...) +``` + +- `fun`: The objective function to minimize. +- `x0`: Initial guess. +- `method`: Algorithm to use (e.g., `'BFGS'`, `'Nelder-Mead'`, etc.). + +### Root-Finding + +Finds the roots (or solutions) of a function, i.e., the points where the function equals zero. It has a syntax: + +```pseudo +optimize.root(fun, x0, method=...) +``` + +- `fun`: The function for which the root is sought. +- `x0`: Initial guess. +- `method`: Algorithm to use (e.g., `'hybr'`, `'broyden1'`). + +### Linear Programming + +Solves linear optimization problems, such as maximizing or minimizing a linear objective function subject to linear constraints: + +```pseudo +optimize.linprog(c, A_ub=..., b_ub=..., A_eq=..., b_eq=..., bounds=..., method='highs') +``` + +- `c`: Coefficients of the linear objective function. +- `A_ub`, `b_ub`: Inequality constraints. +- `A_eq`, `b_eq`: Equality constraints. +- `bounds`: Variable bounds. + +### Curve Fitting + +Fits a model to observed data by performing nonlinear least squares fitting, finding the parameters that minimize the difference between the observed data and the model. The syntax is: + +```pseudo +optimize.curve_fit(f, xdata, ydata, p0=...) +``` + +- `f`: The model function, `f(x, …)`. +- `xdata`, **ydata**: The observed data. +- `p0`: Initial guess for the parameters. 
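+
+For instance, a minimal end-to-end sketch (assuming NumPy and SciPy are installed, with made-up noisy data) that fits a straight line to the data with `curve_fit`:
+
+```py
+import numpy as np
+from scipy import optimize
+
+# Model to fit: a straight line y = a*x + b
+def line(x, a, b):
+  return a * x + b
+
+# Made-up noisy data generated around a = 2, b = 1
+rng = np.random.default_rng(0)
+xdata = np.linspace(0, 10, 50)
+ydata = line(xdata, 2.0, 1.0) + rng.normal(scale=0.5, size=xdata.size)
+
+# Estimate the parameters a and b from the data
+params, covariance = optimize.curve_fit(line, xdata, ydata, p0=[1.0, 0.0])
+
+print(params)  # approximately [2.0, 1.0]
+```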
From 36b45e010387ffaa481e078ae195fa8f0432929c Mon Sep 17 00:00:00 2001 From: Sriparno Roy <89148144+Sriparno08@users.noreply.github.com> Date: Sun, 22 Dec 2024 11:20:23 +0530 Subject: [PATCH 29/40] [Concept Entry] Sklearn: Stochastic Gradient Descent (#5822) * [Concept Entry] Sklearn: Stochastic Gradient Descent * Update content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md Co-authored-by: Pragati Verma * Fix Formatting --------- --- .../stochastic-gradient-descent.md | 134 ++++++++++++++++++ 1 file changed, 134 insertions(+) create mode 100644 content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md diff --git a/content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md b/content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md new file mode 100644 index 00000000000..ff1fa8b427f --- /dev/null +++ b/content/sklearn/concepts/stochastic-gradient-descent/stochastic-gradient-descent.md @@ -0,0 +1,134 @@ +--- +Title: 'Stochastic Gradient Descent' +Description: 'Stochastic Gradient Descent (SGD) aims to find the best set of parameters for a model that minimizes a given loss function.' +Subjects: + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Machine Learning' + - 'Scikit-learn' + - 'Supervised Learning' + - 'Unsupervised Learning' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +In [Sklearn](https://www.codecademy.com/resources/docs/sklearn), **Stochastic Gradient Descent (SGD)** is a popular optimization algorithm that focuses on finding the best set of parameters for a model that minimizes a given loss function. + +Unlike traditional [gradient descent](https://www.codecademy.com/resources/docs/ai/search-algorithms/gradient-descent), which calculates the gradient using the entire dataset, SGD computes the gradient using a single training example at a time. This makes it computationally efficient for large datasets. + +Sklearn provides two primary classes for implementing SGD: + +- `SGDClassifier`: Well-suited for classification tasks. Supports various loss functions and penalties for fitting linear classification models. +- `SGDRegressor`: Well-suited for regression tasks. Supports various loss functions and penalties for fitting [linear regression models](https://www.codecademy.com/resources/docs/sklearn/linear-regression-analysis). 
+ +## Syntax + +Following is the syntax for implementing SGD using `SGDClassifier`: + +```pseudo +from sklearn.linear_model import SGDClassifier + +# Create an SGDClassifier model +model = SGDClassifier(loss="hinge", penalty="l2", max_iter=1000, random_state=42) + +# Fit the classifier to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) +``` + +Following is the syntax for implementing SGD using `SGDRegressor`: + +```pseudo +from sklearn.linear_model import SGDRegressor + +# Create an SGDRegressor model +model = SGDRegressor(loss="squared_loss", penalty="l2", max_iter=1000, random_state=42) + +# Fit the regressor to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) +``` + +- `loss`: Specifies the loss function. + - For `SGDClassifier`, the options include `hinge` (default), `log`, and `modified_huber`. + - For `SGDRegressor`, the options include `squared_loss` (default), `huber`, and `epsilon_insensitive`. +- `penalty`: Specifies the regularization penalty. Common options include `l2` (L2 regularization, default), `l1` (L1 regularization), and `elasticnet` (a combination of L1 and L2 regularization). +- `max_iter`: Specifies the maximum number of iterations for the optimization algorithm. The default value is `1000`. Excessive values can lead to overfitting or unnecessary computations. +- `random_state`: Specifies the random seed for reproducibility. The default value is `None`. Setting `random_state` ensures consistent results across runs by fixing the randomness of data splitting or model initialization. + +## Example + +The following example demonstrates the implementation of SGD using `SGDClassifier`: + +```py +from sklearn.datasets import load_iris +from sklearn.linear_model import SGDClassifier +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score + +# Load the Iris dataset +iris = load_iris() +X = iris.data +y = iris.target + +# Create training and testing sets by splitting the dataset +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) + +# Create an SGDClassifier model +model = SGDClassifier(loss="hinge", penalty="l2", max_iter=1000, random_state=42) + +# Fit the classifier to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) + +# Evaluate the model's accuracy +accuracy = accuracy_score(y_test, y_pred) +print("Accuracy:", accuracy) +``` + +The above code produces the following output: + +```shell +Accuracy: 0.8 +``` + +## Codebyte Example + +The following codebyte example demonstrates the implementation of SGD using `SGDRegressor`: + +```codebyte/python +from sklearn.datasets import load_diabetes +from sklearn.linear_model import SGDRegressor +from sklearn.model_selection import train_test_split +from sklearn.metrics import mean_squared_error + +# Load the Diabetes dataset +diabetes = load_diabetes() +X = diabetes.data +y = diabetes.target + +# Create training and testing sets by splitting the dataset +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) + +# Create an SGDRegressor model +model = SGDRegressor(loss="squared_loss", penalty="l2", max_iter=1000, random_state=42) + +# Fit the regressor to the training data +model.fit(X_train, y_train) + +# Make predictions on the new data +y_pred = model.predict(X_test) + +# Evaluate the model's performance +m2e = 
mean_squared_error(y_test, y_pred) + +print("Mean Squared Error:", m2e) +``` From 186d5cc764f5cda4c4289a1fd02db3bdeeea478c Mon Sep 17 00:00:00 2001 From: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com> Date: Sun, 22 Dec 2024 11:57:42 +0530 Subject: [PATCH 30/40] [Concept Entry] Sklearn multiclass-classification (#5814) * New file has been added. * Update user-input.md * Update user-input.md * File has been modified. * Update content/sklearn/concepts/multiclass-classification/multiclass-classification.md * Update content/sklearn/concepts/multiclass-classification/multiclass-classification.md * Update content/sklearn/concepts/multiclass-classification/multiclass-classification.md * Incorporated the changes. * Implemented the changes. * Update multiclass-classification.md fixes --------- --- .../multiclass-classification.md | 122 ++++++++++++++++++ 1 file changed, 122 insertions(+) create mode 100644 content/sklearn/concepts/multiclass-classification/multiclass-classification.md diff --git a/content/sklearn/concepts/multiclass-classification/multiclass-classification.md b/content/sklearn/concepts/multiclass-classification/multiclass-classification.md new file mode 100644 index 00000000000..1df559b9805 --- /dev/null +++ b/content/sklearn/concepts/multiclass-classification/multiclass-classification.md @@ -0,0 +1,122 @@ +--- +Title: 'Multiclass Classification' +Description: 'Multiclass classification is a supervised machine learning task where instances are categorized into one of three or more distinct classes.' +Subjects: + - 'AI' + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Classification' + - 'Multitask Learning' + - 'Scikit-learn' + - 'Supervised Learning' +CatalogContent: + - 'learn-python-3' + - 'paths/intermediate-machine-learning-skill-path' +--- + +In [Sklearn](https://www.codecademy.com/resources/docs/sklearn), **Multiclass Classification** is a supervised machine learning task where instances are categorized into one of three or more distinct classes. Unlike binary classification, which involves two classes, multiclass classification requires the model to differentiate among multiple categories. + +Multiclass classification in Sklearn is implemented using algorithms such as [`Decision Trees`](https://www.codecademy.com/resources/docs/sklearn/decision-trees), [`Support Vector Machines (SVMs)`](https://www.codecademy.com/resources/docs/sklearn/support-vector-machines), and `Logistic Regression`. These algorithms handle multiple classes through strategies like One-vs-Rest (OvR) or One-vs-One (OvO), depending on the model and configuration. + +> **Note:** Sklearn offers many algorithms for multi-class classification. + +## Syntax + +Sklearn offers a variety of algorithms for multiclass classification. 
Below is an example syntax for performing multiclass classification using `RandomForestClassifier` in sklearn: + +```pseudo +from sklearn.datasets import make_classification +from sklearn.model_selection import train_test_split +from sklearn.ensemble import RandomForestClassifier # Replace with your classifier +from sklearn.metrics import classification_report + +# Generate a synthetic dataset +X, y = make_classification(n_samples=1000, n_features=20, n_classes=3, random_state=42) + +# Split the dataset into training and testing sets +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) + +# Create the classifier (can be any model that supports multiclass classification) +clf = RandomForestClassifier(random_state=42) + +# Fit the model +clf.fit(X_train, y_train) + +# Make predictions +y_pred = clf.predict(X_test) + +# Evaluate the model +print(classification_report(y_test, y_pred)) +``` + +## Example + +The following example code loads the `iris` dataset, split it into training and testing sets (80% training, 20% testing), then train a `RandomForestClassifier`, make predictions on the test data, calculates and prints the accuracy of the model: + +```py +from sklearn.datasets import load_iris +from sklearn.model_selection import train_test_split +from sklearn.ensemble import RandomForestClassifier +from sklearn.metrics import accuracy_score + +# Load the Iris dataset (for multiclass classification) +data = load_iris() +X, y = data.data, data.target + +# Split the dataset into training and testing sets (80% train, 20% test) +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) + +# Initialize the RandomForestClassifier +model = RandomForestClassifier() + +# Train the model on the training data +model.fit(X_train, y_train) + +# Make predictions on the test data +y_pred = model.predict(X_test) + +# Evaluate the model by calculating accuracy +accuracy = accuracy_score(y_test, y_pred) + +# Print the accuracy of the model +print(f"Accuracy: {accuracy:.2f}") +``` + +The code outputs the following output: + +```shell +Accuracy: 1.00 +``` + +## Codebyte Example + +The following codebyte example trains a `Random Forest classifier` for multiclass classification on synthetic data and predicts the category of a new product: + +```codebyte/python +from sklearn.ensemble import RandomForestClassifier +from sklearn.datasets import make_classification +from sklearn.model_selection import train_test_split +from sklearn.metrics import accuracy_score + +# Generate synthetic data for multiclass classification (3 classes) +X, y = make_classification(n_samples=1000, n_features=20, n_classes=3, random_state=42) + +# Split the dataset into training and testing sets (80% train, 20% test) +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) + +# Initialize the RandomForestClassifier +model = RandomForestClassifier() + +# Train the model on the training data +model.fit(X_train, y_train) + +# Make predictions on the test data +y_pred = model.predict(X_test) + +# Evaluate the model by calculating accuracy +accuracy = accuracy_score(y_test, y_pred) + +# Print the accuracy of the model +print(f"Accuracy: {accuracy:.2f}") +``` From 88e1876013af72b8cf37add6f4e59210594b9b46 Mon Sep 17 00:00:00 2001 From: Daksha Deep Date: Sun, 22 Dec 2024 13:29:11 +0530 Subject: [PATCH 31/40] Created the `scipy-stats` concept file (#5877) * Created the scipy stats file * syntax update * Update scipy-stats.md * Formating fixes * 
Update scipy-stats.md minor fixes --------- --- .../scipy/concepts/scipy-stats/scipy-stats.md | 79 +++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100644 content/scipy/concepts/scipy-stats/scipy-stats.md diff --git a/content/scipy/concepts/scipy-stats/scipy-stats.md b/content/scipy/concepts/scipy-stats/scipy-stats.md new file mode 100644 index 00000000000..0c934af1fa4 --- /dev/null +++ b/content/scipy/concepts/scipy-stats/scipy-stats.md @@ -0,0 +1,79 @@ +--- +Title: 'scipy.stats' +Description: 'scipy.stats is a Python module offering statistical functions, distributions, and hypothesis tests for data analysis.' +Subjects: + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Distributions' + - 'Hypothesis Testing' + - 'Python' + - 'Statistics' +CatalogContent: + - 'learn-python' + - 'paths/data-science' +--- + +The **`scipy.stats`** module is part of the broader [SciPy](https://www.codecademy.com/resources/docs/scipy) library for scientific computing in Python. It provides functionality for working with various probability distributions, conducting hypothesis tests, and computing descriptive statistics. By leveraging `scipy.stats`, data scientists and analysts can quickly explore their data, model it using theoretical distributions, and draw meaningful conclusions through statistical inference. + +## Probability Distributions + +`scipy.stats` provides a wide range of distributions (e.g., Normal, Exponential, Binomial) with methods to work with them. For example, for the Normal distribution: + +```pseudo +stats.norm.pdf(x) # Probability Density Function +stats.norm.cdf(x) # Cumulative Distribution Function +stats.norm.rvs(size=n) # Generate random samples +``` + +- `pdf`: Returns the probability density function (PDF) value at a given point for continuous distributions.. +- `cdf`: Gives the probability that a random variable is less than or equal to a certain value. +- `rvs`: Draws random samples from the specified distribution. + +These methods can be used with other distributions available in `scipy.stats` by replacing norm with the desired distribution (e.g., `expon`, `binom`). + +## Descriptive Statistics + +Compute common statistical measures with both `numpy` and `scipy.stats`: + +```pseudo +np.mean(data) +np.median(data) +stats.mode(data) +stats.describe(data) +``` + +- `mean()`: Computes the average value of the data. +- `median()`: Finds the middle value separating the higher and lower halves of the data. +- `mode()`: Returns the most frequently occurring value (for multi-modal data, it returns the smallest mode). +- `describe()`: Provides a quick summary of the data, including count, min, max, mean, variance, skewness, and kurtosis. + +> **Note**: While `mean` and `median` are part of `numpy`, `mode` and `describe` belong to `scipy.stats`. + +## Hypothesis Testing + +Perform a variety of statistical tests to assess differences or relationships: + +```pseudo +stats.ttest_ind(group1, group2) # Independent t-test +stats.chisquare(observed, expected) # Chi-square test +stats.mannwhitneyu(group1, group2) # Mann-Whitney U test +``` + +- `ttest_ind()`: Checks if the means of two independent samples differ significantly. +- `chisquare()`: Compares observed frequencies to expected frequencies for a goodness-of-fit test. +- `mannwhitneyu()`: Tests for differences in the distribution of two independent samples (non-parametric). 
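To see how these calls fit together, here is a small runnable sketch; the sample arrays are made-up illustrative values, not data from any real study:

```py
import numpy as np
from scipy import stats

# Hypothetical measurements from two independent groups
group1 = np.array([2.1, 2.5, 2.8, 3.0, 3.2, 2.9])
group2 = np.array([3.4, 3.8, 3.6, 4.0, 3.9, 3.7])

# Independent t-test: compares the means of the two groups
t_stat, t_p = stats.ttest_ind(group1, group2)
print("t-test:", t_stat, t_p)

# Mann-Whitney U test: non-parametric comparison of the two groups
u_stat, u_p = stats.mannwhitneyu(group1, group2)
print("Mann-Whitney U:", u_stat, u_p)

# Chi-square goodness-of-fit: observed vs. expected category counts
observed = [18, 22, 20, 40]
expected = [25, 25, 25, 25]
chi_stat, chi_p = stats.chisquare(observed, expected)
print("Chi-square:", chi_stat, chi_p)
```

Each call returns a test statistic and a p-value, which can then be compared against a chosen significance level.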
+ +## Correlation and Regression + +Evaluate relationships between variables: + +```pseudo +stats.pearsonr(x, y) # Pearson correlation +stats.spearmanr(x, y) # Spearman rank correlation +stats.kendalltau(x, y) # Kendall’s Tau correlation +``` + +- `pearsonr()`: Measures linear correlation between two datasets. +- `spearmanr()`: Measures rank-based correlation, less sensitive to non-linear relationships. +- `kendalltau()`: Measures the association between two measured quantities using rank correlation. From 82ceb2365eab0a828820baa277db7951f095c203 Mon Sep 17 00:00:00 2001 From: goldleo1 <97662958+goldleo1@users.noreply.github.com> Date: Sun, 22 Dec 2024 17:16:54 +0900 Subject: [PATCH 32/40] [Edit] Lua Strings: .lower() --- content/lua/concepts/strings/terms/lower/lower.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/lua/concepts/strings/terms/lower/lower.md b/content/lua/concepts/strings/terms/lower/lower.md index 18f9b5c09df..078483c8fee 100644 --- a/content/lua/concepts/strings/terms/lower/lower.md +++ b/content/lua/concepts/strings/terms/lower/lower.md @@ -1,5 +1,5 @@ --- -Title: 'lower()' +Title: '.lower()' Description: 'Returns a copy of the string given, with all uppercase characters transformed to lowercase.' Subjects: - 'Code Foundations' From 9961926963490bc3adf9c9d4b5f6bdd6494a152b Mon Sep 17 00:00:00 2001 From: codecademydev Date: Sun, 22 Dec 2024 13:05:47 +0000 Subject: [PATCH 33/40] =?UTF-8?q?=F0=9F=A4=96=20update=20concept=20of=20th?= =?UTF-8?q?e=20week?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- bin/concept-of-the-week.txt | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/bin/concept-of-the-week.txt b/bin/concept-of-the-week.txt index b2e6e340036..d1c5b89855e 100644 --- a/bin/concept-of-the-week.txt +++ b/bin/concept-of-the-week.txt @@ -1 +1 @@ -content/ruby/concepts/gems/gems.md \ No newline at end of file +content/c/concepts/user-input/user-input.md \ No newline at end of file From 9813aa15ca52348567d5ee3a7a9c3ad22dc98bca Mon Sep 17 00:00:00 2001 From: Sriparno Roy <89148144+Sriparno08@users.noreply.github.com> Date: Mon, 23 Dec 2024 11:57:06 +0530 Subject: [PATCH 34/40] [Concept Entry] Sklearn: Probability Calibration (#5823) * [Concept Entry] Sklearn: Probability Calibration * Update content/sklearn/concepts/probability-calibration/probability-calibration.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/probability-calibration/probability-calibration.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/probability-calibration/probability-calibration.md Co-authored-by: Pragati Verma * Update content/sklearn/concepts/probability-calibration/probability-calibration.md Co-authored-by: Pragati Verma * Apply Suggestions --------- --- .../probability-calibration.md | 164 ++++++++++++++++++ 1 file changed, 164 insertions(+) create mode 100644 content/sklearn/concepts/probability-calibration/probability-calibration.md diff --git a/content/sklearn/concepts/probability-calibration/probability-calibration.md b/content/sklearn/concepts/probability-calibration/probability-calibration.md new file mode 100644 index 00000000000..9a3928e20a6 --- /dev/null +++ b/content/sklearn/concepts/probability-calibration/probability-calibration.md @@ -0,0 +1,164 @@ +--- +Title: 'Probability Calibration' +Description: 'Probability calibration improves the reliability of predicted probabilities from machine learning models.' 
+Subjects: + - 'Data Science' + - 'Machine Learning' +Tags: + - 'Machine Learning' + - 'Scikit-learn' + - 'Supervised Learning' + - 'Unsupervised Learning' +CatalogContent: + - 'learn-python-3' + - 'paths/computer-science' +--- + +In [Sklearn](https://www.codecademy.com/resources/docs/sklearn), **Probability Calibration** is a technique used to improve the reliability of predicted probabilities from machine learning models. When a model outputs a probability, it makes a statement about the likelihood of a specific outcome. + +A well-calibrated model ensures that these probabilities accurately reflect the true likelihoods, meaning the predicted probabilities align closely with observed outcomes. + +Sklearn provides two primary methods for implementing probability calibration: + +- **Platt Scaling**: Fits a logistic regression model to the model's output probabilities. +- **Isotonic Regression**: Fits a non-parametric isotonic regression model to the model's output probabilities. + +## Syntax + +The `CalibratedClassifierCV` class is used to implement probability calibration. + +Platt Scaling uses a `sigmoid` function to map raw model scores to calibrated probabilities, ensuring they better reflect true likelihoods. + +The sigmoid function, σ(x) = 1 / (1 + e^(-x)), maps any real-valued number to a range between 0 and 1. + +In Platt Scaling, this function is parameterized as: + +P(y=1 | x) = 1 / (1 + e^(-(A \* x + B))) + +Where A and B are parameters learned during calibration. + +Following is the syntax for implementing probability calibration using Platt Scaling: + +```pseudo +from sklearn.calibration import CalibratedClassifierCV +from sklearn.linear_model import LogisticRegression + +# Create a logistic regression classifier +model = LogisticRegression() + +# Calibrate the classifier using Platt Scaling +model_calibrated = CalibratedClassifierCV(model, cv=5, method="sigmoid") + +# Fit the calibrated classifier to the training data +# X_train: Features for the training set; y_train: Target labels for the training set +model_calibrated.fit(X_train, y_train) + +# Make predictions using the calibrated classifier +y_pred_prob = model_calibrated.predict_proba(X_test) +``` + +Isotonic regression is a non-parametric regression technique that fits a piecewise constant, monotonic (increasing or decreasing) function to the data. + +In the context of calibration, the isotonic method uses isotonic regression to map the model's raw probabilities to calibrated probabilities while preserving their relative order. + +Following is the syntax for implementing probability calibration using Isotonic Regression: + +```pseudo +from sklearn.calibration import CalibratedClassifierCV +from sklearn.linear_model import LogisticRegression + +# Create a logistic regression classifier +model = LogisticRegression() + +# Calibrate the classifier using Isotonic Regression +model_calibrated = CalibratedClassifierCV(model, cv=5, method="isotonic") + +# Fit the calibrated classifier to the training data +# X_train: Features for the training set; y_train: Target labels for the training set +model_calibrated.fit(X_train, y_train) + +# Make predictions using the calibrated classifier +y_pred_prob = model_calibrated.predict_proba(X_test) +``` + +- `cv`: The number of cross-validation folds. The default value is `None`. +- `method`: The calibration method. Common options include `sigmoid` (default) and `isotonic`. 
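As a quick numeric illustration of the sigmoid mapping that the `method="sigmoid"` option applies, the sketch below plugs hypothetical parameters `A` and `B` (made-up values, not parameters fitted by `CalibratedClassifierCV`) into the Platt Scaling formula shown earlier:

```py
import numpy as np

# Hypothetical Platt Scaling parameters (for illustration only)
A, B = 1.5, -0.2

# Raw, uncalibrated decision scores from some classifier
raw_scores = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

# P(y=1 | x) = 1 / (1 + e^(-(A * x + B)))
calibrated_probs = 1 / (1 + np.exp(-(A * raw_scores + B)))

print(calibrated_probs)  # Approximately [0.04, 0.28, 0.45, 0.63, 0.94]
```

Every raw score is squashed into the (0, 1) range, which is what allows the output to be interpreted as a probability.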
+ +## Example + +The following example demonstrates the implementation of probability calibration using Platt Scaling: + +```py +from sklearn.datasets import load_diabetes +from sklearn.model_selection import train_test_split +from sklearn.linear_model import LogisticRegression +from sklearn.calibration import CalibratedClassifierCV +from sklearn.metrics import brier_score_loss + +# Load the Diabetes Dataset +diabetes = load_diabetes() +X = diabetes.data +y = (diabetes.target > 126).astype(int) # Convert to binary classification + +# Create training and testing sets by splitting the dataset +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) + +# Create a logistic regression classifier +model = LogisticRegression() + +# Calibrate the classifier using Platt Scaling +model_calibrated = CalibratedClassifierCV(model, cv=5, method="sigmoid") + +# Fit the calibrated classifier to the training data +model_calibrated.fit(X_train, y_train) + +# Make predictions using the calibrated classifier +y_pred_prob = model_calibrated.predict_proba(X_test)[:, 1] + +# Calculate the Brier score +brier_score = brier_score_loss(y_test, y_pred_prob) +print("Brier Score:", brier_score) +``` + +The above code produces the following output: + +```shell +Brier Score: 0.17555317807611756 +``` + +## Codebyte Example + +The following example demonstrates the implementation of probability calibration using Isotonic Regression: + +```codebyte/python +from sklearn.datasets import load_diabetes +from sklearn.model_selection import train_test_split +from sklearn.linear_model import LogisticRegression +from sklearn.calibration import CalibratedClassifierCV +from sklearn.metrics import brier_score_loss + +# Load the Diabetes Dataset +diabetes = load_diabetes() +X = diabetes.data +y = (diabetes.target > 126).astype(int) # Convert to binary classification + +# Create training and testing sets by splitting the dataset +X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42) + +# Create a logistic regression classifier +model = LogisticRegression() + +# Calibrate the classifier using Isotonic Regression +model_calibrated = CalibratedClassifierCV(model, cv=5, method="isotonic") + +# Fit the calibrated classifier to the training data +model_calibrated.fit(X_train, y_train) + +# Make predictions using the calibrated classifier +y_pred_prob = model_calibrated.predict_proba(X_test)[:, 1] + +# Calculate the Brier score +brier_score = brier_score_loss(y_test, y_pred_prob) + +print("Brier Score:", brier_score) +``` From 5da9426c03e29e45d50e6cafbebe0fce6658833b Mon Sep 17 00:00:00 2001 From: Savi Dahegaonkar <124272050+SaviDahegaonkar@users.noreply.github.com> Date: Mon, 23 Dec 2024 12:11:33 +0530 Subject: [PATCH 35/40] [Concept Entry] Sklearn Biclustering (#5821) * New file has been added. * Update user-input.md * Update user-input.md * File has been modified. 
* Update biclustering.md * Update biclustering.md * Update biclustering.md * Update biclustering.md * Update biclustering.md * Update biclustering.md * Update biclustering.md * Update biclustering.md --------- Co-authored-by: shantanu <56212958+cigar-galaxy82@users.noreply.github.com> --- .../concepts/biclustering/biclustering.md | 111 ++++++++++++++++++ 1 file changed, 111 insertions(+) create mode 100644 content/sklearn/concepts/biclustering/biclustering.md diff --git a/content/sklearn/concepts/biclustering/biclustering.md b/content/sklearn/concepts/biclustering/biclustering.md new file mode 100644 index 00000000000..f0ba785c198 --- /dev/null +++ b/content/sklearn/concepts/biclustering/biclustering.md @@ -0,0 +1,111 @@ +--- +Title: 'Biclustering' +Description: 'A technique for grouping rows and columns of a matrix to discover local patterns in data.' +Subjects: + - 'Data Science' + - 'Data Visualization' + - 'Machine Learning' +Tags: + - 'Machine Learning' + - 'Scikit-learn' + - 'Unsupervised learning' +CatalogContent: + - 'learn-python-3' + - 'paths/data-science' +--- + +**Biclustering** is a form of unsupervised machine learning that takes a data matrix and groups both the rows and columns of this matrix to unveil previously unknown patterns. It's  standard in gene expression, text mining, and other recommendation systems and captures more localized relationships than the general clustering method. Scikit-learn provides spectral co-clustering and diagonal biclustering algorithms, implemented as classes with a fit method, enabling efficient pattern discovery in complex datasets. + +## Syntax + +Here's a syntax that shows the implementation of biclustering using sklearn: + +```pseudo +from sklearn.cluster import SpectralCoclustering, SpectralBiclustering + +# For Spectral Co-clustering +model = SpectralCoclustering(n_clusters=number_of_biclusters, random_state=seed) +model.fit(data_matrix) + +# For Spectral Bi-clustering +model = SpectralBiclustering(n_clusters=number_of_biclusters, method="log", random_state=seed) +model.fit(data_matrix) +``` + +- `n_clusters`: Number of biclusters to create. +- `random_state`: Ensures the randomness for reproducible results. +- `method`(For SpectralBiclustering): Specifies the algorithm variant, e.g., `log` or `bistochastic`. The `log` method applies logarithmic scaling, while `bistochastic` normalizes rows and columns. The choice of method can affect the results depending on the dataset. + +> **Note**: Since Bicluster is not directly available in sklearn, alternative methods for biclustering, such as `SpectralBiclustering`, can be used. 
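The examples that follow focus on `SpectralBiclustering`, so here is a minimal sketch of the `SpectralCoclustering` variant from the syntax above, using an illustrative toy matrix. Unlike `SpectralBiclustering`, which assumes a checkerboard structure, co-clustering assigns each row and each column to exactly one bicluster:

```py
import numpy as np
from sklearn.cluster import SpectralCoclustering

# Toy data matrix with two block patterns (illustrative values)
data_matrix = np.array([[1, 1, 0, 0],
                        [1, 1, 0, 0],
                        [0, 0, 1, 1],
                        [0, 0, 1, 1]])

# Apply Spectral Co-clustering
model = SpectralCoclustering(n_clusters=2, random_state=42)
model.fit(data_matrix)

# Each row and each column is assigned to exactly one bicluster
print("Row labels:", model.row_labels_)
print("Column labels:", model.column_labels_)
```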
+ +## Example + +Here's an example of implementing biclustering using `SpectralBiclustering` from sklearn: + +```py +import numpy as np +from sklearn.cluster import SpectralBiclustering + +# Sample data matrix +data_matrix = np.array([[1, 1, 0, 0], + [1, 1, 0, 0], + [0, 0, 1, 1], + [0, 0, 1, 1]]) + +# Apply Spectral Biclustering +model = SpectralBiclustering(n_clusters=2, random_state=42) +model.fit(data_matrix) + +# Get the bicluster labels for rows and columns +row_labels = model.rows_ +column_labels = model.columns_ + +# Print biclusters +print("Row Biclusters:", row_labels) +print("Column Biclusters:", column_labels) +``` + +The above code results in the following output: + +```shell +Row Biclusters: [[False False True True] + [False False True True] + [ True True False False] + [ True True False False]] +Column Biclusters: [[False False True True] + [ True True False False] + [False False True True] + [ True True False False]] +``` + +- In the **Row Biclusters**, `True` in a position means that the corresponding row is part of the bicluster. +- Similarly, in the **Column Biclusters**, `True` indicates that the corresponding column is part of the bicluster. + +## Codebyte Example + +Here the example demonstrates how to perform Spectral Biclustering on a simple **6x6** binary data matrix using `SpectralBiclustering` from `sklearn`: + +```codebyte/python +import numpy as np +from sklearn.cluster import SpectralBiclustering + +# Sample 6x6 data matrix +data_matrix = np.array([[1, 1, 0, 0, 1, 1], + [1, 1, 0, 0, 1, 1], + [0, 0, 1, 1, 0, 0], + [0, 0, 1, 1, 0, 0], + [1, 1, 0, 0, 1, 1], + [1, 1, 0, 0, 1, 1]]) + +# Apply Spectral Biclustering +model = SpectralBiclustering(n_clusters=2, random_state=42) +model.fit(data_matrix) + +# Get the bicluster labels for rows and columns +row_labels = model.rows_ +column_labels = model.columns_ + +# Print the resulting biclusters for rows and columns +print("Row Biclusters:", row_labels) +print("Column Biclusters:", column_labels) +``` From 91cb38997cf9852f6bb3165872141d230c07dcd8 Mon Sep 17 00:00:00 2001 From: Sriparno Roy <89148144+Sriparno08@users.noreply.github.com> Date: Mon, 23 Dec 2024 19:06:42 +0530 Subject: [PATCH 36/40] [Edit] Python OS Path Module: .join() (#5889) --- content/python/concepts/os-path-module/terms/join/join.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/python/concepts/os-path-module/terms/join/join.md b/content/python/concepts/os-path-module/terms/join/join.md index 2f14309d38a..a66397cff9f 100644 --- a/content/python/concepts/os-path-module/terms/join/join.md +++ b/content/python/concepts/os-path-module/terms/join/join.md @@ -31,7 +31,7 @@ import os.path cc_courses_slug = "https://www.codecademy.com/catalog" -python_3_lessons_slug = "learn-python-3/lessons/" +python_3_lessons_slug = "learn-python-3/lessons" second_lesson_slug = "string-methods/exercises/introduction-ii" From 6afa7a5618d2bbecdf003de6b306c9028638963d Mon Sep 17 00:00:00 2001 From: Sriparno Roy <89148144+Sriparno08@users.noreply.github.com> Date: Mon, 23 Dec 2024 19:15:14 +0530 Subject: [PATCH 37/40] [Term Entry] PyTorch Tensor Operations: .permute() (#5886) * [Term Entry] PyTorch Tensor Operations: .permute() * Update permute.md minor fixes --------- --- .../terms/permute/permute.md | 55 +++++++++++++++++++ 1 file changed, 55 insertions(+) create mode 100644 content/pytorch/concepts/tensor-operations/terms/permute/permute.md diff --git a/content/pytorch/concepts/tensor-operations/terms/permute/permute.md 
b/content/pytorch/concepts/tensor-operations/terms/permute/permute.md new file mode 100644 index 00000000000..40ce74b2d96 --- /dev/null +++ b/content/pytorch/concepts/tensor-operations/terms/permute/permute.md @@ -0,0 +1,55 @@ +--- +Title: '.permute()' +Description: 'Returns a view of the given tensor with its dimensions permuted or rearranged according to a specific order.' +Subjects: + - 'AI' + - 'Data Science' +Tags: + - 'AI' + - 'Data Types' + - 'Deep Learning' + - 'Functions' +CatalogContent: + - 'intro-to-py-torch-and-neural-networks' + - 'paths/data-science' +--- + +In PyTorch, the **`.permute()`** function returns a view of a given [tensor](https://www.codecademy.com/resources/docs/pytorch/tensors) with its dimensions permuted or rearranged according to a specific order. + +## Syntax + +```pseudo +torch.permute(input, dims) +``` + +- `input`: The tensor whose dimensions are to be permuted. +- `dims`: The order in which the dimensions are to be permuted. + +## Example + +The following example demonstrates the usage of the `.permute()` function: + +```py +import torch + +# Create a tensor of size (2, 3, 4) +ten = torch.randn(2, 3, 4) + +# Permute the dimensions of the tensor in the order (2, 0, 1) +res = torch.permute(ten, (2, 0, 1)) + +# Print the size of the resultant tensor +print(res.size()) +``` + +In the above example, the order `(2, 0, 1)`: + +- Moves the dimension located at index `2` to index `0` +- Moves the dimension located at index `0` to index `1` +- Moves the dimension located at index `1` to index `2` + +The above code produces the following output: + +```shell +torch.Size([4, 2, 3]) +``` From 6f4d8c9df70fdd022132bc51aed26fa27f289d6d Mon Sep 17 00:00:00 2001 From: Sriparno Roy <89148144+Sriparno08@users.noreply.github.com> Date: Mon, 23 Dec 2024 20:11:43 +0530 Subject: [PATCH 38/40] [Term Entry] PyTorch Tensor Operations: .scatter() (#5888) * [Term Entry] PyTorch Tensor Operations: .scatter() * Update scatter.md minor fixes --------- --- .../terms/scatter/scatter.md | 58 +++++++++++++++++++ 1 file changed, 58 insertions(+) create mode 100644 content/pytorch/concepts/tensor-operations/terms/scatter/scatter.md diff --git a/content/pytorch/concepts/tensor-operations/terms/scatter/scatter.md b/content/pytorch/concepts/tensor-operations/terms/scatter/scatter.md new file mode 100644 index 00000000000..b3a3bcb5efb --- /dev/null +++ b/content/pytorch/concepts/tensor-operations/terms/scatter/scatter.md @@ -0,0 +1,58 @@ +--- +Title: '.scatter()' +Description: 'Writes values from a source into specific locations of a tensor along a specified dimension, based on indices.' +Subjects: + - 'AI' + - 'Data Science' +Tags: + - 'AI' + - 'Data Types' + - 'Deep Learning' + - 'Functions' +CatalogContent: + - 'intro-to-py-torch-and-neural-networks' + - 'paths/data-science' +--- + +In PyTorch, the **`.scatter()`** function writes values from a source ([tensor](https://www.codecademy.com/resources/docs/pytorch/tensors) or scalar) into specific locations of a tensor along a specified dimension, based on given indices. + +## Syntax + +```pseudo +torch.scatter(ten, dim, index, src) +``` + +- `ten`: The tensor where the values are to be inserted. +- `dim`: The dimension along which the values are to be inserted. +- `index`: The tensor which specifies the locations in `ten` where the values are to be inserted. +- `src`: The tensor which contains the values to be inserted. 
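A common practical use of this index-driven writing is one-hot encoding. The following sketch is illustrative (the label values are made up) and relies only on the `torch.scatter()` signature described above:

```py
import torch

# Hypothetical class labels for three samples (4 classes in total)
labels = torch.tensor([[0], [2], [1]])  # Shape: (3, 1)

# Values to write into the output tensor
src = torch.ones(3, 1)

# Write a 1.0 into column labels[i] of row i
one_hot = torch.scatter(torch.zeros(3, 4), 1, labels, src)

print(one_hot)
# tensor([[1., 0., 0., 0.],
#         [0., 0., 1., 0.],
#         [0., 1., 0., 0.]])
```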
+ +## Example + +The following example demonstrates the usage of the `.scatter()` function: + +```py +import torch + +# Create a tensor +ten = torch.tensor([[11, 12, 13, 14, 15], [16, 17, 18, 19, 20]]) + +# Create a tensor containing the locations +index = torch.tensor([[0, 2], [1, 3]]) + +# Create a tensor containing the values +src = torch.tensor([[21, 23], [27, 29]]) + +# Insert the given values into specified locations along dimension 1 in the original tensor +res = torch.scatter(ten, 1, index, src) + +# Print the resultant tensor +print(res) +``` + +The above code produces the following output: + +```shell +tensor([[21, 12, 23, 14, 15], + [16, 27, 18, 29, 20]]) +``` From 37bd936f1c1a47fe9f76d8d4eefba9d65bfff6e5 Mon Sep 17 00:00:00 2001 From: Sriparno Roy <89148144+Sriparno08@users.noreply.github.com> Date: Tue, 24 Dec 2024 11:35:41 +0530 Subject: [PATCH 39/40] [Term Entry] PyTorch Tensor Operations: .row_stack() (#5887) * [Term Entry] PyTorch Tensor Operations: .row_stack() * Update row-stack.md minor fix --------- --- .../terms/row-stack/row-stack.md | 51 +++++++++++++++++++ 1 file changed, 51 insertions(+) create mode 100644 content/pytorch/concepts/tensor-operations/terms/row-stack/row-stack.md diff --git a/content/pytorch/concepts/tensor-operations/terms/row-stack/row-stack.md b/content/pytorch/concepts/tensor-operations/terms/row-stack/row-stack.md new file mode 100644 index 00000000000..4fef9022b7d --- /dev/null +++ b/content/pytorch/concepts/tensor-operations/terms/row-stack/row-stack.md @@ -0,0 +1,51 @@ +--- +Title: '.row_stack()' +Description: 'Stacks or arranges a sequence of tensors vertically (row-wise).' +Subjects: + - 'AI' + - 'Data Science' +Tags: + - 'AI' + - 'Data Types' + - 'Deep Learning' + - 'Functions' +CatalogContent: + - 'intro-to-py-torch-and-neural-networks' + - 'paths/data-science' +--- + +In PyTorch, the **`.row_stack()`** function stacks or arranges a sequence of [tensors](https://www.codecademy.com/resources/docs/pytorch/tensors) vertically (row-wise). It is an alias or alternative for the **`.vstack()`** function. + +## Syntax + +```pseudo +torch.row_stack(tensors, *, out=None) +``` + +- `tensors`: The sequence of tensors to be stacked vertically. +- `out` (Optional): A tensor to store the output. It must have the correct shape to accommodate the result. 
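For 2-D inputs, `.row_stack()` concatenates along the first dimension, so it can also append rows to an existing matrix. Below is a small illustrative sketch with made-up values; `torch.vstack()` would return the same result:

```py
import torch

# A 2x3 matrix and a single extra row
mat = torch.tensor([[1, 2, 3],
                    [4, 5, 6]])
row = torch.tensor([[7, 8, 9]])

# Stack them vertically (row-wise)
res = torch.row_stack((mat, row))

print(res)
# tensor([[1, 2, 3],
#         [4, 5, 6],
#         [7, 8, 9]])
```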
+ +## Example + +The following example demonstrates the usage of the `.row_stack()` function: + +```py +import torch + +# Create two tensors +ten1 = torch.tensor([12, 23, 34]) +ten2 = torch.tensor([45, 56, 67]) + +# Stack the tensors vertically +res = torch.row_stack((ten1, ten2)) + +# Print the resultant tensor +print(res) +``` + +The above code produces the following output: + +```shell +tensor([[12, 23, 34], + [45, 56, 67]]) +``` From 94f121951ba97740ab73b3ebb74f72881175d828 Mon Sep 17 00:00:00 2001 From: Pragati Verma Date: Tue, 24 Dec 2024 15:14:49 +0530 Subject: [PATCH 40/40] [Topic Entry] Subject: Blockchain (#5882) * Add blockchain topic entry * Update blockchain.md * Update blockchain.md minor fixes --------- --- content/blockchain/blockchain.md | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) create mode 100644 content/blockchain/blockchain.md diff --git a/content/blockchain/blockchain.md b/content/blockchain/blockchain.md new file mode 100644 index 00000000000..516db996db0 --- /dev/null +++ b/content/blockchain/blockchain.md @@ -0,0 +1,29 @@ +--- +Title: 'Blockchain' +Description: 'Blockchain is a decentralized ledger that securely records transactions, ensuring transparency, trust, and immutability without a central authority.' +Codecademy Hub Page: 'https://www.codecademy.com/catalog/subject/blockchain' +CatalogContent: + - 'rust-for-programmers' + - 'paths/computer-science' +--- + +**Blockchain** is a decentralized and distributed digital ledger that securely records transactions across multiple nodes in a network. It ensures data integrity through cryptographic techniques and transparency by allowing participants to access an immutable, shared history of transactions. By eliminating the need for a central authority, blockchain enables trust and collaboration in various applications, from cryptocurrencies to supply chain management. + +Blockchain’s origins trace back to 1991 when cryptographers Stuart Haber and W. Scott Stornetta introduced a system for timestamping digital documents. The technology gained prominence in 2008 with Satoshi Nakamoto’s creation of Bitcoin, the first decentralized cryptocurrency using blockchain as its backbone. Over time, its applications have expanded beyond cryptocurrencies to include smart contracts, supply chain management, and enterprise solutions. + +Key principles of Blockchain include: + +- **Decentralization**: No central authority; data is shared across nodes. +- **Cryptographic Security**: Data integrity ensured by encryption. +- **Consensus Mechanisms**: Agreement protocols like Proof of Work or Proof of Stake. + +## Types of Blockchains + +1. **Public Blockchains**: + Open and decentralized networks where anyone can participate, read, or write data. These blockchains prioritize transparency and security, making them ideal for cryptocurrencies (e.g., Bitcoin, Ethereum). However, they may face scalability challenges and require significant energy for consensus. + +2. **Private Blockchains**: + Permissioned networks with restricted access, used by organizations to enhance efficiency and control. Only authorized participants can interact with the network, making it suitable for use cases like supply chain management or internal data sharing (e.g., Hyperledger, Corda). + +3. **Consortium Blockchains**: + Blockchains managed collaboratively by a group of organizations. 
These hybrid systems strike a balance between decentralization and controlled access and are often used in industries that require shared authority, such as banking or healthcare (e.g., R3 Corda, Quorum).
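The cryptographic linking behind blockchain's immutability can be sketched in a few lines of Python. This is a toy illustration built on `hashlib`, not a description of any production blockchain:

```py
import hashlib

def block_hash(index, data, previous_hash):
    """Hash a block's contents together with the previous block's hash."""
    payload = f"{index}{data}{previous_hash}".encode()
    return hashlib.sha256(payload).hexdigest()

# Build a tiny chain of three linked blocks
genesis = block_hash(0, "genesis", "0" * 64)
block1 = block_hash(1, "Alice pays Bob 5", genesis)
block2 = block_hash(2, "Bob pays Carol 2", block1)

print(block2)

# Tampering with block 1's data changes every later hash,
# which is what makes the recorded history effectively immutable.
tampered_block1 = block_hash(1, "Alice pays Bob 500", genesis)
print(block_hash(2, "Bob pays Carol 2", tampered_block1) == block2)  # False
```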