Merged
49 changes: 49 additions & 0 deletions .github/linters/.cspell.json
Original file line number Diff line number Diff line change
@@ -0,0 +1,49 @@
{
"version": "0.2",
"language": "en",
"ignorePaths": [
"**/.git/**",
"**/.gitignore",
"**/docs/Makefile",
"**/docs/make.bat",
"**/docs/source/conf.py",
"**/.mega-linter.yml"
],
"words": [
"spherimatch",
"xmatch",
"quadtree",
"coor",
"radec",
"idxes",
"Rodrigues",
"numpy",
"allclose",
"arcsin",
"arctan",
"isscalar",
"linalg",
"ndarray",
"randn",
"rtol",
"setdiff",
"scipy",
"dataframe",
"groupby",
"inplace",
"multiindex",
"ipynb",
"pypa",
"pypi",
"MAINT",
"bibtex",
"howpublished",
"autoclass",
"autofunction",
"automodule",
"genindex",
"modindex",
"toctree",
"undoc"
]
}
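A word list like the one added above is easy to let drift (duplicate or redundantly cased entries). A minimal sanity-check sketch using only the standard library — the JSON here is an inline stand-in for `.github/linters/.cspell.json`, not the file itself:

```python
import json

# Inline stand-in for a cSpell config; in practice you would read the
# real .cspell.json from disk instead.
config_text = '''
{
    "version": "0.2",
    "language": "en",
    "words": ["spherimatch", "xmatch", "quadtree", "numpy", "scipy"]
}
'''

config = json.loads(config_text)
words = config["words"]

# cSpell matches case-insensitively, so flag entries that collide
# once lowercased (e.g. "NumPy" and "numpy" would be redundant).
duplicates = sorted({w.lower() for w in words
                     if sum(x.lower() == w.lower() for x in words) > 1})
print(duplicates)  # -> []
```

Running this against the real file in CI would catch redundant entries before they accumulate.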
8 changes: 8 additions & 0 deletions .github/linters/.isort.cfg
@@ -0,0 +1,8 @@
[settings]
profile=
line_length=120
wrap_length=100
lines_between_sections=0
multi_line_output=3
known_first_party=spherimatch
treat_all_comments_as_code=true
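The new `.isort.cfg` uses plain INI syntax, so its values can be inspected with the standard-library `configparser`. A minimal sketch with the file contents inlined for illustration (reading the real file from disk works the same way):

```python
import configparser

# Inline stand-in for .github/linters/.isort.cfg.
cfg_text = """
[settings]
profile=
line_length=120
wrap_length=100
lines_between_sections=0
multi_line_output=3
known_first_party=spherimatch
treat_all_comments_as_code=true
"""

parser = configparser.ConfigParser()
parser.read_string(cfg_text)
settings = parser["settings"]

# Typed accessors parse the raw strings for us.
print(settings.getint("line_length"))   # -> 120
print(settings["known_first_party"])    # -> spherimatch
```

Note that `profile=` is deliberately empty here, which `configparser` reads as an empty string rather than an error.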
2 changes: 0 additions & 2 deletions .github/linters/pyproject.toml

This file was deleted.

4 changes: 2 additions & 2 deletions .github/workflows/ci.yml
@@ -2,6 +2,7 @@ name: CI

on:
push:
branches: ["main"]
pull_request:
branches: ["main"]
workflow_dispatch:
@@ -30,11 +31,10 @@ jobs:

- name: MegaLinter
id: ml
uses: oxsecurity/megalinter/flavors/python@v8.6.0
uses: oxsecurity/megalinter/flavors/python@v8
env:
VALIDATE_ALL_CODEBASE: ${{ (github.event_name == 'push' && github.ref == 'refs/heads/main') || github.event_name == 'workflow_dispatch' }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DISABLE_ERRORS_LINTERS: COPYPASTE_JSCPD

- name: Archive production artifacts
if: success() || failure()
2 changes: 1 addition & 1 deletion .github/workflows/deploy-docs.yml
@@ -53,7 +53,7 @@ jobs:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}

runs-on: ubuntu-latest
steps:
- name: Deploy to GitHub Pages
6 changes: 3 additions & 3 deletions .github/workflows/publish-pypi.yml
@@ -28,14 +28,14 @@ jobs:
run: |
python -m pip install -r requirements.txt
python -m pip install build

- name: Build release distributions
run: python -m build

- name: Upload distributions
uses: actions/upload-artifact@v4
with:
name: release-dists
name: release-dist
path: dist/

pypi-publish:
@@ -55,7 +55,7 @@ jobs:
- name: Retrieve release distributions
uses: actions/download-artifact@v4
with:
name: release-dists
name: release-dist
path: dist/

- name: Publish to PyPI
35 changes: 35 additions & 0 deletions .mega-linter.yml
@@ -0,0 +1,35 @@
DISABLE:
- RST
- REPOSITORY
DISABLE_LINTERS:
- JSON_PRETTIER
- YAML_PRETTIER
- PYTHON_PYRIGHT
DISABLE_ERRORS_LINTERS:
- COPYPASTE_JSCPD
- PYTHON_MYPY
- REPOSITORY_CHECKOV
- REPOSITORY_GRYPE
- REPOSITORY_SECRETLINT
- REPOSITORY_SYFT
- REPOSITORY_TRIVY_SBOM
- REPOSITORY_TRUFFLEHOG
ENABLE_ERRORS_LINTERS:
- PYTHON_ISORT

MARKDOWN_MARKDOWNLINT_ARGUMENTS: --disable MD041
PYTHON_BLACK_ARGUMENTS: --skip-string-normalization --line-length 120
PYTHON_PYLINT_ARGUMENTS: --enable I0021

PRE_COMMANDS:

- command: cp requirements.txt /venvs/requirements.txt
cwd: workspace
continue_if_failed: false

# Install dependencies for `pylint`
- command: python3 -m pip install --no-cache-dir -r /venvs/requirements.txt
venv: pylint
continue_if_failed: false

REPORTERS_MARKDOWN_TYPE: simple
2 changes: 1 addition & 1 deletion README.md
@@ -59,6 +59,6 @@ To cite spherimatch in your publication, please use the following BibTeX entry:
note = {Accessed: YYYY-MM}
}
```
Addtionally, you may add a reference to `https://github.com/technic960183/spherimatch` in the footnote if suitable.
Additionally, you may add a reference to `https://github.com/technic960183/spherimatch` in the footnote if suitable.

If you publish a paper that uses `spherimatch`, please let me know. I would be happy to know how this package has been used in research.
7 changes: 2 additions & 5 deletions docs/source/conf.py
@@ -1,5 +1,6 @@
import os
import sys

sys.path.insert(0, os.path.abspath('../../'))

# Configuration file for the Sphinx documentation builder.
@@ -18,11 +19,7 @@
# -- General configuration ---------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration

extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.viewcode',
'sphinx.ext.napoleon'
]
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode', 'sphinx.ext.napoleon']

templates_path = ['_templates']
exclude_patterns = ['build', 'Thumbs.db', '.DS_Store']
12 changes: 6 additions & 6 deletions docs/source/dev/index.rst
@@ -9,15 +9,10 @@ please refer to the `API Reference <../ref/index.html>`_ or the `Tutorials <../t

This section is not complete yet.

.. toctree::
:maxdepth: 2

spherimatch

To develop the project, clone the repository and install the project in editable mode.

.. code-block:: console

$ git clone https://github.com/technic960183/spherimatch.git
$ cd spherimatch
$ pip install -e .[dev]
@@ -29,3 +24,8 @@ To test the project, run the following command.
$ python -m unittest

You should see ``OK (skipped=3)`` if all tests pass.

.. toctree::
:maxdepth: 2

spherimatch
File renamed without changes.
10 changes: 5 additions & 5 deletions docs/source/tutorial/duplicates_removal.rst
@@ -12,10 +12,10 @@ First, let's create a mock catalog with duplicates:
import pandas as pd

# Create a mock catalog as a pandas DataFrame
catalog = pd.DataFrame([[80.894, 41.269, 1200], [120.689, -41.269, 1500],
[10.689, -41.269, 3600], [10.688, -41.270, 300],
[10.689, -41.270, 1800], [10.690, -41.269, 2400],
[120.690, -41.270, 900], [10.689, -41.269, 2700]],
catalog = pd.DataFrame([[80.894, 41.269, 1200], [120.689, -41.269, 1500],
[10.689, -41.269, 3600], [10.688, -41.270, 300],
[10.689, -41.270, 1800], [10.690, -41.269, 2400],
[120.690, -41.270, 900], [10.689, -41.269, 2700]],
columns=['ra', 'dec', 'exp_time'])

Here, we actually only have 3 unique objects, but the catalog contains 8 entries and 5 of them are duplicates.
@@ -57,7 +57,7 @@ properties of your catalog. The ``'dup_num'`` column shows the number of duplica

.. note::
When there are two 'unique' objects that are very close to each other, it is possible that they will be grouped together.
In an exetrema case, it is possible that a chain of unique objects will be grouped together, linking by their duplicates.
In an extreme case, it is possible that a chain of unique objects will be grouped together, linking by their duplicates.
But this is rare for most catalogs. To solve this problem, you can try to decrease the tolerance value. However, if
decreasing the tolerance value separates objects that should be considered as duplicates, this package does not provide
a solution for now. You may need to remove the duplicates manually for those close objects.
22 changes: 11 additions & 11 deletions docs/source/tutorial/fof.rst
@@ -11,9 +11,9 @@ First, let's create a mock catalog:
import pandas as pd

# Create a mock catalog as a pandas DataFrame
catalog = pd.DataFrame([[80.894, 41.269, 15.5], [120.689, -41.269, 12.3],
[10.689, -41.269, 18.7], [10.688, -41.270, 14.1],
[10.689, -41.270, 16.4], [10.690, -41.269, 13.2],
catalog = pd.DataFrame([[80.894, 41.269, 15.5], [120.689, -41.269, 12.3],
[10.689, -41.269, 18.7], [10.688, -41.270, 14.1],
[10.689, -41.270, 16.4], [10.690, -41.269, 13.2],
[120.690, -41.270, 17.8]], columns=['ra', 'dec', 'mag'])

.. note::
@@ -36,7 +36,7 @@ The result object contains the clustering results. Four methods are available to
get_group_dataframe()
---------------------

To get the clustering results with the appendind data (``'mag'`` in this case), use the
To get the clustering results with the appending data (``'mag'`` in this case), use the
:func:`spherimatch.FoFResult.get_group_dataframe` method:

.. code-block:: python
@@ -47,7 +47,7 @@ To get the clustering results with the appendind data (``'mag'`` in this case),
Expected output::

Ra Dec mag
Group Object
Group Object
0 0 80.894 41.269 15.5
1 1 120.689 -41.269 12.3
6 120.690 -41.270 17.8
@@ -72,20 +72,20 @@ Expected output::
Print group 0:
The type of group is <class 'pandas.core.frame.DataFrame'>.
Ra Dec mag
Group Object
Group Object
0 0 80.894 41.269 15.5

Print group 1:
The type of group is <class 'pandas.core.frame.DataFrame'>.
Ra Dec mag
Group Object
Group Object
1 1 120.689 -41.269 12.3
6 120.690 -41.270 17.8

Print group 2:
The type of group is <class 'pandas.core.frame.DataFrame'>.
Ra Dec mag
Group Object
Group Object
2 2 10.689 -41.269 18.7
3 10.688 -41.270 14.1
4 10.689 -41.270 16.4
@@ -94,8 +94,8 @@ Expected output::
Each group is also a pandas DataFrame.

.. note::
The iterater from ``groupby()`` is extremely slow for large datasets. The current solution is to flatten the
DataFrame into a single layer of index and manupulate the index directly, or even turn the DataFrame into a numpy array.
The iterator from ``groupby()`` is extremely slow for large datasets. The current solution is to flatten the
DataFrame into a single layer of index and manipulates the index directly, or even turn the DataFrame into a numpy array.

If you want DataFrame with a single layer of index and the size of each group as a column, you can use the following code:

@@ -108,7 +108,7 @@ If you want DataFrame with a single layer of index and the size of each group as
Expected output::

Group Ra Dec mag group_size
Object
Object
0 0 80.894 41.269 15.5 1
1 1 120.689 -41.269 12.3 2
6 1 120.690 -41.270 17.8 2
4 changes: 2 additions & 2 deletions docs/source/tutorial/input_validation.rst
Original file line number Diff line number Diff line change
@@ -16,7 +16,7 @@ The input DataFrame should have the following columns:
- One of the ``['ra', 'Ra', 'RA']`` (Right Ascension) in degrees.
- One of the ``['dec', 'Dec', 'DEC']`` (Declination) in degrees.

Addtionally, the DataFrame can have any other columns as well. These columns will be preserved in the output.
Additionally, the DataFrame can have any other columns as well. These columns will be preserved in the output.
And the index of the DataFrame has no restrictions and will be preserved in the output as well. (MultiIndex is not supported for now.)

numpy.ndarray
@@ -27,4 +27,4 @@ The input numpy array should be in the shape of (N, 2), where N is the number of
- The first column (``data[:, 0]``) should be the Right Ascension in degrees.
- The second column (``data[:, 1]``) should be the Declination in degrees.

Addtional data columns are not supported in the numpy array format for now.
Additional data columns are not supported in the numpy array format for now.
4 changes: 2 additions & 2 deletions docs/source/tutorial/xmatch.rst
@@ -23,7 +23,7 @@ xmatch()
Then, we can perform the cross-matching with the tolerance of 0.01 degree using the :func:`spherimatch.xmatch` function.

.. code-block:: python

from spherimatch import xmatch
result_object = xmatch(catalogA, catalogB, tolerance=0.01)

@@ -39,7 +39,7 @@ To get the matching results of catalog A, use the :func:`spherimatch.XMatchResul
print(result_object.get_dataframe1())

Expected output::

Ra Dec N_match
0 80.894 41.269 0
1 120.689 -41.269 1
6 changes: 0 additions & 6 deletions spherimatch/__init__.py
@@ -1,10 +1,4 @@
from .chunk_generator_grid import GridChunkGenerator, GridChunkConfig
from .chunk_generator_grid import ChunkGeneratorByGrid, ChunkGeneratorByDenseGrid, ChunkGeneratorBySuperDenseGrid
from .disjoint_set import DisjointSet
from .fof import fof, group_by_quadtree
from .result_fof import FoFResult
from .result_xmatch import XMatchResult
from .utilities_spherical import *
from .xmatch import xmatch

__all__ = ['fof', 'group_by_quadtree', 'xmatch']
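The trimmed `__init__.py` keeps `__all__` as the single statement of the public API. As a reminder of the semantics this relies on, here is a minimal stand-alone sketch using a throwaway in-memory module (not spherimatch itself; `_internal_helper` is a hypothetical name for illustration):

```python
import types

# Build a throwaway module that mimics a package __init__ with
# both public functions and a private helper.
mod = types.ModuleType("demo")
exec(
    "def fof(): pass\n"
    "def group_by_quadtree(): pass\n"
    "def xmatch(): pass\n"
    "def _internal_helper(): pass\n"  # hypothetical private name
    "__all__ = ['fof', 'group_by_quadtree', 'xmatch']\n",
    mod.__dict__,
)

# Simulate `from demo import *`: only the names listed in __all__
# are bound into the importing namespace.
ns = {name: getattr(mod, name) for name in mod.__all__}
print(sorted(ns))  # -> ['fof', 'group_by_quadtree', 'xmatch']
```

So after the cleanup, `from spherimatch import *` exposes exactly the three documented entry points, while the internal modules stay out of the wildcard namespace.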