57 changes: 20 additions & 37 deletions .github/workflows/tests.yml → .github/workflows/tests.yaml
Collaborator (PR author) commented:
@jcapriot I'd be curious about your thoughts on changing tests.yaml to use uv instead of conda. I've been moving toward using uv for all my python projects lately. I don't foresee aurora incorporating C or other code anytime in the near future. Any SimPEG-related drawbacks to this change that I may be missing?
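Stripped down to its essentials, the uv-based pattern this PR adopts can be sketched as a standalone workflow roughly like the following (the job name, version pins, and extras here are illustrative, not the exact contents of `tests.yaml`):

```yaml
# Minimal sketch of a uv-based test job (illustrative; see the actual diff
# below for the real step list used by aurora).
name: tests
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v4

      - name: Install uv
        uses: astral-sh/setup-uv@v3

      - name: Create venv and install the package
        run: |
          uv venv --python ${{ matrix.python-version }}
          uv pip install -e ".[dev,test]"

      - name: Run tests
        run: |
          source .venv/bin/activate
          pytest -s -v --cov=aurora --cov-report=xml
```

One design point worth noting: because the default shell changes from `bash -l {0}` (login shell, which conda hooks into) to plain `bash`, each `run:` step starts fresh, so any step that needs the environment must `source .venv/bin/activate` itself.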

@@ -13,85 +13,68 @@ jobs:
     runs-on: ${{ matrix.os }}
     defaults:
       run:
-        shell: bash -l {0}
+        shell: bash
     strategy:
       fail-fast: false
       matrix:
         os: ["ubuntu-latest"]
-        python-version: [3.8, 3.9, "3.10", "3.11"]
+        python-version: [3.9, "3.10", "3.11", "3.12"]

     steps:
     - uses: actions/checkout@v4

-    - name: Setup Miniconda
-      uses: conda-incubator/setup-[email protected]
+    - name: Install uv
+      uses: astral-sh/setup-uv@v3
       with:
-        activate-environment: aurora-test
-        python-version: ${{ matrix.python-version }}
+        version: "latest"
+
+    - name: Set up Python ${{ matrix.python-version }}
+      run: uv python install ${{ matrix.python-version }}

-    - name: Install uv and project dependencies
+    - name: Create virtual environment and install dependencies
       run: |
-        python --version
-        echo $CONDA_PREFIX
-        pip install uv
-        uv pip install -e ".[dev]"
+        uv venv --python ${{ matrix.python-version }}
+        uv pip install -e ".[dev,test]"
         uv pip install "mt_metadata[obspy] @ git+https://github.com/kujaku11/mt_metadata.git"
         uv pip install git+https://github.com/kujaku11/mth5.git
-        conda install -c conda-forge certifi">=2017.4.17" pandoc
+        uv pip install jupyter ipykernel pytest pytest-cov codecov

-    - name: Install Our Package
+    - name: Install system dependencies
       run: |
-        echo $CONDA_PREFIX
-        uv pip install -e .
-        echo "Install complete"
-        conda list
-        pip freeze
-
-    - name: Install Jupyter and dependencies
-      run: |
-        pip install jupyter
-        pip install ipykernel
-        python -m ipykernel install --user --name aurora-test
-        # Install any other dependencies you need
+        sudo apt-get update
+        sudo apt-get install -y pandoc

     - name: Execute Jupyter Notebooks
       run: |
+        source .venv/bin/activate
+        python -m ipykernel install --user --name aurora-test
         jupyter nbconvert --to notebook --execute docs/examples/dataset_definition.ipynb
         jupyter nbconvert --to notebook --execute docs/examples/operate_aurora.ipynb
         jupyter nbconvert --to notebook --execute docs/tutorials/pkd_units_check.ipynb
         jupyter nbconvert --to notebook --execute docs/tutorials/pole_zero_fitting/lemi_pole_zero_fitting_example.ipynb
         jupyter nbconvert --to notebook --execute docs/tutorials/processing_configuration.ipynb
         jupyter nbconvert --to notebook --execute docs/tutorials/process_cas04_multiple_station.ipynb
         jupyter nbconvert --to notebook --execute docs/tutorials/synthetic_data_processing.ipynb
-        # Replace "notebook.ipynb" with your notebook's filename

-    # - name: Commit changes (if any)
-    #   run: |
-    #     git config --local user.email "[email protected]"
-    #     git config --local user.name "GitHub Action"
-    #     git commit -a -m "Execute Jupyter notebook"
-    #     git push
-    #   if: ${{ success() }}

     - name: Run Tests
       run: |
-        pytest -s -v --cov=./ --cov-report=xml --cov=aurora
-        # pytest -s -v tests/synthetic/test_fourier_coefficients.py
-        # pytest -s -v tests/config/test_config_creator.py
+        source .venv/bin/activate
+        pytest -s -v --cov=./ --cov-report=xml --cov=aurora

     - name: "Upload coverage reports to Codecov"
       uses: codecov/codecov-action@v4
       with:
+        CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
         fail_ci_if_error: false
         flags: tests
-        # token: ${{ secrets.CODECOV_TOKEN }}

     - name: Build Doc
       if: ${{ (github.ref == 'refs/heads/main') && (matrix.python-version == '3.8')}}
       run: |
+        source .venv/bin/activate
         cd docs
         make html
         cd ..
11 changes: 5 additions & 6 deletions docs/execute_notebooks.py
@@ -17,7 +17,7 @@
 notebook_dir = DOCS_PATH.joinpath("tutorials")
 notebooks += sorted(
     nb for nb in notebook_dir.rglob("*.ipynb") if ".ipynb_checkpoints" not in str(nb)
-    )
+)

 # Execute each notebook in-place
 for nb_path in notebooks:
@@ -29,19 +29,19 @@
         "%matplotlib inline",
     )
     print(f"Executing: {nb_path} (in cwd={working_dir})")

     try:
         pm.execute_notebook(
             input_path=str(nb_path),
             output_path=str(nb_path),
-            kernel_name="aurora-test", # Adjust if using a different kernel ("dipole-st")
+            kernel_name="aurora-test",  # Adjust if using a different kernel ("dipole-st")
             request_save_on_cell_execute=True,
-            cwd=str(working_dir) # <- this sets the working directory!
+            cwd=str(working_dir),  # <- this sets the working directory!
         )
         print(f"✓ Executed successfully: {nb_path}")
     except Exception as e:
         print(f"✗ Failed to execute {nb_path}: {e}")
-        exit(1)
+        # exit(1)

 # Replace the matplotlib inline magic back to widget for interactive plots
 replace_in_file(
@@ -50,4 +50,3 @@
     "%matplotlib widget",
 )
 print("All notebooks executed and updated successfully.")
-
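The script above leans on a `replace_in_file` helper to swap `%matplotlib widget` for `%matplotlib inline` before execution and back afterwards. A minimal sketch of what such a helper might look like (the real implementation in aurora's docs tooling may differ) is:

```python
# Hypothetical sketch of the replace_in_file helper used by
# docs/execute_notebooks.py: rewrite a file in place, substituting
# one literal string for another, and report how many were replaced.
from pathlib import Path


def replace_in_file(path, old: str, new: str) -> int:
    """Replace every occurrence of `old` with `new` in the file at `path`.

    Returns the number of occurrences replaced (0 if none were found,
    in which case the file is left untouched).
    """
    p = Path(path)
    text = p.read_text(encoding="utf-8")
    count = text.count(old)
    if count:
        p.write_text(text.replace(old, new), encoding="utf-8")
    return count
```

Doing the swap as a plain text substitution works because `.ipynb` files are JSON and the magic line appears verbatim inside cell sources, so no notebook parsing is needed for this round trip.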
12 changes: 10 additions & 2 deletions tests/synthetic/test_feature_weighting.py
@@ -1,13 +1,21 @@
 """

 Integrated test of the functionality of feature weights.

-1. This test uses degraded sythetic data to test the feature weighting.
+1. This test uses degraded synthetic data to test the feature weighting.
 Noise is added to some fraction (50-75%) of the data.

 Then regular (single station) processing is called on the data and
 feature weighting processing is called on the data.

+---
+Feature weights are specified using the mt_metadata.features.weights module.
+This test demonstrates how feature-based channel weighting (e.g., striding_window_coherence)
+can be injected into Aurora's processing pipeline. In the future, these features will be
+used to enable more robust, data-driven weighting strategies for transfer function estimation,
+including integration of new features from mt_metadata and more flexible weighting schemes.
+
+See also: mt_metadata.features.weights.channel_weight_spec and test_feature_weighting.py for
+examples of how to define, load, and use feature weights in Aurora workflows.
 """

 from aurora.config.metadata import Processing
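As a rough illustration of the degradation step the docstring describes (the function name, the 60% fraction, and the Gaussian noise model are assumptions for the sketch, not the test's actual implementation):

```python
# Illustrative sketch of "degraded synthetic data": add Gaussian noise
# to a random fraction of the samples, leaving the rest clean, so a
# feature-based weighting scheme has something to down-weight.
import math
import random


def degrade_fraction(data, fraction, noise_scale, rng):
    """Return a copy of `data` with N(0, noise_scale) noise added to
    `fraction` of the samples, chosen uniformly at random."""
    out = list(data)
    n_noisy = int(fraction * len(out))
    for i in rng.sample(range(len(out)), n_noisy):
        out[i] += rng.gauss(0.0, noise_scale)
    return out


rng = random.Random(0)
clean = [math.sin(2 * math.pi * k / 50) for k in range(1000)]
noisy = degrade_fraction(clean, fraction=0.6, noise_scale=5.0, rng=rng)
changed = sum(a != b for a, b in zip(clean, noisy))
print(changed)  # 600 of the 1000 samples carry added noise
```

Comparing regular single-station processing against feature-weighted processing on such data is then a direct check that the weighting actually suppresses the corrupted segments.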