Happy New Year!

TC Haddad

Jan 21, 2026, 11:51:29 PM
to pdx-...@googlegroups.com


Happy New Year PDXOSGeo! 

Join us on the fourth Wednesday this month (Jan 28th) to hear from David Percy, aka "Percy", one of the founders of our group. Percy has been researching the application of Reconstructability Analysis (RA) to categorical raster datasets since 2012, and is preparing to defend his dissertation on Spatial Reconstructability Analysis soon.

RA is a form of machine learning that works exclusively with discrete data, so categorical data such as the National Land Cover Database (NLCD) is a perfect source for it. Come hear how Percy analyzed NLCD data in an attempt to predict forest dynamics (clearcuts) and wildfire size, looking specifically at how historical vegetation affects these outcomes. Results will be presented showing how these data are useful in modeling future events.

All of the extraction for analysis was done in Python using the open-source GIS libraries Shapely and Rasterio.

Code will be shared and discussed!
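
If you want to get a feel for that kind of extraction ahead of the talk, here is a minimal sketch (not Percy's actual pipeline; the raster path, bounding box, and class codes are placeholders you would swap for your own data) that tallies the categorical class codes falling inside a polygon with Rasterio and Shapely:

# Minimal sketch: swap in your own raster and polygon (polygon must be in the raster's CRS).
from collections import Counter

import rasterio
import rasterio.mask
from shapely.geometry import box, mapping

RASTER_PATH = "nlcd_2019.tif"          # hypothetical categorical raster (e.g., an NLCD tile)
aoi = box(-123.1, 45.4, -122.5, 45.7)  # hypothetical area of interest

with rasterio.open(RASTER_PATH) as src:
    # Clip the raster to the polygon; pixels outside the AOI become nodata.
    clipped, _ = rasterio.mask.mask(src, [mapping(aoi)], crop=True)
    nodata = src.nodata

# Tally the categorical class codes inside the polygon.
counts = Counter(v for v in clipped[0].ravel() if v != nodata)
for code, n in counts.most_common():
    print(f"class {code}: {n} pixels")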

Time / Space Coordinates:
Wednesday January 28th, 6:30–8pm | Hot Lips Pizza @ the Natural Capital Center: 721 NW 9th Ave, Portland OR 

See you there!

Regan Hutson

Jan 28, 2026, 9:09:00 PM
to PDX-OSGEO
I was planning to make it but my car battery had other ideas. 

Next time.

Regan

David Percy

Jan 29, 2026, 5:32:26 PM
to pdx-...@googlegroups.com
Thanks to everyone who showed up, listened, and gave great feedback on my work.
It was very useful for me to see how these ideas are perceived!
See you next month,
Percy


--
David Percy ("Percy"), M.S.
-Spatial Data Science / GIS / Nat Sci
-Teaching Assistant Professor
Portland State University

David Percy

Feb 1, 2026, 7:33:34 PM
to pdx-...@googlegroups.com
There were a few folks who were interested in trying out the PyOccam software I showed results from on Wednesday. I had to fix the test.PyPI version that was out there, but it is now synced up with the development version I've been using to process the wildfire data (and landslides, too!).

The demo code below has been tested on Google Colab, so it should run anywhere; I have wheels built for Linux, macOS, and Windows. If you run into any errors, please email me directly (per...@pdx.edu), but if you do anything useful with it, it might be nice to tell the whole group 😁

It currently ships with two demo data sets, landslides and dementia. To use your own data, you need to convert it to tab-separated format with the (admittedly complicated) header layout you see in the demos. I'm working on a script that will convert CSV files into Occam-formatted data files.
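
In the meantime, here is a rough sketch of just the comma-to-tab half of that job. This is not the converter script, and it does not generate the Occam header; copy the header block from one of the shipped demo files and adapt it for your own variables.

# Rough sketch only: produces a plain TSV. The Occam-specific header still has
# to be added by hand (copy it from a demo file and edit it for your variables).
import csv

def csv_to_tsv(csv_path, tsv_path):
    """Rewrite a comma-separated file as tab-separated, leaving the values unchanged."""
    with open(csv_path, newline="") as src, open(tsv_path, "w", newline="") as dst:
        writer = csv.writer(dst, delimiter="\t")
        for row in csv.reader(src):
            writer.writerow(row)

# Hypothetical file names:
# csv_to_tsv("my_samples.csv", "my_samples.tsv")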

Cheers,
Percy

"""
PyOccam Landslide Susceptibility Demo
Reconstructability Analysis for GIS Applications
Works on Google Colab, Linux, macOS, and Windows
"""

# ==== INSTALL ====
# Comment out the next line after installing, until you need to install again.
!pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ pyoccam==0.9.2

import pyoccam

# ==== LOAD LANDSLIDE DATA ====
print("="*70)
print("PyOccam - Landslide Susceptibility Analysis")
print("Reconstructability Analysis for GIS")
print("="*70)

data = pyoccam.load_landslides()
print(f"\nDataset: {data.n_samples} samples, {data.n_features} features")
print(f"Target variable: {data.target_name} (Landslide presence)")
print(f"\nFeatures (GIS layers):")
for name in data.feature_names:
    print(f"  {name}")

# ==== CONFIGURE ====
manager = data.manager
manager.set_report_separator(pyoccam.SPACESEP)

# ==== SEARCH (full-up includes loops) ====
print("\n" + "="*70)
print("Running FULL-UP search (models with feedback loops)...")
print("="*70)
search_report = manager.generate_search_report("full-up", levels=7, width=3)
print(search_report[:3000])  # First part of report

# ==== GET BEST MODEL ====
# Different model selection criteria - uncomment your preference:

# Option 1: Best by BIC (Bayesian Information Criterion) - penalizes complexity most
best = manager.get_best_model_by_bic()

# Option 2: Best by AIC (Akaike Information Criterion) - less penalty for complexity
# best = manager.get_best_model_by_aic()

# Option 3: Best by information captured (highest %dH explained)
# best = manager.get_best_model_by_information()

# Option 4: Best statistically significant model (alpha < 0.05)
# best = manager.get_best_model_by_info_alpha()

print(f"\n{'='*70}")
print(f"*** Best model by BIC: {best} ***")
print(f"{'='*70}")

# Show all selection criteria for comparison:
print("\nModel Selection Comparison:")
print(f"  Best by BIC:         {manager.get_best_model_by_bic()}")
print(f"  Best by AIC:         {manager.get_best_model_by_aic()}")
print(f"  Best by Information: {manager.get_best_model_by_information()}")
print(f"  Best by Info+Alpha:  {manager.get_best_model_by_info_alpha()}")

# ==== FIT REPORT ====
print(f"\nFit Report for {best}")
print("-"*70)
fit_report = manager.generate_fit_report(best, target_state="0")
print(fit_report)

# ==== CONFUSION MATRIX ANALYSIS ====
print("\n" + "="*70)
print("Landslide Classification Performance")
print("="*70)
cm = manager.get_confusion_matrix(best, target_state="0")

if cm.get('has_values', False):
    print(f"\nConfusion Matrix (Landslide Prediction):")
    print(f"                      Predicted")
    print(f"                  No LS    Landslide")
    print(f"Actual No LS     TN={cm['train_tn']:6.0f}  FP={cm['train_fp']:6.0f}")
    print(f"Actual Landslide FN={cm['train_fn']:6.0f}  TP={cm['train_tp']:6.0f}")

    print(f"\nPerformance Metrics:")
    print(f"  Accuracy:    {cm['train_accuracy']:.1%}  (overall correct)")
    print(f"  Sensitivity: {cm['train_sensitivity']:.1%}  (landslides correctly identified)")
    print(f"  Specificity: {cm['train_specificity']:.1%}  (non-landslides correctly identified)")
    print(f"  Precision:   {cm['train_precision']:.1%}  (predicted landslides that are real)")
    print(f"  F1 Score:    {cm['train_f1_score']:.3f}")
else:
    print("Confusion matrix not available for this model")

print("\n" + "="*70)
print("✓ Landslide susceptibility analysis complete!")
print("="*70)
print("\n✓ Demo complete!")