HADAMT-Lab (Hybrid Anomaly Detection & Adaptive Model Training) is a stand-alone laboratory for exploring data poisoning defenses without federated learning. It combines a VAE, a GAN, Isolation Forest, LOF, and a DIVA-inspired meta-learner into a hybrid detector.
```mermaid
graph TD;
    A[Download Data] --> B[Generate Poison];
    B --> C[Train Detectors];
    C --> D[Compute Hybrid Score];
    D --> E[Defense Pipeline];
    E --> F[Evaluation];
```
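The "Compute Hybrid Score" stage fuses the individual detectors' outputs. Exactly how the notebooks weight them is defined there; the sketch below is only a minimal illustration that min-max-normalises and averages Isolation Forest, LOF, and VAE reconstruction-error scores (the function names and the equal weighting are assumptions, not the repo's actual code):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

def minmax(x):
    """Scale scores to [0, 1] so the detectors are comparable."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def hybrid_score(features, vae_recon_error):
    """Average normalised anomaly scores; higher = more anomalous."""
    if_score = -IsolationForest(random_state=0).fit(features).score_samples(features)
    lof_score = -LocalOutlierFactor().fit(features).negative_outlier_factor_
    return (minmax(if_score) + minmax(lof_score) + minmax(vae_recon_error)) / 3.0
```

The DIVA-inspired meta-learner mentioned above would presumably replace this fixed average with a learned combination.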
The project requires Python 3.8+ together with a number of common machine learning libraries. Install them into a fresh virtual environment:
```bash
python3 -m venv venv
source venv/bin/activate
pip install torch==2.1.2 torchvision==0.16.2 scikit-learn==1.4.2 \
    pandas==2.2.1 yfinance==0.2.37 numpy==1.26.4 matplotlib==3.8.4 \
    seaborn==0.13.2 tqdm==4.66.4 notebook==7.1.1 loguru==0.7.2
```
Run the included notebooks to launch an attack and then train the hybrid defense model:
```bash
jupyter nbconvert --execute LaunchingAttacks/poison_mnist_fed.ipynb
jupyter nbconvert --execute DefenseTraditionalML/Mal_vs_Hon.ipynb
```
The attacks include label flips and backdoor patches for CIFAR-100, plus spike noise for S&P 500 price data. The hybrid detector aggregates the VAE, GAN, Isolation Forest, LOF, and the DIVA meta-learner. The architectures follow defense-vae and the Kaggle "Fraud VAE" notebook.
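For orientation, here is a minimal sketch of those attack styles; the 10% flip rate, the 3×3 white patch, the spike parameters, and the `add_spikes` helper are all illustrative assumptions rather than the notebooks' actual settings:

```python
import numpy as np
from torchvision.datasets import CIFAR100

rng = np.random.default_rng(0)

# Label flipping: move 10% of the labels to a random different class.
train = CIFAR100(root="data", train=True, download=True)
labels = np.array(train.targets)
flip_idx = rng.choice(len(labels), size=int(0.1 * len(labels)), replace=False)
labels[flip_idx] = (labels[flip_idx] + rng.integers(1, 100, flip_idx.size)) % 100
train.targets = labels.tolist()

# Backdoor patch: stamp a small white square on the poisoned images.
train.data[flip_idx, :3, :3, :] = 255

def add_spikes(prices, rate=0.01, magnitude=0.2, rng=rng):
    """Spike noise: multiply a random ~1% of points by (1 ± magnitude)."""
    prices = np.asarray(prices, dtype=float).copy()
    mask = rng.random(prices.shape) < rate
    prices[mask] *= 1 + rng.choice([-magnitude, magnitude], size=mask.sum())
    return prices
```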
`Mal_vs_Hon.ipynb` now trains a convolutional VAE and computes reconstruction errors for each participant. The errors are combined with existing metrics and fed into an `IsolationForest`-based hybrid detector. The notebook reports KDE plots for reconstruction errors and hybrid scores, together with precision, recall, and ROC-AUC values.
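A rough sketch of that flow, assuming a trained convolutional VAE whose forward pass returns the reconstruction (all names here are placeholders; the notebook's real code differs):

```python
import numpy as np
import torch
from sklearn.ensemble import IsolationForest

def reconstruction_errors(vae, images):
    """Per-sample mean squared reconstruction error. Assumes `vae(images)`
    returns the reconstruction; many VAEs also return mu/logvar."""
    with torch.no_grad():
        recon = vae(images)
    return ((recon - images) ** 2).flatten(1).mean(dim=1).cpu().numpy()

# Combine the VAE errors with the existing per-participant metrics
# (placeholder names) and score everything with Isolation Forest:
# features = np.column_stack([existing_metrics, reconstruction_errors(vae, images)])
# iforest = IsolationForest(random_state=0).fit(features)
# hybrid_scores = -iforest.score_samples(features)   # higher = more anomalous
```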
- Clone the repository:

  ```bash
  git clone https://github.com/PanOlifer/hadamt.git
  cd hadamt
  ```
- Prepare the environment using the commands in the Quick Start section above. A GPU is optional but speeds up training.
- Verify the installation (optional):

  ```bash
  python LaunchingAttacks/check_libs_ok.py
  ```

  You should see TensorFlow correctly listing available devices.
- Run the attack notebook:

  ```bash
  jupyter notebook LaunchingAttacks/poison_mnist_fed.ipynb
  ```

  Execute all cells to generate poisoned MNIST data.
- Train the hybrid detector:

  ```bash
  jupyter notebook DefenseTraditionalML/Mal_vs_Hon.ipynb
  ```

  After running all cells you will obtain precision, recall, and ROC-AUC metrics for detecting malicious clients.
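For reference, those metrics come from scikit-learn and can be reproduced along these lines; the stand-in data and the 90th-percentile decision threshold below are illustrative assumptions, not the notebook's settings:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
# Stand-in data: 1 = malicious client, 0 = honest; `hybrid` is the detector score.
y_true = rng.integers(0, 2, size=200)
hybrid = rng.random(200) + 0.3 * y_true   # malicious clients score higher on average

y_pred = (hybrid > np.quantile(hybrid, 0.9)).astype(int)  # assumed cut-off
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("ROC-AUC:  ", roc_auc_score(y_true, hybrid))
```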
Below is a concise guide on how to install dependencies and run the project. Create and activate a fresh virtual environment with Python ≥ 3.8, then install the libraries exactly as shown in the Quick Start section above.
(Newer or slightly different package versions are fine if the exact ones aren't available.)
You can check whether TensorFlow/TFF and other libraries load properly by running:

```bash
python LaunchingAttacks/check_libs_ok.py
```

If the setup is correct, this script will print available devices (e.g., GPU) and successfully instantiate a simple TFF computation.
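The repository ships that script; purely as an illustration of this kind of smoke test, a minimal sketch might look like the following (note that `tensorflow` and `tensorflow_federated` are assumptions here and are not in the pinned package list above):

```python
import tensorflow as tf
import tensorflow_federated as tff

# List the devices TensorFlow can see (GPUs appear here when configured).
print(tf.config.list_physical_devices())

# Build and invoke a trivial federated computation as a sanity check.
hello = tff.federated_computation(lambda: "Hello, TFF!")
print(hello())
```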
Execute the notebooks directly or via nbconvert:
```bash
# Generate the poisoned MNIST dataset
jupyter nbconvert --execute LaunchingAttacks/poison_mnist_fed.ipynb

# Train and evaluate the hybrid detection model
jupyter nbconvert --execute DefenseTraditionalML/Mal_vs_Hon.ipynb
```
Alternatively, open them interactively:
```bash
jupyter notebook LaunchingAttacks/poison_mnist_fed.ipynb
jupyter notebook DefenseTraditionalML/Mal_vs_Hon.ipynb
```
- Clone the repo and `cd` into it.
- Create and activate a Python virtual environment.
- Install the packages listed above.
- (Optional) Run `LaunchingAttacks/check_libs_ok.py` to verify your setup.
- Execute the notebooks for attacks and defense training, either with `jupyter nbconvert --execute` or interactively with `jupyter notebook`.