- Changwon National University
- https://sites.google.com/view/moma-lab
Stars
XGBoost for label-imbalanced data: XGBoost with weighted and focal loss functions
A repository demonstrating why TabPFN doesn’t work.
Model release for "Sundial: A Family of Highly Capable Time Series Foundation Models" (ICML 2025 Oral)
Influence Estimation for Gradient-Boosted Decision Trees
Supporting code for the paper "Finding Influential Training Samples for Gradient Boosted Decision Trees"
The GenAI Forecasting Agent · LLMs × Foundation Time Series Models
Official implementation of AAAI'22 paper "ProtGNN: Towards Self-Explaining Graph Neural Networks"
Democratizing Deep-Learning for Drug Discovery, Quantum Chemistry, Materials Science and Biology
SigFormer: Signature Transformer for Deep Hedging (ICAIF 2023)
KDD 2019: Robust Anomaly Detection for Multivariate Time Series through Stochastic Recurrent Neural Network
The repository features a range of models designed to generate probabilistic forecasts within the framework of the M6 competition and includes a comprehensive evaluation framework to assess their p…
FinTSB: A Comprehensive and Practical Benchmark for Financial Time Series Forecasting
🛠️ Class-imbalanced Ensemble Learning Toolbox. | Class-imbalance / long-tailed machine learning library
[ICML 2025] Official repository of the TQNet paper: "Temporal Query Network for Efficient Multivariate Time Series Forecasting". This work is developed by the Lab of Professor Weiwei Lin (linww@scu…
Official implementation of the paper "Frequency-domain MLPs are More Effective Learners in Time Series Forecasting"
A professionally curated list of Multi-Modalities for Time Series Analysis (MM4TSA) papers and resources.
Implementations, Pre-training Code and Datasets of Large Time-Series Models
Data, Benchmarks, and methods submitted to the M6 forecasting competition
Official code, datasets and checkpoints for "Timer: Generative Pre-trained Transformers Are Large Time Series Models" (ICML 2024) and subsequent works
[ICLR 2025] Official PyTorch implementation of "Forgetting Transformer: Softmax Attention with a Forget Gate"