Releases · IntelLabs/LLMart · GitHub

Releases: IntelLabs/LLMart

2025.06

10 Jun 14:31
72ff5b8

🎉 Major Updates

  • Add support for image-text-to-text models (e.g., Llama3.2-Vision and UI-TARS)
  • Add support for additional text-to-text models (DeepAlignment, LlamaGuard3, and HarmBench Classifier)
  • Add example attack against LLaDa, a large language diffusion model
  • Add DataMapper abstraction to enable easy adaptation of existing datasets to models

🎈 Minor Updates

  • Add good_token_ids support to GCG optimizer
  • Save the best attack to disk at the last step and reduce saved state for hard-token attacks
  • Output only continuation tokens, not the full prompt, in evaluation
  • Remove check for back-to-back tags in tokenizer
  • Enable command-line modification of response via response.prefix= and response.suffix=
  • TaggedTokenizer now supports returning input_map when return_tensors=None
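The good_token_ids option above constrains which tokens a GCG-style optimizer may swap in. As a minimal illustrative sketch (not LLMart's actual implementation; function and variable names here are hypothetical), an allow-list filter on candidate swaps looks like this:

```python
import random

def sample_swaps(candidate_ids, good_token_ids=None, n_swaps=4, seed=0):
    """Pick replacement token ids for an adversarial position.

    If good_token_ids is given, candidates outside the allow-list
    are filtered out before sampling, mirroring an allow-list
    constraint on GCG-style token swaps.
    """
    if good_token_ids is not None:
        allowed = set(good_token_ids)
        candidate_ids = [t for t in candidate_ids if t in allowed]
    rng = random.Random(seed)
    return rng.sample(list(candidate_ids), min(n_swaps, len(candidate_ids)))

# Only ids from the allow-list can appear in the sampled swaps.
swaps = sample_swaps(range(100), good_token_ids={2, 3, 5, 7, 11}, n_swaps=3)
```

With the allow-list in place, every sampled swap is guaranteed to come from the permitted id set.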

🚧 Bug Fixes

  • Fix tokenizer prefix-space detection (e.g., Llama2's tokenizer)
  • Allow early stop with multi-sample datasets
  • All make commands now run in isolated virtual environments
  • max_new_tokens generates exactly that many tokens at test time regardless of eos_token
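The max_new_tokens fix above means test-time generation no longer stops early at eos_token. A minimal sketch of that behavior, using a toy step function in place of a real model (names here are hypothetical, not LLMart's API):

```python
def generate(step_fn, prompt_ids, max_new_tokens, eos_token_id=None, stop_at_eos=False):
    """Greedy generation loop.

    With stop_at_eos=False the loop always emits exactly
    max_new_tokens tokens, even if eos_token_id is produced
    early -- the fixed test-time behavior described above.
    """
    out = list(prompt_ids)
    new = []
    for _ in range(max_new_tokens):
        tok = step_fn(out)  # next-token id from the (toy) model
        out.append(tok)
        new.append(tok)
        if stop_at_eos and tok == eos_token_id:
            break
    return new

# Toy "model" that emits eos (id 0) on every step; generation
# still produces exactly max_new_tokens tokens.
new_tokens = generate(lambda ids: 0, [5, 6], max_new_tokens=4, eos_token_id=0)
```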

2025.04.1

28 Apr 16:18
20a8760

🚧 Bug Fixes

  • Fix bug where the final attack was not evaluated.

2025.04

25 Apr 01:28
58657c5

🎉 Major Updates

  • Support for Intel GPUs. We leverage PyTorch's native xpu integration to enable LLMart to run natively on Intel AI PCs
  • Support for one-click installation on Linux and Windows, powered by uv
  • Enable automatic swap batch size selection for all models and device configurations. This offers up to 2x speed-up with zero user configuration required on devices with sufficient VRAM

🎈 Minor Updates

  • Updated dependencies
  • Expanded outputs in the API of train for better modularity
  • Functionality for graceful attack run termination on KeyboardInterrupt
  • More robust tokenizer and seeding
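Graceful termination on KeyboardInterrupt, mentioned above, typically means an interrupted run still returns its best-so-far result instead of losing everything. A minimal sketch of that pattern (illustrative only; not LLMart's actual loop):

```python
def run_attack(steps, step_fn):
    """Run an optimization loop that terminates gracefully on Ctrl+C.

    A KeyboardInterrupt ends the loop early but still returns the
    best result found so far, instead of discarding the run.
    """
    best_loss, best_step = float("inf"), -1
    try:
        for step in range(steps):
            loss = step_fn(step)
            if loss < best_loss:
                best_loss, best_step = loss, step
    except KeyboardInterrupt:
        pass  # fall through and return partial results
    return best_loss, best_step

def noisy_step(step):
    if step == 3:
        raise KeyboardInterrupt  # simulate Ctrl+C mid-run
    return 10.0 - step

best_loss, best_step = run_attack(10, noisy_step)
```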

🚧 Bug Fixes

  • Fixed KV cache functionality (enabled from command line using use_kv_cache=true)
  • Fixed a bug where device usage was imbalanced because of ordered swaps

2025.03.2

04 Apr 19:46
9b0dde2

🚧 Bug Fixes

  • Fix a critical bug that caused the returned best_attack to be incorrect

2025.03.1

02 Apr 16:10
d900ea3

🎈 Minor Updates

  • Update poetry.lock
  • Fix type errors

2025.03

28 Mar 23:41
d6e9d1f

🎉 Major Updates

  • Preliminary support for automatic swap batch size optimization using accelerate.find_executable_batch_size. This can speed up single-device llmart runtime by up to 10x compared to the default value of 1.
    Enable it on the command line with per_device_bs=-1
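The accelerate.find_executable_batch_size utility named above follows a simple pattern: start with a large batch size, catch out-of-memory failures, and retry at half the size until a run succeeds. A dependency-free sketch of that idea (a re-implementation for illustration, not accelerate's actual code):

```python
def find_executable_batch_size(run_fn, starting_bs=64):
    """Halve the batch size until run_fn succeeds.

    Mirrors the idea behind accelerate's find_executable_batch_size:
    start large, catch out-of-memory failures, and retry at half
    the size until a batch size fits.
    """
    bs = starting_bs
    while bs >= 1:
        try:
            return bs, run_fn(bs)
        except MemoryError:
            bs //= 2
    raise RuntimeError("no executable batch size found")

def fake_forward(bs):
    if bs > 16:  # pretend anything above 16 runs out of memory
        raise MemoryError
    return f"ran with bs={bs}"

bs, result = find_executable_batch_size(fake_forward, starting_bs=64)
```

Because the search starts large and only shrinks on failure, it lands on the largest power-of-two fraction of the starting size that fits, with no user tuning.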

🎈 Minor Updates

  • Specify a list of banned strings whose tokens are excluded from optimization
  • Specify the maximum number of tokens to generate in validation and test-time auto-regressive sampling
  • Track and output the attack with the highest training success rate
  • Write documentation for CLI arguments
  • Upgrade requirements
  • Add Makefiles for each example folder and command
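Excluding banned strings from optimization amounts to mapping strings onto the token ids that must never be swapped in. A toy sketch with a hypothetical vocabulary and a simple substring rule (real tokenizer handling is more involved than this):

```python
def banned_token_ids(vocab, banned_strings):
    """Map banned strings to the token ids that must not be used.

    vocab: token string -> id. A token is banned if any banned
    string appears inside it (a simple substring rule for this
    sketch; real tokenizer matching is more involved).
    """
    return {
        tid for tok, tid in vocab.items()
        if any(s in tok for s in banned_strings)
    }

vocab = {"hello": 0, "<bos>": 1, "dark": 2, "darkness": 3, "light": 4}
banned = banned_token_ids(vocab, ["dark", "<"])
```

The resulting id set can then be subtracted from the optimizer's candidate pool before any swap is sampled.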

🚧 Bug Fixes

  • Fix the random_strings example crash due to missing input embeddings
  • Correctly reference HF_TOKEN on front page documentation


2025.02

07 Feb 20:42
66d5642

🎉 Major Updates

  • 🚀 1.25x speed improvements (1.5x with use_kv_cache=True)
  • 📉 Introduced autoGCG - automatic GCG tuning using Bayesian optimization
  • 💼 Data subsystem refactor to enable arbitrary dataset support
  • 🧠 Add a tutorial on how to use LLMart as a standalone library.
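autoGCG, introduced above, tunes GCG hyperparameters automatically. The release notes state it uses Bayesian optimization; the sketch below substitutes random search to stay dependency-free, showing only the shape of such a tuning loop (all names and the parameter space are hypothetical):

```python
import random

def tune(objective, space, n_trials=20, seed=0):
    """Hyperparameter tuning loop in the shape of autoGCG.

    autoGCG uses Bayesian optimization; this sketch substitutes
    random search as a stand-in. objective(params) returns an
    attack loss to minimize.
    """
    rng = random.Random(seed)
    best_params, best_loss = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in space.items()}
        loss = objective(params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

# Hypothetical GCG search space and a toy objective.
space = {"n_swaps": [1, 2, 4, 8], "top_k": [64, 128, 256]}
best_params, best_loss = tune(
    lambda p: abs(p["n_swaps"] - 4) + abs(p["top_k"] - 128) / 64, space
)
```

A Bayesian optimizer would replace the random draw with a model-guided proposal, but the evaluate-and-keep-best loop around it is the same.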

🎈 Minor Updates

  • Support for uv
  • More intuitive dataset splitting parameters
  • Disable early stopping via early_stop=False
  • Run test only via attack=None or steps=0
  • Option to enable/disable batch splitting via data.split_batches=True/False
  • Reusable closure creation
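"Reusable closure creation" refers to the pattern, common in PyTorch-style optimizers, of packaging the loss computation as a zero-argument function the optimizer can call as many times as it needs. A minimal sketch (hypothetical names, not LLMart's API):

```python
def make_closure(model_fn, inputs, target):
    """Build a reusable loss closure.

    Optimizers that may re-evaluate the loss (e.g. PyTorch-style
    optimizers taking a closure argument) can call this
    zero-argument function repeatedly; it always recomputes the
    loss on the same inputs and target.
    """
    def closure():
        pred = model_fn(inputs)
        return (pred - target) ** 2
    return closure

closure = make_closure(lambda x: 2 * x, inputs=3, target=10)
loss = closure()  # may be called repeatedly by the optimizer
```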

🚧 Bug Fixes

  • Remove world_size from optimizer
  • Fix _local_swap_count being on wrong device in optimizer

2025.01.2

04 Feb 21:11
2c1942e

What's Changed

Added support for token forcing attacks against DeepSeek-R1 models.

Full Changelog: v2025.01.1...v2025.01.2

2025.01.1

15 Jan 17:49
05e0465

What's Changed

  • Added badges to README.md
  • Added automatic OSSF Scorecard scanning
  • Update jinja2 to 3.1.5

2025.01

15 Jan 17:44
7097e3e

Initial release of LLMart
