Insights: huggingface/blog
Overview
- 0 Active issues
- 41 Merged pull requests
- 3 Open pull requests
- 0 Closed issues
- 0 New issues
41 Pull requests merged by 24 people
- Update gemma3n.md to fix broken links (#2923, merged Jun 27, 2025)
- Update gemma3n.md (#2922, merged Jun 26, 2025)
- [Gemma3n] fix link (#2921, merged Jun 26, 2025)
- Adding Gemma 3n to transformers (#2920, merged Jun 26, 2025)
- Update nvidia-training-cluster.md (#2917, merged Jun 24, 2025)
- Update hello-hf-kernels.md (#2918, merged Jun 24, 2025)
- Update inference-providers-groq.md (#2919, merged Jun 24, 2025)
- Fix typos in blog articles (#2913, merged Jun 23, 2025)
- Update flux-qlora.md for formatting edits (#2915, merged Jun 23, 2025)
- Fix sglang blog (#2916, merged Jun 23, 2025)
- transformers + sglang (#2911, merged Jun 23, 2025)
- Update diffusers-quantization.md to include training blog link (#2914, merged Jun 23, 2025)
- Fix typos in documentation files (#2910, merged Jun 21, 2025)
- fix math rendering for finetuning blog (#2909, merged Jun 19, 2025)
- Fine-Tuning FLUX.1-dev on Consumer Hardware blogpost (#2888, merged Jun 19, 2025)
- Minor Documentation and Link Updates (#2907, merged Jun 19, 2025)
- minor debug (#2908, merged Jun 19, 2025)
- Add blog SmolVLA.zh (#2892, merged Jun 18, 2025)
- [Inference providers] Groq release blogpost (#2904, merged Jun 16, 2025)
- Improve ASR Diarization Documentation and Fix Typo (#2905, merged Jun 15, 2025)
- Add L2D R2 (#2903, merged Jun 13, 2025)
- [kernels] Update co-author (#2906, merged Jun 12, 2025)
- feat: kernel hub introduction draft (#2777, merged Jun 12, 2025)
- [Inference Providers] Featherless release blogpost (#2883, merged Jun 12, 2025)
- Add: zh/nanovlm.md (#2901, merged Jun 11, 2025)
- Another blog post (#2902, merged Jun 11, 2025)
- Update diffusers-quantization.md to include the Space link (#2899, merged Jun 9, 2025)
- Finetuning should not take as long as training from scratch (#2897, merged Jun 9, 2025)
- add SmolVLAPolicy import (#2898, merged Jun 9, 2025)
- Update kv-cache.md (#2894, merged Jun 7, 2025)
- Update smolvla.md (#2895, merged Jun 7, 2025)
- Add screensuite (#2896, merged Jun 6, 2025)
- Add scaling factor to attention score calculation in kv-cache.md (#2893, merged Jun 5, 2025)
- update smolvla.md (#2889, merged Jun 4, 2025)
- Update kv-cache.md (#2891, merged Jun 4, 2025)
- [Add] KV Cache blog post for nanoVLM (#2884, merged Jun 4, 2025)
- Improve vllm colocate (#2887, merged Jun 3, 2025)
- Fix authors smolvla blog (#2886, merged Jun 3, 2025)
- smol fix (#2885, merged Jun 3, 2025)
- NO GPU left behind: Unlocking Efficiency with Co-located vLLM in TRL (#2882, merged Jun 3, 2025)
- Fix thumbnail (#2877, merged May 30, 2025)
3 Pull requests opened by 3 people
- add more benchmark numbers (#2900, opened Jun 10, 2025)
- add infrastructure-alerting blog post (#2912, opened Jun 20, 2025)
- Sentence Transformers v5.0 - Sparse Encoder Models (#2924, opened Jun 27, 2025)