Letter

TV100: a TV series dataset that pre-trained CLIP has not seen

Published: 06 June 2024

Conclusion

The era of pre-trained models has ushered in a wealth of new insights for the machine learning community. Among the myriad questions that arise, one of paramount importance is: 'Do pre-trained models possess comprehensive knowledge?' This paper seeks to address this crucial inquiry. In line with our objective, we have made publicly available a novel dataset comprising images from TV series released after 2021. This dataset holds significant potential for use in various research areas, including the evaluation of novel class discovery and long-tailed learning, among others.
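The evaluation the dataset targets follows the standard CLIP-style zero-shot protocol: each image is assigned to the class whose text embedding is most similar to its image embedding. A minimal sketch of that matching step, assuming precomputed, CLIP-like embeddings (the toy arrays and the `zero_shot_predict` helper below are illustrative, not the paper's actual pipeline):

```python
import numpy as np

def zero_shot_predict(image_embs, class_text_embs):
    """CLIP-style zero-shot classification: assign each image to the class
    whose text embedding has the highest cosine similarity."""
    # L2-normalize so the dot product equals cosine similarity.
    img = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    txt = class_text_embs / np.linalg.norm(class_text_embs, axis=1, keepdims=True)
    return (img @ txt.T).argmax(axis=1)

# Toy example: 2 classes with 4-dimensional embeddings.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(2, 4))
# Simulate images whose embeddings lie close to their class's text embedding.
images = np.vstack([
    text_embs[0] + 0.01 * rng.normal(size=4),
    text_embs[1] + 0.01 * rng.normal(size=4),
])
preds = zero_shot_predict(images, text_embs)
accuracy = (preds == np.array([0, 1])).mean()
```

On classes a pre-trained model has genuinely never seen (such as post-2021 TV series), this protocol's accuracy should drop toward chance, which is what makes the dataset useful as a probe of the model's knowledge coverage.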



Published In

Frontiers of Computer Science: Selected Publications from Chinese Universities, Volume 18, Issue 5
Oct 2024, 233 pages

Publisher

Springer-Verlag, Berlin, Heidelberg
Publication History

Published: 06 June 2024
Accepted: 05 May 2024
Received: 01 March 2024

Qualifiers

  • Letter
