Xiaoshan Yang
2020 – today
2024
- [j44] Xudong Yao, Xiaoshan Yang: Self-supervised spatial-temporal feature enhancement for one-shot video object detection. Neurocomputing 600: 128219 (2024)
- [j43] Hichem Metmer, Xiaoshan Yang: An open chest X-ray dataset with benchmarks for automatic radiology report generation in French. Neurocomputing 609: 128478 (2024)
- [j42] Xiaoshan Yang, Yan Jiang, Xiaowei Chen, Fu Jia: ICT-empowered rural e-commerce development in China: an adaptive structuration perspective. Int. J. Technol. Manag. 95(1/2): 90-119 (2024)
- [j41] Xiaoshan Yang, Baochen Xiong, Yi Huang, Changsheng Xu: Cross-Modal Federated Human Activity Recognition. IEEE Trans. Pattern Anal. Mach. Intell. 46(8): 5345-5361 (2024)
- [j40] Yifan Xu, Mengdan Zhang, Xiaoshan Yang, Changsheng Xu: Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection. IEEE Trans. Image Process. 33: 6253-6267 (2024)
- [j39] Yaguang Song, Xiaoshan Yang, Yaowei Wang, Changsheng Xu: Recovering Generalization via Pre-Training-Like Knowledge Distillation for Out-of-Distribution Visual Question Answering. IEEE Trans. Multim. 26: 837-851 (2024)
- [j38] Fang Peng, Xiaoshan Yang, Linhui Xiao, Yaowei Wang, Changsheng Xu: SgVA-CLIP: Semantic-Guided Visual Adapting of Vision-Language Models for Few-Shot Image Classification. IEEE Trans. Multim. 26: 3469-3480 (2024)
- [j37] Linhui Xiao, Xiaoshan Yang, Fang Peng, Ming Yan, Yaowei Wang, Changsheng Xu: CLIP-VG: Self-Paced Curriculum Adapting of CLIP for Visual Grounding. IEEE Trans. Multim. 26: 4334-4347 (2024)
- [j36] Jiabo Ye, Junfeng Tian, Ming Yan, Haiyang Xu, Qinghao Ye, Yaya Shi, Xiaoshan Yang, Xuwu Wang, Ji Zhang, Liang He, Xin Lin: UniQRNet: Unifying Referring Expression Grounding and Segmentation with QRNet. ACM Trans. Multim. Comput. Commun. Appl. 20(8): 246:1-246:28 (2024)
- [c40] Baochen Xiong, Xiaoshan Yang, Yaguang Song, Yaowei Wang, Changsheng Xu: Modality-Collaborative Test-Time Adaptation for Action Recognition. CVPR 2024: 26722-26731
- [c39] Jiabo Ye, Junfeng Tian, Xiaoshan Yang, Zhenru Zhang, Anwen Hu, Ming Yan, Ji Zhang, Liang He, Xin Lin: VG-Annotator: Vision-Language Models as Query Annotators for Unsupervised Visual Grounding. ICME 2024: 1-6
- [c38] Yifan Xu, Xiaoshan Yang, Yaguang Song, Changsheng Xu: Libra: Building Decoupled Vision System on Large Language Models. ICML 2024
- [c37] Linhui Xiao, Xiaoshan Yang, Fang Peng, Yaowei Wang, Changsheng Xu: HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding. ACM Multimedia 2024: 5460-5469
- [c36] Yifei Gao, Zhiyu Lin, Yunfan Yang, Jitao Sang, Xiaoshan Yang, Changsheng Xu: Staying in the Cat-and-Mouse Game: Towards Black-box Adversarial Example Detection. MMGR@MM 2024: 35-43
- [c35] Chenlin Zhao, Jiabo Ye, Yaguang Song, Ming Yan, Xiaoshan Yang, Changsheng Xu: Part-Aware Prompt Tuning for Weakly Supervised Referring Expression Grounding. MMM (3) 2024: 489-502
- [i15] Linhui Xiao, Xiaoshan Yang, Fang Peng, Yaowei Wang, Changsheng Xu: HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual Grounding. CoRR abs/2404.13400 (2024)
- [i14] Yifan Xu, Xiaoshan Yang, Yaguang Song, Changsheng Xu: Libra: Building Decoupled Vision System on Large Language Models. CoRR abs/2405.10140 (2024)
- [i13] Yuyang Wanyan, Xiaoshan Yang, Weiming Dong, Changsheng Xu: A Comprehensive Review of Few-shot Action Recognition. CoRR abs/2407.14744 (2024)
- [i12] Linhui Xiao, Xiaoshan Yang, Fang Peng, Yaowei Wang, Changsheng Xu: OneRef: Unified One-tower Expression Grounding and Segmentation with Mask Referring Modeling. CoRR abs/2410.08021 (2024)

2023
- [j35] Menghao Hu, Mingxuan Luo, Menghua Huang, Wenhua Meng, Baochen Xiong, Xiaoshan Yang, Jitao Sang: Towards a multimodal human activity dataset for healthcare. Multim. Syst. 29(1): 1-13 (2023)
- [j34] Mingxuan Luo, Xiaoshan Yang: Postpartum pelvic organ prolapse assessment via adversarial feature complementation in heterogeneous data. Neural Comput. Appl. 35(19): 13851-13860 (2023)
- [j33] Chaofan Chen, Xiaoshan Yang, Jinpeng Zhang, Bo Dong, Changsheng Xu: Category Knowledge-Guided Parameter Calibration for Few-Shot Object Detection. IEEE Trans. Image Process. 32: 1092-1107 (2023)
- [j32] Wei Wang, Junyu Gao, Xiaoshan Yang, Changsheng Xu: Many Hands Make Light Work: Transferring Knowledge From Auxiliary Tasks for Video-Text Retrieval. IEEE Trans. Multim. 25: 2661-2674 (2023)
- [j31] Yiming Li, Xiaoshan Yang, Xuhui Huang, Zhe Ma, Changsheng Xu: Zero-Shot Predicate Prediction for Scene Graph Parsing. IEEE Trans. Multim. 25: 3140-3153 (2023)
- [j30] Yaguang Song, Xiaoshan Yang, Changsheng Xu: Self-supervised Calorie-aware Heterogeneous Graph Networks for Food Recommendation. ACM Trans. Multim. Comput. Commun. Appl. 19(1s): 27:1-27:23 (2023)
- [j29] Yuyang Wanyan, Xiaoshan Yang, Xuan Ma, Changsheng Xu: Dual Scene Graph Convolutional Network for Motivation Prediction. ACM Trans. Multim. Comput. Commun. Appl. 19(3s): 137:1-137:23 (2023)
- [j28] Xuan Ma, Xiaoshan Yang, Changsheng Xu: Multi-Source Knowledge Reasoning Graph Network for Multi-Modal Commonsense Inference. ACM Trans. Multim. Comput. Commun. Appl. 19(4): 141:1-141:17 (2023)
- [j27] Hao Liu, Xiaoshan Yang, Changsheng Xu: Counterfactual Scenario-relevant Knowledge-enriched Multi-modal Emotion Reasoning. ACM Trans. Multim. Comput. Commun. Appl. 19(5s): 176:1-176:25 (2023)
- [c34] Yuyang Wanyan, Xiaoshan Yang, Chaofan Chen, Changsheng Xu: Active Exploration of Multimodal Complementarity for Few-Shot Action Recognition. CVPR 2023: 6492-6502
- [c33] Han Jiang, Xiaoshan Yang, Chaofan Chen, Changsheng Xu: Fine-grained Primitive Representation Learning for Compositional Zero-shot Classification. ICME 2023: 456-461
- [c32] Baochen Xiong, Xiaoshan Yang, Yaguang Song, Yaowei Wang, Changsheng Xu: Client-Adaptive Cross-Model Reconstruction Network for Modality-Incomplete Multimodal Federated Learning. ACM Multimedia 2023: 1241-1249
- [c31] Yiming Li, Xiaoshan Yang, Changsheng Xu: Iterative Learning with Extra and Inner Knowledge for Long-tail Dynamic Scene Graph Generation. ACM Multimedia 2023: 4707-4715
- [c30] Qinghao Ye, Haiyang Xu, Ming Yan, Chenlin Zhao, Junyang Wang, Xiaoshan Yang, Ji Zhang, Fei Huang, Jitao Sang, Changsheng Xu: mPLUG-Octopus: The Versatile Assistant Empowered by A Modularized End-to-End Multimodal LLM. ACM Multimedia 2023: 9365-9367
- [c29] Jianghai Wang, Menghao Hu, Yaguang Song, Xiaoshan Yang: Health-Oriented Multimodal Food Question Answering. MMM (1) 2023: 191-203
- [c28] Yifan Xu, Mengdan Zhang, Chaoyou Fu, Peixian Chen, Xiaoshan Yang, Ke Li, Changsheng Xu: Multi-modal Queried Object Detection in the Wild. NeurIPS 2023
- [i11] Linhui Xiao, Xiaoshan Yang, Fang Peng, Ming Yan, Yaowei Wang, Changsheng Xu: CLIP-VG: Self-paced Curriculum Adapting of CLIP via Exploiting Pseudo-Language Labels for Visual Grounding. CoRR abs/2305.08685 (2023)
- [i10] Yifan Xu, Mengdan Zhang, Chaoyou Fu, Peixian Chen, Xiaoshan Yang, Ke Li, Changsheng Xu: Multi-modal Queried Object Detection in the Wild. CoRR abs/2305.18980 (2023)
- [i9] Yifan Xu, Mengdan Zhang, Xiaoshan Yang, Changsheng Xu: Exploring Multi-Modal Contextual Knowledge for Open-Vocabulary Object Detection. CoRR abs/2308.15846 (2023)

2022
- [j26] Baochen Xiong, Xiaoshan Yang, Fan Qi, Changsheng Xu: A unified framework for multi-modal federated learning. Neurocomputing 480: 110-118 (2022)
- [j25] Yi Huang, Xiaoshan Yang, Junyu Gao, Changsheng Xu: Holographic Feature Learning of Egocentric-Exocentric Videos for Multi-Domain Action Recognition. IEEE Trans. Multim. 24: 2273-2286 (2022)
- [j24] Xinhong Ma, Xiaoshan Yang, Junyu Gao, Changsheng Xu: The Model May Fit You: User-Generalized Cross-Modal Retrieval. IEEE Trans. Multim. 24: 2998-3012 (2022)
- [j23] Yaguang Song, Junyu Gao, Xiaoshan Yang, Changsheng Xu: Learning Hierarchical Video Graph Networks for One-Stop Video Delivery. ACM Trans. Multim. Comput. Commun. Appl. 18(1): 10:1-10:23 (2022)
- [c27] Xiaoshan Yang, Baochen Xiong, Yi Huang, Changsheng Xu: Cross-Modal Federated Human Activity Recognition via Modality-Agnostic and Modality-Specific Representation Learning. AAAI 2022: 3063-3071
- [c26] Yiming Li, Xiaoshan Yang, Changsheng Xu: Dynamic Scene Graph Generation via Anticipatory Pre-training. CVPR 2022: 13864-13873
- [c25] Jiabo Ye, Junfeng Tian, Ming Yan, Xiaoshan Yang, Xuwu Wang, Ji Zhang, Liang He, Xin Lin: Shifting More Attention to Visual Backbone: Query-modulated Refinement Networks for End-to-End Visual Grounding. CVPR 2022: 15481-15491
- [c24] Yi Huang, Xiaoshan Yang, Ji Zhang, Changsheng Xu: Relative Alignment Network for Source-Free Multimodal Video Domain Adaptation. ACM Multimedia 2022: 1652-1660
- [c23] Chaofan Chen, Xiaoshan Yang, Ming Yan, Changsheng Xu: Attribute-guided Dynamic Routing Graph Network for Transductive Few-shot Learning. ACM Multimedia 2022: 6259-6268
- [i8] Jiabo Ye, Junfeng Tian, Ming Yan, Xiaoshan Yang, Xuwu Wang, Ji Zhang, Liang He, Xin Lin: Shifting More Attention to Visual Backbone: Query-modulated Refinement Networks for End-to-End Visual Grounding. CoRR abs/2203.15442 (2022)
- [i7] Fang Peng, Xiaoshan Yang, Changsheng Xu: SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for Few-shot Image Classification. CoRR abs/2211.16191 (2022)

2021
- [j22] Wei Wang, Junyu Gao, Xiaoshan Yang, Changsheng Xu: Learning Coarse-to-Fine Graph Neural Networks for Video-Text Retrieval. IEEE Trans. Multim. 23: 2386-2397 (2021)
- [j21] Junyu Gao, Xiaoshan Yang, Yingying Zhang, Changsheng Xu: Unsupervised Video Summarization via Relation-Aware Assignment Learning. IEEE Trans. Multim. 23: 3203-3214 (2021)
- [j20] Fan Qi, Xiaoshan Yang, Changsheng Xu: Emotion Knowledge Driven Video Highlight Detection. IEEE Trans. Multim. 23: 3999-4013 (2021)
- [j19] Yi Huang, Xiaoshan Yang, Junyu Gao, Jitao Sang, Changsheng Xu: Knowledge-driven Egocentric Multimodal Activity Recognition. ACM Trans. Multim. Comput. Commun. Appl. 16(4): 133:1-133:133 (2021)
- [j18] Xuan Ma, Xiaoshan Yang, Junyu Gao, Changsheng Xu: Health Status Prediction with Local-Global Heterogeneous Behavior Graph. ACM Trans. Multim. Comput. Commun. Appl. 17(4): 129:1-129:21 (2021)
- [c22] Chaofan Chen, Xiaoshan Yang, Changsheng Xu, Xuhui Huang, Zhe Ma: ECKPN: Explicit Class Knowledge Propagation Network for Transductive Few-Shot Learning. CVPR 2021: 6596-6605
- [c21] Yi Huang, Xiaoshan Yang, Changsheng Xu: Multimodal Global Relation Knowledge Distillation for Egocentric Action Anticipation. ACM Multimedia 2021: 245-254
- [c20] Fan Qi, Xiaoshan Yang, Changsheng Xu: Zero-shot Video Emotion Recognition via Multimodal Protagonist-aware Transformer Network. ACM Multimedia 2021: 1074-1083
- [c19] Jie Chen, Qixiang Ye, Xiaoshan Yang, S. Kevin Zhou, Xiaopeng Hong, Li Zhang: Few-shot Learning for Multi-Modality Tasks. ACM Multimedia 2021: 5673-5674
- [c18] Jinxing Pan, Xiaoshan Yang, Yi Huang, Changsheng Xu: Few-shot Egocentric Multimodal Activity Recognition. MMAsia 2021: 23:1-23:7
- [i6] Xuan Ma, Xiaoshan Yang, Junyu Gao, Changsheng Xu: Health Status Prediction with Local-Global Heterogeneous Behavior Graph. CoRR abs/2103.12456 (2021)
- [i5] Chaofan Chen, Xiaoshan Yang, Changsheng Xu, Xuhui Huang, Zhe Ma: ECKPN: Explicit Class Knowledge Propagation Network for Transductive Few-shot Learning. CoRR abs/2106.08523 (2021)
- [i4] Jinfeng Wei, Yunxin Wang, Mengli Guo, Pei Lv, Xiaoshan Yang, Mingliang Xu: Dynamic Hypergraph Convolutional Networks for Skeleton-Based Action Recognition. CoRR abs/2112.10570 (2021)

2020
- [j17] Fan Qi, Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu: Discriminative multimodal embedding for event classification. Neurocomputing 395: 160-169 (2020)
- [j16] Shan Zhang, Xiaoshan Yang, Yanxia Liu, Changsheng Xu: Asymmetric multi-stage CNNs for small-scale pedestrian detection. Neurocomputing 409: 12-26 (2020)
- [j15] Cuirong Long, Xiaoshan Yang, Changsheng Xu: Cross-domain personalized image captioning. Multim. Tools Appl. 79(45-46): 33333-33348 (2020)
- [c17] Yingying Zhang, Junyu Gao, Xiaoshan Yang, Chang Liu, Yan Li, Changsheng Xu: Find Objects and Focus on Highlights: Mining Object Semantics for Video Highlight Detection via Graph Neural Networks. AAAI 2020: 12902-12909
- [c16] Yiming Li, Xiaoshan Yang, Changsheng Xu: Structured Neural Motifs: Scene Graph Parsing via Enhanced Context. MMM (2) 2020: 175-188
- [c15] Xuecheng Ning, Xiaoshan Yang, Changsheng Xu: Multi-hop Interactive Cross-Modal Retrieval. MMM (2) 2020: 681-693
- [i3] Peng Jia, Ruiyu Ning, Ruiqi Sun, Xiaoshan Yang, Dongmei Cai: Data-driven Image Restoration with Option-driven Learning for Big and Small Astronomical Image Datasets. CoRR abs/2011.03696 (2020)
2010 – 2019
2019
- [j14] Xiaoshan Yang, Changsheng Xu: Image Captioning by Asking Questions. ACM Trans. Multim. Comput. Commun. Appl. 15(2s): 55:1-55:19 (2019)
- [c14] Tingting Xie, Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu, Ioannis Patras: Exploring Feature Representation and Training Strategies in Temporal Action Localization. ICIP 2019: 1605-1609
- [c13] Wenhua Meng, Shan Zhang, Xudong Yao, Xiaoshan Yang, Changsheng Xu, Xiaowen Huang: Biomedia ACM MM Grand Challenge 2019: Using Data Enhancement to Solve Sample Unbalance. ACM Multimedia 2019: 2588-2592
- [c12] Weiming Zhang, Yi Huang, Wanting Yu, Xiaoshan Yang, Wei Wang, Jitao Sang: Multimodal Attribute and Feature Embedding for Activity Recognition. MMAsia 2019: 44:1-44:7
- [c11] Yi Huang, Xiaoshan Yang, Changsheng Xu: Time-Guided High-Order Attention Model of Longitudinal Heterogeneous Healthcare Data. PRICAI (1) 2019: 57-70
- [i2] Tingting Xie, Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu, Ioannis Patras: Exploring Feature Representation and Training strategies in Temporal Action Localization. CoRR abs/1905.10608 (2019)
- [i1] Yi Huang, Xiaoshan Yang, Changsheng Xu: Time-Guided High-Order Attention Model of Longitudinal Heterogeneous Healthcare Data. CoRR abs/1912.00773 (2019)

2018
- [j13] Junyu Gao, Tianzhu Zhang, Xiaoshan Yang, Changsheng Xu: P2T: Part-to-Target Tracking via Deep Regression Learning. IEEE Trans. Image Process. 27(6): 3074-3086 (2018)
- [j12] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu: Deep-Structured Event Modeling for User-Generated Photos. IEEE Trans. Multim. 20(8): 2100-2113 (2018)
- [j11] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu: Text2Video: An End-to-end Learning Framework for Expressing Text With Videos. IEEE Trans. Multim. 20(9): 2360-2370 (2018)
- [j10] Yifan Jiao, Zhetao Li, Shucheng Huang, Xiaoshan Yang, Bin Liu, Tianzhu Zhang: Three-Dimensional Attention-Based Deep Ranking Model for Video Highlight Detection. IEEE Trans. Multim. 20(10): 2693-2705 (2018)
- [j9] Qicong Zhang, Ligu Zhu, Xiaoshan Yang: A New Incentive Policy for Improving Data Service in P2P Networks. Wirel. Pers. Commun. 103(1): 231-245 (2018)
- [c10] Yikun Sheng, Xiaoshan Yang, Changsheng Xu: A Standalone Demo for Quiz Game "Describe and Guess". MIPR 2018: 206-207
- [c9] Yikun Sheng, Xiaoshan Yang, Xueliang Liu, Changsheng Xu: Attribute-Assisted Domain Transfer from Image to Sketch. MIPR 2018: 287-292
- [c8] Fan Qi, Xiaoshan Yang, Changsheng Xu: A Unified Framework for Multimodal Domain Adaptation. ACM Multimedia 2018: 429-437

2017
- [j8] Junyu Gao, Tianzhu Zhang, Xiaoshan Yang, Changsheng Xu: Deep Relative Tracking. IEEE Trans. Image Process. 26(4): 1845-1858 (2017)
- [c7] Xiaoshan Yang, Ligu Zhu, Qicong Zhang: Research on endurance evaluation for NAND flash-based solid state drive. ICIS 2017: 523-526
- [c6] Xiaoshan Yang, Ligu Zhu, Qicong Zhang, Dongyu Feng: Research on Evaluation Method of Big Data Storage Utilization. ACIT/CSII/BCD 2017: 368-372
- [c5] Yifan Jiao, Xiaoshan Yang, Tianzhu Zhang, Shucheng Huang, Changsheng Xu: Video Highlight Detection via Deep Ranking Modeling. PSIVT 2017: 28-39

2016
- [j7] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu, Shuicheng Yan, M. Shamim Hossain, Ahmed Ghoneim: Deep Relative Attributes. IEEE Trans. Multim. 18(9): 1832-1842 (2016)
- [j6] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu: Semantic Feature Mining for Video Event Understanding. ACM Trans. Multim. Comput. Commun. Appl. 12(4): 55:1-55:22 (2016)
- [c4] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu: Abnormal Event Discovery in User Generated Photos. ACM Multimedia 2016: 47-51

2015
- [j5] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu: A new discriminative coding method for image classification. Multim. Syst. 21(2): 133-145 (2015)
- [j4] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu: Cross-Domain Feature Learning in Multimedia. IEEE Trans. Multim. 17(1): 64-78 (2015)
- [j3] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu, M. Shamim Hossain: Automatic Visual Concept Learning for Social Event Understanding. IEEE Trans. Multim. 17(3): 346-358 (2015)
- [j2] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu, Ming-Hsuan Yang: Boosted Multifeature Learning for Cross-Domain Transfer. ACM Trans. Multim. Comput. Commun. Appl. 11(3): 35:1-35:18 (2015)

2013
- [j1] Jianbing Shen, Xiaoshan Yang, Xuelong Li, Yunde Jia: Intrinsic Image Decomposition Using Optimization and User Scribbles. IEEE Trans. Cybern. 43(2): 425-436 (2013)
- [c3] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu: Locality discriminative coding for image classification. ICIMCS 2013: 52-55
- [c2] Xiaoshan Yang, Tianzhu Zhang, Changsheng Xu, Min Xu: Graph-Guided Fusion Penalty Based Sparse Coding for Image Classification. PCM 2013: 475-484

2011
- [c1] Jianbing Shen, Xiaoshan Yang, Yunde Jia, Xuelong Li: Intrinsic images using optimization. CVPR 2011: 3481-3487