
Multi-Granularity Detector for Vulnerability Fixes

Published: 01 August 2023

Abstract

With the increasing reliance on Open Source Software, users are exposed to third-party library vulnerabilities. Software Composition Analysis (SCA) tools have been created to alert users of such vulnerabilities; SCA requires the identification of vulnerability-fixing commits. Prior works have proposed methods that automatically identify such commits, but the task is highly challenging: only a very small minority of commits fix vulnerabilities, and code changes can be noisy and difficult to analyze. We observe that this noise can occur at different levels of detail, making it hard to detect vulnerability fixes accurately. To address these challenges and boost the effectiveness of prior works, we propose MiDas (Multi-Granularity Detector for Vulnerability Fixes). Unlike prior works, MiDas constructs a separate neural network for each level of code-change granularity (commit level, file level, hunk level, and line level), following the natural organization of code changes, and then uses an ensemble model that combines all base models to produce the final prediction. This design allows MiDas to better cope with the noisy and highly imbalanced nature of vulnerability-fixing commit data. In addition, to reduce the human effort required to inspect code changes, we design an effort-aware adjustment of MiDas's outputs based on commit length. Our evaluation demonstrates that MiDas outperforms the current state-of-the-art baseline on both Java- and Python-based datasets in terms of AUC, by 4.9% and 13.7%, respectively. In terms of two effort-aware metrics, EffortCost@L and Popt@L, MiDas also outperforms the state-of-the-art baseline by up to 28.2% and 15.9% on Java, and by up to 60% and 51.4% on Python, respectively.
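
The abstract describes the architecture only at a high level. As a rough illustration, the sketch below shows how per-granularity base models could be combined by an ensemble; the feature extractor (pooled CodeBERT-style embeddings), the hidden sizes, and the learned combiner are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch of a multi-granularity ensemble in the spirit of MiDas,
# based only on the abstract above; dimensions and layers are assumptions.
import torch
import torch.nn as nn

GRANULARITIES = ("commit", "file", "hunk", "line")

class GranularityClassifier(nn.Module):
    """One base model per granularity, scoring a code change at that level."""

    def __init__(self, embed_dim: int = 768):  # 768 matches CodeBERT-style embeddings
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(embed_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, change_embedding: torch.Tensor) -> torch.Tensor:
        # change_embedding: pooled features of the change at this granularity.
        return torch.sigmoid(self.head(change_embedding))

class MultiGranularityEnsemble(nn.Module):
    """Combines the four per-granularity scores into one final prediction."""

    def __init__(self):
        super().__init__()
        self.bases = nn.ModuleDict({g: GranularityClassifier() for g in GRANULARITIES})
        self.combiner = nn.Linear(len(GRANULARITIES), 1)

    def forward(self, features: dict) -> torch.Tensor:
        # Stack each base model's probability, then learn how to weight them.
        scores = torch.cat([self.bases[g](features[g]) for g in GRANULARITIES], dim=-1)
        return torch.sigmoid(self.combiner(scores))

if __name__ == "__main__":
    model = MultiGranularityEnsemble()
    feats = {g: torch.randn(2, 768) for g in GRANULARITIES}
    print(model(feats).shape)  # torch.Size([2, 1]): P(vulnerability-fixing) per commit
```

The effort-aware adjustment is likewise only named in the abstract, not specified. A plausible reading, sketched below under that assumption, is to re-rank commits by predicted probability per changed line, so that likely fixes that are cheap to inspect surface first; the field names `prob` and `loc` are hypothetical.

```python
# Illustrative effort-aware re-ranking; dividing by commit length is an
# assumption, since the abstract does not give the exact adjustment formula.
def effort_aware_rank(commits: list) -> list:
    """Order commits by predicted probability per changed line, descending."""
    return sorted(commits, key=lambda c: c["prob"] / max(c["loc"], 1), reverse=True)

if __name__ == "__main__":
    candidates = [
        {"id": "a1", "prob": 0.90, "loc": 400},  # likely fix, but a long commit
        {"id": "b2", "prob": 0.70, "loc": 12},   # slightly less likely, cheap to inspect
    ]
    print([c["id"] for c in effort_aware_rank(candidates)])  # ['b2', 'a1']
```

Under such a ranking, effort-aware metrics like EffortCost@L and Popt@L reward surfacing true fixes within the first L percent of inspected lines.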

Information & Contributors

Information

Published In

cover image IEEE Transactions on Software Engineering
IEEE Transactions on Software Engineering  Volume 49, Issue 8
Aug. 2023
288 pages

Publisher

IEEE Press

Qualifiers

  • Research-article

Cited By

  • VF-Detector. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI), 2024, pp. 5817–5825. https://doi.org/10.24963/ijcai.2024/643
  • Automated Commit Intelligence by Pre-training. ACM Transactions on Software Engineering and Methodology, vol. 33, no. 8, 2024, pp. 1–30. https://doi.org/10.1145/3674731
  • Encoding Version History Context for Better Code Representation. Proceedings of the 21st International Conference on Mining Software Repositories (MSR), 2024, pp. 631–636. https://doi.org/10.1145/3643991.3644929
  • Refining ChatGPT-Generated Code: Characterizing and Mitigating Code Quality Issues. ACM Transactions on Software Engineering and Methodology, vol. 33, no. 5, 2024, pp. 1–26. https://doi.org/10.1145/3643674
  • VFCFinder: Pairing Security Advisories and Patches. Proceedings of the 19th ACM Asia Conference on Computer and Communications Security (AsiaCCS), 2024, pp. 1128–1142. https://doi.org/10.1145/3634737.3657007
