DOI: 10.1145/3292500.3330789
Research article

Conversion Prediction Using Multi-task Conditional Attention Networks to Support the Creation of Effective Ad Creatives

Published: 25 July 2019

Abstract

Accurately predicting conversions in advertising is generally challenging because conversions occur infrequently. In this paper, we propose a new framework that supports the creation of high-performing ad creatives, including accurate prediction of ad-creative text conversions before delivery to consumers. The framework rests on three key ideas: multi-task learning, conditional attention, and attention highlighting. Multi-task learning improves conversion prediction accuracy by predicting clicks and conversions simultaneously, which mitigates the data imbalance caused by rare conversions. Conditional attention focuses the attention over each ad creative according to its genre and target gender, further improving conversion prediction accuracy. Attention highlighting visualizes important words and/or phrases based on the conditional attention weights. We evaluated the proposed framework on actual delivery history data (14,000 creatives from Gunosy Inc., each displayed more than a certain number of times) and confirmed that these ideas improve conversion prediction performance and highlight noteworthy words according to the creatives' attributes.
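
To make the three ideas concrete, below is a minimal sketch of a multi-task conditional attention model: a recurrent text encoder whose attention is conditioned on a categorical attribute (e.g., a genre and target-gender combination), feeding separate click and conversion heads trained jointly, with the attention weights reusable for highlighting. The framework choice (PyTorch), layer names, and sizes are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a multi-task conditional attention network.
import torch
import torch.nn as nn

class ConditionalAttentionNet(nn.Module):
    def __init__(self, vocab_size, n_conditions, emb_dim=128, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        # The condition (e.g., genre x target gender) selects a context vector
        # against which attention scores over the creative text are computed.
        self.cond_context = nn.Embedding(n_conditions, hidden_dim)
        # A shared attended representation feeds two task heads.
        self.click_head = nn.Linear(hidden_dim, 1)
        self.conv_head = nn.Linear(hidden_dim, 1)

    def forward(self, tokens, condition):
        h, _ = self.rnn(self.embed(tokens))                   # (B, T, H)
        ctx = self.cond_context(condition)                    # (B, H)
        scores = torch.bmm(h, ctx.unsqueeze(2)).squeeze(2)    # (B, T)
        attn = torch.softmax(scores, dim=1)                   # weights, usable for highlighting
        rep = torch.bmm(attn.unsqueeze(1), h).squeeze(1)      # (B, H)
        return self.click_head(rep), self.conv_head(rep), attn

# Joint (multi-task) loss: binary cross-entropy on clicks plus conversions.
model = ConditionalAttentionNet(vocab_size=10000, n_conditions=20)
tokens = torch.randint(1, 10000, (4, 12))      # 4 dummy creatives, 12 tokens each
condition = torch.randint(0, 20, (4,))         # condition index per creative
clicks, convs = torch.rand(4), torch.rand(4)   # dummy click/conversion labels
click_logit, conv_logit, attn = model(tokens, condition)
bce = nn.functional.binary_cross_entropy_with_logits
loss = bce(click_logit.squeeze(1), clicks) + bce(conv_logit.squeeze(1), convs)
loss.backward()
```

In this sketch, sharing the encoder between the click and conversion heads is what lets the abundant click signal support the sparse conversion signal, and the per-condition context embedding is what makes the attention "conditional."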

References

[1]
Javad Azimi, Ruofei Zhang, Yang Zhou, Vidhya Navalpakkam, Jianchang Mao, and Xiaoli Fern. 2012. Visual Appearance of Display Ads and Its Effect on Click Through Rate. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management. 495--504.
[2]
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. International Conference on Learning Representations (2014).
[3]
Norris I Bruce, BPS Murthi, and Ram C Rao. 2017. A dynamic model for digital advertising: The effects of creative format, message content, and targeting on engagement. Journal of marketing research, Vol. 54, 2 (2017), 202--218.
[4]
Rich Caruana. 1997. Multitask learning. Machine learning, Vol. 28, 1 (1997), 41--75.
[5]
Deepayan Chakrabarti, Deepak Agarwal, and Vanja Josifovski. 2008. Contextual advertising by combining relevance with click feedback. In Proceedings of the 17th international conference on World Wide Web. 417--426.
[6]
Olivier Chapelle, Eren Manavoglu, and Romer Rosales. 2015. Simple and scalable response prediction for display advertising. ACM Transactions on Intelligent Systems and Technology (TIST), Vol. 5, 4 (2015), 61.
[7]
Junxuan Chen, Baigui Sun, Hao Li, Hongtao Lu, and Xian-Sheng Hua. 2016. Deep ctr prediction in display advertising. In Proceedings of the 2016 ACM on Multimedia Conference. 811--820.
[8]
Haibin Cheng, Roelof van Zwol, Javad Azimi, Eren Manavoglu, Ruofei Zhang, Yang Zhou, and Vidhya Navalpakkam. 2012. Multimedia features for click prediction of new ads in display advertising. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining. 777--785.
[9]
Heng-Tze Cheng, Levent Koc, Jeremiah Harmsen, Tal Shaked, Tushar Chandra, Hrishi Aradhye, Glen Anderson, Greg Corrado, Wei Chai, Mustafa Ispir, et almbox. 2016. Wide & deep learning for recommender systems. In Proc. of the 1st Workshop on Deep Learning for Recommender Systems. 7--10.
[10]
Xiao Chu, Wanli Ouyang, Wei Yang, and Xiaogang Wang. 2015. Multi-task recurrent neural network for immediacy prediction. In Proc. of the IEEE international conference on computer vision. 3352--3360.
[11]
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In Advances in neural information processing systems Workshop.
[12]
Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, Vol. 12, 8 (2011), 2493--2537.
[13]
Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for youtube recommendations. In Proc. of the 10th ACM Conference on Recommender Systems. 191--198.
[14]
Huifeng Guo, Ruiming Tang, Yunming Ye, Zhenguo Li, and Xiuqiang He. 2014. Deepfm: a factorization-machine based neural network for ctr prediction. CoRR arXiv:1703.04247 (2014).
[15]
Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR arXiv:1207.0580 (2012).
[16]
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, Vol. 9, 8 (1997), 1735--1780.
[17]
Kalervo J"arvelin and Jaana Kek"al"ainen. 2002. Cumulated Gain-based Evaluation of IR Techniques. ACM Trans. Inf. Syst., Vol. 20, 4 (2002), 422--446.
[18]
Yuchin Juan, Damien Lefortier, and Olivier Chapelle. 2017. Field-aware factorization machines in a real-world online advertising system. In Proc. of the 26th International Conference on World Wide Web Companion. 680--688.
[19]
Yuchin Juan, Yong Zhuang, Wei-Sheng Chin, and Chih-Jen Lin. 2016. Field-aware factorization machines for CTR prediction. In Proc. of the 10th ACM Conference on Recommender Systems. 43--50.
[20]
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR arXiv:1412.6980 (2014).
[21]
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems. 1097--1105.
[22]
Taku Kudo. 2006. Mecab: Yet another part-of-speech and morphological analyzer. http://mecab.sourceforge.jp.
[23]
Jianxun Lian, Xiaohuan Zhou, Fuzheng Zhang, Zhongxia Chen, Xing Xie, and Guangzhong Sun. 2018. xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems. In Proc. of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
[24]
Zhouhan Lin, Minwei Feng, Cicero Nogueira dos Santos, Mo Yu, Bing Xiang, Bowen Zhou, and Yoshua Bengio. 2017. A structured self-attentive sentence embedding. International Conference on Learning Representations (2017).
[25]
Wu Liu, Tao Mei, Yongdong Zhang, Cherry Che, and Jiebo Luo. 2015. Multi-task deep visual-semantic embedding for video thumbnail selection. In Proc. of the IEEE Conference on Computer Vision and Pattern Recognition. 3707--3715.
[26]
Weiwen Liu, Ruiming Tang, Jiajin Li, Jinkai Yu, Huifeng Guo, Xiuqiang He, and Shengyu Zhang. 2018. Field-aware Probabilistic Embedding Neural Network for CTR Prediction. In Proc. of the 12th ACM Conference on Recommender Systems. 412--416.
[27]
Quan Lu, Shengjun Pan, Liang Wang, Junwei Pan, Fengdan Wan, and Hongxia Yang. 2017. A Practical Framework of Conversion Rate Prediction for Online Display Advertising. In Proc. of the ADKDD'17. 9:1--9:9.
[28]
Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task Sequence to Sequence Learning. In International Conference on Learning Representations.
[29]
Surabhi Punjabi and Priyanka Bhatt. 2018. Robust Factorization Machines for User Response Prediction. In Proc. of the 2018 World Wide Web Conference. 669--678.
[30]
Steffen Rendle. 2010. Factorization machines. In Data Mining (ICDM), 2010 IEEE 10th International Conference on. 995--1000.
[31]
Matthew Richardson, Ewa Dominowska, and Robert Ragno. 2007. Predicting Clicks: Estimating the Click-Through Rate for New Ads. In Proc. of the 16th International World Wide Web Conference(WWW-2007).
[32]
Daiki Shimada, Ryunosuke Kotani, and Hitoshi Iyatomi. 2016. Document classification through image-based character embedding and wildcard training. In Big Data (Big Data), 2016 IEEE International Conference on. 3922--3927.
[33]
Masatoshi Suzuki, Koji Matsuda, Satoshi Sekine, Naoaki Okazaki, and Kentaro Inui. 2018. A Joint Neural Model for Fine-Grained Named Entity Classification of Wikipedia Articles. IEICE Transactions on Information and Systems, Vol. E101.D, 1 (2018), 73--81.
[34]
Stamatina Thomaidou, Konstantinos Leymonis, and Michalis Vazirgiannis. 2013. GrammAds: Keyword and ad creative generator for online advertising campaigns. In Digital Enterprise Design and Management 2013. 33--44.
[35]
Stamatina Thomaidou, Kyriakos Liakopoulos, and Michalis Vazirgiannis. 2014. Toward an integrated framework for automated development and optimization of online advertising campaigns. Intelligent Data Analysis, Vol. 18, 6 (2014), 1199--1227.
[36]
Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a Next-Generation Open Source Framework for Deep Learning. In Proceedings of Workshop on Machine Learning Systems (LearningSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS). http://learningsys.org/papers/LearningSys_2015_paper_33.pdf
[37]
Sato Toshinori. 2015. Neologism dictionary based on the language resources on the Web for Mecab. https://github.com/neologd/mecab-ipadic-neologd
[38]
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. {n. d.}. Show, attend and tell: Neural image caption generation with visual attention. In International conference on machine learning. 2048--2057.
[39]
Hongxia Yang, Quan Lu, Angus Xianen Qiu, and Chun Han. 2016a. Large Scale CVR Prediction through Dynamic Transfer Learning of Global and Local Features. In Proc. of the 5th International Workshop on Big Data, Streams and Heterogeneous Source Mining: Algorithms, Systems, Programming Models and Applications at KDD 2016. 103--119.
[40]
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016b. Hierarchical attention networks for document classification. In Proc. of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 1480--1489.
[41]
Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang. 2014. Facial landmark detection by deep multi-task learning. In European Conference on Computer Vision. 94--108.

Cited By

  • (2024) Multi-task Conditional Attention Network for Conversion Prediction in Logistics Advertising. Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 5028-5037. https://doi.org/10.1145/3637528.3671549. Online publication date: 25-Aug-2024.
  • (2023) Unbiased Delayed Feedback Label Correction for Conversion Rate Prediction. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2456-2466. https://doi.org/10.1145/3580305.3599536. Online publication date: 6-Aug-2023.
  • (2023) Visualization System to Analyze Browsing Trends of Internet Video Advertisements. 2023 27th International Conference Information Visualisation (IV), 1-6. https://doi.org/10.1109/IV60283.2023.00011. Online publication date: 25-Jul-2023.
  • (2022) Ad Creative Discontinuation Prediction with Multi-Modal Multi-Task Neural Survival Networks. Applied Sciences, 12(7), 3594. https://doi.org/10.3390/app12073594. Online publication date: 1-Apr-2022.

Index Terms

  1. Conversion Prediction Using Multi-task Conditional Attention Networks to Support the Creation of Effective Ad Creatives

        Recommendations

        Comments

        Please enable JavaScript to view thecomments powered by Disqus.

        Information & Contributors

        Information

        Published In

        KDD '19: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining
        July 2019
        3305 pages
        ISBN: 9781450362016
        DOI: 10.1145/3292500

        Publisher

        Association for Computing Machinery, New York, NY, United States


        Author Tags

        1. attention mechanism
        2. multi-task learning
        3. online advertising
        4. recurrent neural network
        5. supporting ad creative creation


        Conference

        KDD '19

        Acceptance Rates

        KDD '19 paper acceptance rate: 110 of 1,200 submissions (9%)
        Overall acceptance rate: 1,133 of 8,635 submissions (13%)

        Article Metrics

        • Downloads (Last 12 months): 24
        • Downloads (Last 6 weeks): 3
        Reflects downloads up to 14 Dec 2024

