nep-cmp New Economics Papers
on Computational Economics
Issue of 2022‒08‒15
thirty-two papers chosen by



  1. ETF Portfolio Construction via Neural Network trained on Financial Statement Data By Jinho Lee; Sungwoo Park; Jungyu Ahn; Jonghun Kwak
  2. Solving barrier options under stochastic volatility using deep learning By Weilong Fu; Ali Hirsa
  3. Conditionally Elicitable Dynamic Risk Measures for Deep Reinforcement Learning By Anthony Coache; Sebastian Jaimungal; Álvaro Cartea
  4. What constitutes a machine-learning-driven business model? A taxonomy of B2B start-ups with machine learning at their core By Vetter, Oliver A.; Hoffmann, Felix; Pumplun, Luisa; Buxmann, Peter
  5. Accelerating Machine Learning Training Time for Limit Order Book Prediction By Mark Joseph Bennett
  6. Shai-am: A Machine Learning Platform for Investment Strategies By Jonghun Kwak; Jungyu Ahn; Jinho Lee; Sungwoo Park
  7. Assessing a pay-for-performance conservation program using an agent-based modeling framework By Lee, Seungyub; Heberling, Matthew T.; Nietch, Christopher; Safwat, Amr
  8. Using Machine Learning to Test the Consistency of Food Insecurity Measures By Aveiga, Alexis H. Villacis; Badruddoza, Syed; Mayorga, Joaquin; Mishra, Ashok K.
  9. County-level USDA Crop Progress and Condition data, machine learning, and commodity market surprises By Cao, An N.Q.; Gebrekidan, Bisrat Haile; Heckelei, Thomas; Robe, Michel A.
  10. Generative Adversarial Networks Applied to Synthetic Financial Scenarios Generation By Christophe Geissler; Nicolas Morizet; Matteo Rizzato; Julien Wallart
  11. Identify Arbitrage Using Machine Learning on Multi-stock Pair Trading Price Forecasting By Zhijie Zhang
  12. An Agent-Based Model With Realistic Financial Time Series: A Method for Agent-Based Models Validation By Luis Goncalves de Faria
  13. Estimating value at risk: LSTM vs. GARCH By Weronika Ormaniec; Marcin Pitera; Sajad Safarveisi; Thorsten Schmidt
  14. DDPG based on multi-scale strokes for financial time series trading strategy By Jun-Cheng Chen; Cong-Xiao Chen; Li-Juan Duan; Zhi Cai
  15. The Virtue of Complexity in Return Prediction By Bryan T. Kelly; Semyon Malamud; Kangying Zhou
  16. Machine Learning Adoption based on the TOE Framework: A Quantitative Study By Zöll, Anne; Eitle, Verena; Buxmann, Peter
  17. Deep Learning for Systemic Risk Measures By Yichen Feng; Ming Min; Jean-Pierre Fouque
  18. Pricing multi-asset derivatives by variational quantum algorithms By Kenji Kubo; Koichi Miyamoto; Kosuke Mitarai; Keisuke Fujii
  19. Learning Underspecified Models By In-Koo Cho; Jonathan Libgober
  20. PREDICTING COMPANY INNOVATIVENESS BY ANALYSING THE WEBSITE DATA OF FIRMS: A COMPARISON ACROSS DIFFERENT TYPES OF INNOVATION By Sander Sõna; Jaan Masso; Shakshi Sharma; Priit Vahter; Rajesh Sharma
  21. Baseline validation of a bias-mitigated loan screening model based on the European Banking Authority's trust elements of Big Data & Advanced Analytics applications using Artificial Intelligence By Alessandro Danovi; Marzio Roma; Davide Meloni; Stefano Olgiati; Fernando Metelli
  22. Assessing and Comparing Fixed-Target Forecasts of Arctic Sea Ice: Glide Charts for Feature-Engineered Linear Regression and Machine Learning Models By Francis X. Diebold; Maximilian Goebel; Philippe Goulet Coulombe
  23. AI in Asset Management and Rebellion Research By Jimei Shen; Yihan Mo; Christopher Plimpton; Mustafa Kaan Basaran
  24. q-Learning in Continuous Time By Yanwei Jia; Xun Yu Zhou
  25. Promotheus: An End-to-End Machine Learning Framework for Optimizing Markdown in Online Fashion E-commerce By Eleanor Loh; Jalaj Khandelwal; Brian Regan; Duncan A. Little
  26. An Efficient Application of the Extended Path Algorithm in Matlab with Examples By Andrew Binning
  27. A Random Forest approach of the Evolution of Inequality of Opportunity in Mexico By Thibaut Plassot; Isidro Soloaga; Pedro J. Torres
  28. Predicting Economic Welfare with Images on Wealth By Jeonggil Song
  29. Integrating Prediction and Attribution to Classify News By Nelson P. Rayl; Nitish R. Sinha
  30. On data-driven chance constraint learning for mixed-integer optimization problems By Alcantara Mata, Antonio; Ruiz Mora, Carlos
  31. Neural network based human reliability analysis method in production systems By Rasoul Jamshidi; Mohammad Ebrahim Sadeghi
  32. DSGE Nash: solving Nash games in macro models By Minesso, Massimo Ferrari; Pagliari, Maria Sole

  1. By: Jinho Lee; Sungwoo Park; Jungyu Ahn; Jonghun Kwak
    Abstract: Recently, the application of advanced machine learning methods for asset management has become one of the most intriguing topics. Unfortunately, the application of these methods, such as deep neural networks, is difficult due to the data shortage problem. To address this issue, we propose a novel approach using neural networks to construct a portfolio of exchange traded funds (ETFs) based on the financial statement data of their components. Although a number of ETFs and ETF-managed portfolios have emerged in the past few decades, the ability to apply neural networks to manage ETF portfolios is limited because the number of ETFs is relatively small and their trading histories relatively short compared to those of individual stocks. Therefore, we use the data of individual stocks to train our neural networks to predict the future performance of individual stocks, and use these predictions together with the portfolio deposit file (PDF) to construct a portfolio of ETFs. Multiple experiments have been performed, and we have found that our proposed method outperforms the baselines. We believe that our approach can be most beneficial when managing recently listed ETFs, such as thematic ETFs, for which there is relatively little historical data for training advanced machine learning methods.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.01187&r=
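    A minimal sketch of the aggregation step described above, assuming a toy holdings matrix in place of a real portfolio deposit file; the scores, the sparsity, and the softmax-style allocation are illustrative choices, not the paper's:
```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_etfs = 500, 20

# Stand-in for the neural network's stock-level performance forecasts.
stock_scores = rng.normal(size=n_stocks)

# Toy holdings matrix: each ETF holds a sparse random basket; rows sum to 1.
holdings = rng.random((n_etfs, n_stocks)) * (rng.random((n_etfs, n_stocks)) < 0.05)
holdings /= holdings.sum(axis=1, keepdims=True)

# ETF score = holdings-weighted average of its constituents' forecasts.
etf_scores = holdings @ stock_scores

top_k = 5
selected = np.argsort(etf_scores)[-top_k:][::-1]
weights = np.exp(etf_scores[selected])
weights /= weights.sum()                  # softmax-style allocation over top ETFs
print(dict(zip(selected.tolist(), weights.round(3))))
```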
  2. By: Weilong Fu; Ali Hirsa
    Abstract: We develop an unsupervised deep learning method to solve the barrier options under the Bergomi model. The neural networks serve as the approximate option surfaces and are trained to satisfy the PDE as well as the boundary conditions. Two singular terms are added to the neural networks to deal with the non-smooth and discontinuous payoff at the strike and barrier levels so that the neural networks can replicate the asymptotic behaviors of barrier options at short maturities. After that, vanilla options and barrier options are priced in a single framework. Also, neural networks are employed to deal with the high dimensionality of the function input in the Bergomi model. Once trained, the neural network solution yields fast and accurate option values.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.00524&r=
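    A minimal PINN-style sketch of the idea, simplified to a one-factor Black-Scholes PDE with an up-and-out boundary rather than the paper's Bergomi model, and omitting its singular terms; the architecture and sampling are illustrative:
```python
import torch

torch.manual_seed(0)
r, sigma, K, B, T = 0.02, 0.3, 1.0, 1.5, 1.0  # rate, vol, strike, barrier, maturity

# u(t, s): approximate price surface of an up-and-out call below the barrier.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(t, s):
    u = net(torch.stack([t, s], dim=1)).squeeze(1)
    u_t, u_s = torch.autograd.grad(u.sum(), (t, s), create_graph=True)
    u_ss = torch.autograd.grad(u_s.sum(), s, create_graph=True)[0]
    return u_t + 0.5 * sigma**2 * s**2 * u_ss + r * s * u_s - r * u

for step in range(2000):
    t = (torch.rand(256) * T).requires_grad_(True)
    s = (torch.rand(256) * B).requires_grad_(True)
    interior = pde_residual(t, s).pow(2).mean()  # PDE satisfied inside the domain

    s_T = torch.rand(256) * B                    # terminal payoff max(s - K, 0)
    u_T = net(torch.stack([torch.full_like(s_T, T), s_T], dim=1)).squeeze(1)
    terminal = (u_T - torch.clamp(s_T - K, min=0.0)).pow(2).mean()

    t_b = torch.rand(256) * T                    # knock-out boundary u(t, B) = 0
    u_b = net(torch.stack([t_b, torch.full_like(t_b, B)], dim=1)).squeeze(1)

    loss = interior + terminal + u_b.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```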
  3. By: Anthony Coache; Sebastian Jaimungal; Álvaro Cartea
    Abstract: We propose a novel framework to solve risk-sensitive reinforcement learning (RL) problems where the agent optimises time-consistent dynamic spectral risk measures. Based on the notion of conditional elicitability, our methodology constructs (strictly consistent) scoring functions that are used as penalizers in the estimation procedure. Our contribution is threefold: we (i) devise an efficient approach to estimate a class of dynamic spectral risk measures with deep neural networks, (ii) prove that these dynamic spectral risk measures may be approximated to any arbitrary accuracy using deep neural networks, and (iii) develop a risk-sensitive actor-critic algorithm that uses full episodes and does not require any additional nested transitions. We compare our conceptually improved reinforcement learning algorithm with the nested simulation approach and illustrate its performance in two settings: statistical arbitrage and portfolio allocation on both simulated and real data.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2206.14666&r=
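    The simplest instance of a strictly consistent scoring function is the pinball loss for a quantile (value-at-risk); the sketch below illustrates elicitability only and is not the paper's construction for spectral risk measures:
```python
import numpy as np

def pinball_score(forecast, outcomes, alpha):
    """Strictly consistent scoring function for the alpha-quantile."""
    diff = outcomes - forecast
    return np.mean(np.maximum(alpha * diff, (alpha - 1.0) * diff))

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)
grid = np.linspace(-3, 3, 601)
scores = [pinball_score(q, x, 0.95) for q in grid]
print(grid[int(np.argmin(scores))])  # ~1.645, the true 95% quantile of N(0,1)
```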
  4. By: Vetter, Oliver A.; Hoffmann, Felix; Pumplun, Luisa; Buxmann, Peter
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:133080&r=
  5. By: Mark Joseph Bennett
    Abstract: Financial firms are interested in simulation to discover whether a given algorithm involving financial machine learning will operate profitably. While many versions of this type of algorithm have been published recently by researchers, the focus herein is on a particular machine learning training project due to its explainable nature and the availability of high frequency market data. For this task, hardware acceleration is expected to reduce the time required for the financial machine learning researcher to obtain results. As the majority of the time can be spent in classifier training, there is interest in faster training steps. A published Limit Order Book algorithm for predicting stock market direction is our subject, and the machine learning training process can be time-intensive, especially considering the iterative nature of model development. To remedy this, we deploy Graphics Processing Units (GPUs) produced by NVIDIA and available in the data center, where the computer architecture is geared to parallel high-speed arithmetic operations. In the studied configuration, this leads to significantly faster training time, allowing more efficient and extensive model development.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2206.09041&r=
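    A minimal sketch of the kind of change involved, timing the same toy training loop on CPU and, if present, an NVIDIA GPU; the model and data are invented stand-ins for the limit-order-book classifier:
```python
import time
import torch

def train(device, steps=200):
    model = torch.nn.Sequential(
        torch.nn.Linear(100, 256), torch.nn.ReLU(), torch.nn.Linear(256, 3)
    ).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    x = torch.randn(4096, 100, device=device)        # order-book style features
    y = torch.randint(0, 3, (4096,), device=device)  # up / flat / down labels
    loss_fn = torch.nn.CrossEntropyLoss()
    t0 = time.perf_counter()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    if device == "cuda":
        torch.cuda.synchronize()                     # wait for queued kernels
    return time.perf_counter() - t0

print("cpu :", train("cpu"))
if torch.cuda.is_available():
    print("cuda:", train("cuda"))
```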
  6. By: Jonghun Kwak; Jungyu Ahn; Jinho Lee; Sungwoo Park
    Abstract: The finance industry has adopted machine learning (ML) as a form of quantitative research to support better investment decisions, yet there are several challenges often overlooked in practice. (1) ML code tends to be unstructured and ad hoc, which hinders cooperation with others. (2) Resource requirements and dependencies vary depending on which algorithm is used, so a flexible and scalable system is needed. (3) It is difficult for domain experts in traditional finance to apply their experience and knowledge in ML-based strategies unless they acquire expertise in recent technologies. This paper presents Shai-am, an ML platform integrated with our own Python framework. The platform leverages existing modern open-source technologies, managing containerized pipelines for ML-based strategies with unified interfaces to solve the aforementioned issues. Each strategy implements the interface defined in the core framework. The framework is designed to enhance reusability and readability, facilitating collaborative work in quantitative research. Shai-am aims to be a pure AI asset manager for solving various tasks in financial markets.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.00436&r=
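    A hypothetical sketch of what a unified strategy interface can look like; the names below are invented for illustration, not taken from Shai-am:
```python
from abc import ABC, abstractmethod

import pandas as pd

class Strategy(ABC):
    """Contract that every containerised strategy would implement."""

    @abstractmethod
    def train(self, data: pd.DataFrame) -> None:
        """Fit the model on historical market data."""

    @abstractmethod
    def weights(self, data: pd.DataFrame) -> pd.Series:
        """Return target portfolio weights indexed by asset, summing to 1."""

class EqualWeight(Strategy):
    def train(self, data):  # nothing to fit
        pass

    def weights(self, data):
        return pd.Series(1.0 / data.shape[1], index=data.columns)
```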
  7. By: Lee, Seungyub; Heberling, Matthew T.; Nietch, Christopher; Safwat, Amr
    Keywords: Environmental Economics and Policy, Agricultural and Food Policy, Institutional and Behavioral Economics
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:ags:aaea22:322301&r=
  8. By: Aveiga, Alexis H. Villacis; Badruddoza, Syed; Mayorga, Joaquin; Mishra, Ashok K.
    Keywords: Food Consumption/Nutrition/Food Safety, International Development, Production Economics
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:ags:aaea22:322472&r=
  9. By: Cao, An N.Q.; Gebrekidan, Bisrat Haile; Heckelei, Thomas; Robe, Michel A.
    Keywords: Agricultural and Food Policy, Agricultural Finance, Research Methods/Statistical Methods
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:ags:aaea22:322281&r=
  10. By: Christophe Geissler (Advestis); Nicolas Morizet (Advestis); Matteo Rizzato (Advestis); Julien Wallart (Fujitsu Systems Europe)
    Abstract: The finance industry is producing an increasing amount of datasets that investment professionals can consider to be influential on the price of financial assets. These datasets were initially mainly limited to exchange data, namely price, capitalization and volume. Their coverage has now considerably expanded to include, for example, macroeconomic data, supply and demand of commodities, balance sheet data and, more recently, extra-financial data such as ESG scores. This broadening of the factors retained as influential constitutes a serious challenge for statistical modeling. Indeed, the instability of the correlations between these factors makes it practically impossible to identify the joint laws needed to construct scenarios. Fortunately, spectacular advances in Deep Learning in recent years have given rise to GANs. GANs are a type of generative machine learning model that produces new data samples with the same characteristics as a training data distribution in an unsupervised way, avoiding data assumptions and human-induced biases. In this work, we explore the use of GANs for synthetic financial scenario generation. This pilot study is the result of a collaboration between Fujitsu and Advestis, and it will be followed by a thorough exploration of the use cases that can benefit from the proposed solution. We propose a GAN-based algorithm that allows the replication of multivariate data representing several properties (including, but not limited to, price, market capitalization, ESG score, controversy score, ...) of a set of stocks. This approach differs from examples in the financial literature, which are mainly focused on the reproduction of temporal asset price scenarios. We also propose several metrics to evaluate the quality of the data generated by the GANs. The approach is well suited to the generation of scenarios, the time direction simply arising as a subsequent (possibly conditioned) generation of data points drawn from the learned distribution. Our method allows the simulation of high-dimensional scenarios (compared to the ≲ 10 features currently employed in most recent use cases), where network complexity is reduced thanks to carefully performed feature engineering and selection. Complete results will be presented in a forthcoming study.
    Keywords: Generative Adversarial Networks,Data Augmentation,Financial Scenarios,Risk Management
    Date: 2022–07–11
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03716692&r=
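    A bare-bones tabular GAN sketch of the general idea, with invented data and architecture; the paper's features, networks and evaluation metrics are not reproduced here:
```python
import torch

torch.manual_seed(0)
n_features, n_noise = 8, 16
# Toy "real" data: correlated Gaussian feature vectors.
real_data = torch.randn(2048, n_features) @ torch.randn(n_features, n_features)

G = torch.nn.Sequential(torch.nn.Linear(n_noise, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, n_features))
D = torch.nn.Sequential(torch.nn.Linear(n_features, 64), torch.nn.LeakyReLU(0.2),
                        torch.nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = torch.nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_data[torch.randint(0, len(real_data), (128,))]
    fake = G(torch.randn(128, n_noise))

    # Discriminator: push real towards 1, generated towards 0.
    d_loss = (bce(D(real), torch.ones(128, 1))
              + bce(D(fake.detach()), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: fool the discriminator.
    g_loss = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```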
  11. By: Zhijie Zhang
    Abstract: Aims: A market-neutral pair-trading strategy on two highly cointegrated stocks can be extended to a higher-dimensional arbitrage algorithm. In this paper, a linear combination of multiple cointegrated stocks is introduced to overcome the limitations of the traditional one-to-one pair-trading technique. Methods: First, stocks from diversified industries are pre-partitioned using a clustering algorithm to break industrial boundaries. Then, cointegrated stock combinations are formed using an ElasticNet model boosted by the AdaBoost algorithm. Results: All three price-prediction indicators chosen for performance evaluation improved significantly: MSE by 32.21% compared to OLS, MAE by 37.06%, and MAPE by 37.73%. (Portfolio return performance is still under construction, with indicators including cumulative return, drawdown and Sharpe ratio. The comparison will be against the buy-and-hold strategy, a common benchmark for any portfolio.)
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:toh:dssraa:127&r=
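    A sketch of the modelling pipeline on synthetic data: cluster stocks by co-movement, then fit an AdaBoost ensemble with ElasticNet base learners; hyperparameters are illustrative, and sklearn >= 1.2 spells the argument `estimator`:
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import AdaBoostRegressor
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
returns = rng.normal(size=(100, 250))        # 100 stocks x 250 days (toy)

# Step 1: group stocks by return co-movement rather than by industry.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(returns)

# Step 2: within the largest cluster, forecast one stock from its peers.
largest = np.bincount(clusters).argmax()
peers = returns[clusters == largest]
X, y = peers[1:].T, peers[0]                 # peers as features, target stock
model = AdaBoostRegressor(estimator=ElasticNet(alpha=0.01), n_estimators=50,
                          random_state=0).fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))
```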
  12. By: Luis Goncalves de Faria
    Abstract: This paper proposes a methodology to empirically validate an agent-based model (ABM) that generates artificial financial time series data comparable with real-world financial data. The approach is based on comparing the results of the ABM against the stylised facts -- the statistical properties of empirical financial time series. The stylised facts appear to be universal and are observed across different markets, financial instruments and time periods, hence they can serve to validate models of financial markets. If a given model does not consistently replicate these stylised facts, then we can reject it as being empirically inadequate. We discuss each stylised fact and the empirical evidence for it, and introduce appropriate metrics for testing its presence in model-generated data. Moreover, we investigate the ability of our model to correctly reproduce these stylised facts. We validate our model against a comprehensive list of empirical phenomena that qualify as stylised facts, of both low- and high-frequency financial data, that can be addressed by means of a relatively simple ABM of financial markets. This procedure shows whether the model, as an abstraction of reality, has a meaningful empirical counterpart, and demonstrates the significance of this analysis for ABM validation and empirical reliability.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2206.09772&r=
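    A sketch of typical stylised-fact metrics: excess kurtosis for fat tails, near-zero autocorrelation of raw returns, and positive, slowly decaying autocorrelation of absolute returns. The GARCH-like toy series below is for illustration only, not the paper's model:
```python
import numpy as np
from scipy import stats

def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def stylised_facts(returns):
    return {
        "excess_kurtosis": float(stats.kurtosis(returns)),  # > 0 for fat tails
        "acf_returns_lag1": autocorr(returns, 1),           # ~ 0: no linear memory
        "acf_abs_lag1": autocorr(np.abs(returns), 1),       # > 0 under clustering
        "acf_abs_lag20": autocorr(np.abs(returns), 20),     # slow decay
    }

# A volatility-clustering toy series passes; i.i.d. noise does not.
rng = np.random.default_rng(0)
z = rng.standard_normal(20_000)
h = np.empty_like(z); h[0] = 1.0
for t in range(1, len(z)):
    h[t] = 0.05 + 0.1 * h[t - 1] * z[t - 1] ** 2 + 0.85 * h[t - 1]
r = np.sqrt(h) * z
print(stylised_facts(r))
print(stylised_facts(rng.standard_normal(20_000)))
```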
  13. By: Weronika Ormaniec; Marcin Pitera; Sajad Safarveisi; Thorsten Schmidt
    Abstract: Estimating value-at-risk on time series data with possibly heteroscedastic dynamics is a highly challenging task. Typically, we face a small data problem in combination with a high degree of non-linearity, causing difficulties for both classical and machine-learning estimation algorithms. In this paper, we propose a novel value-at-risk estimator using a long short-term memory (LSTM) neural network and compare its performance to benchmark GARCH estimators. Our results indicate that even for a relatively short time series, the LSTM could be used to refine or monitor risk estimation processes and correctly identify the underlying risk dynamics in a non-parametric fashion. We evaluate the estimator on both simulated and market data with a focus on heteroscedasticity, finding that LSTM exhibits a similar performance to GARCH estimators on simulated data, whereas on real market data it is more sensitive towards increasing or decreasing volatility and outperforms all existing estimators of value-at-risk in terms of exception rate and mean quantile score.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.10539&r=
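    A sketch of the GARCH side of the comparison: a GARCH(1,1) variance filter with fixed (not estimated) parameters yields one-step 99% value-at-risk, and the exception rate checks calibration; the LSTM alternative and parameter estimation are omitted:
```python
import numpy as np
from scipy.stats import norm

def garch_var(returns, omega, alpha, beta, level=0.99):
    """One-step-ahead VaR (a positive number) from a GARCH(1,1) filter."""
    h = np.empty(len(returns) + 1)
    h[0] = returns.var()
    for t in range(len(returns)):
        h[t + 1] = omega + alpha * returns[t] ** 2 + beta * h[t]
    return -norm.ppf(1 - level) * np.sqrt(h[1:])   # VaR for r_{t+1}

rng = np.random.default_rng(0)
r = rng.standard_normal(10_000) * 0.01             # toy return series
var = garch_var(r[:-1], omega=1e-6, alpha=0.1, beta=0.85)
exceptions = np.mean(r[1:] < -var)                 # should be close to 1%
print("exception rate:", exceptions)
```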
  14. By: Jun-Cheng Chen; Cong-Xiao Chen; Li-Juan Duan; Zhi Cai
    Abstract: With the development of artificial intelligence, more and more financial practitioners apply deep reinforcement learning to financial trading strategies. However, it is difficult to extract accurate features from single-scale time series, which are characterised by considerable noise, high non-stationarity and non-linearity, making it hard to obtain high returns. In this paper, we extract a multi-scale feature matrix on multiple time scales of financial time series, following the classic financial theory of Chan Theory, and put forward a multi-scale stroke deep deterministic policy gradient reinforcement learning model (MSSDDPG) to search for the optimal trading strategy. We carried out experiments on the Dow Jones and S&P 500 indices of U.S. stocks, and on China's CSI 300 and SSE Composite, evaluating the performance of our approach against the turtle trading strategy, the Deep Q-network (DQN) reinforcement learning strategy, and the deep deterministic policy gradient (DDPG) reinforcement learning strategy. The results show that our approach achieves the best performance on the CSI 300 and SSE Composite, and an outstanding result on the Dow Jones and S&P 500.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.10071&r=
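    A sketch of a multi-scale feature matrix, summarising one price series at several time scales and aligning the summaries on a common index; the paper's Chan-theory "stroke" construction is not reproduced:
```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2022-01-03", periods=5 * 24 * 60, freq="min")
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 1e-4, len(idx)))), idx)

def scale_features(p, rule):
    bars = p.resample(rule).ohlc().dropna()
    ret = np.log(bars["close"]).diff()
    return pd.DataFrame({f"ret_{rule}": ret,
                         f"range_{rule}": (bars["high"] - bars["low"]) / bars["close"]})

# Align 5-minute, hourly and daily summaries on a common (daily) index.
feats = pd.concat([scale_features(price, r).resample("1D").last()
                   for r in ["5min", "1h", "1D"]], axis=1).dropna()
print(feats.head())
```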
  15. By: Bryan T. Kelly; Semyon Malamud; Kangying Zhou
    Abstract: The extant literature predicts market returns with “simple” models that use only a few parameters. Contrary to conventional wisdom, we theoretically prove that simple models severely understate return predictability compared to “complex” models in which the number of parameters exceeds the number of observations. We empirically document the virtue of complexity in US equity market return prediction. Our findings establish the rationale for modeling expected returns through machine learning.
    JEL: C1 C45 G1
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:30217&r=
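    A sketch of the p >> n setting on synthetic data: ridge regression on random ReLU features with far more parameters than observations, solved via the dual (kernel) form; the empirical design follows the paper, not this toy:
```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d, p = 120, 1000, 15, 6000       # p >> n_train

X = rng.standard_normal((n_train + n_test, d))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] * X[:, 2] + 0.5 * rng.standard_normal(len(X))

W = rng.standard_normal((d, p)) / np.sqrt(d)      # random feature weights
Z = np.maximum(X @ W, 0.0) / np.sqrt(p)           # ReLU random features

Ztr, Zte, ytr, yte = Z[:n_train], Z[n_train:], y[:n_train], y[n_train:]
lam = 1e-3
# Ridge solution in dual form: beta = Z'(ZZ' + lam I)^{-1} y, an n x n solve.
beta = Ztr.T @ np.linalg.solve(Ztr @ Ztr.T + lam * np.eye(n_train), ytr)
r2 = 1 - np.mean((yte - Zte @ beta) ** 2) / np.var(yte)
print("out-of-sample R^2 with p >> n:", round(r2, 3))
```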
  16. By: Zöll, Anne; Eitle, Verena; Buxmann, Peter
    Abstract: The increasing use of machine learning (ML) in businesses is ubiquitous in research and in practice. Even though ML has become one of the key technologies in recent years, organizations have difficulties adopting ML applications. Implementing ML is a challenging task for organizations due to its new programming paradigm and the significant organizational changes it entails. In order to increase the adoption rate of ML, our study examines which generic and specific factors of the technological-organizational-environmental (TOE) framework leverage ML adoption. We validate the impact of these factors on ML adoption through a quantitative research design. Our study contributes to research by extending the TOE framework with ML specifications and by demonstrating a moderating effect of firm size on the relationship between technology competence and ML adoption.
    Date: 2022–07–07
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:133079&r=
  17. By: Yichen Feng; Ming Min; Jean-Pierre Fouque
    Abstract: The aim of this paper is to study a new methodological framework for systemic risk measures by applying deep learning as a tool to compute the optimal strategy of capital allocations. Under this new framework, systemic risk measures can be interpreted as the minimal amount of cash that secures the aggregated system by allocating capital to the single institutions before aggregating the individual risks. This problem has no explicit solution except in very limited situations. Deep learning is increasingly receiving attention in financial modeling and risk management, and we propose deep-learning-based algorithms to solve both the primal and dual problems of the risk measures, and thus to learn the fair risk allocations. In particular, our method for the dual problem involves a training philosophy inspired by the well-known Generative Adversarial Networks (GAN) approach and a newly designed direct estimation of the Radon-Nikodym derivative. We close the paper with substantial numerical studies of the subject and provide interpretations of the risk allocations associated with the systemic risk measures. In the particular case of exponential preferences, numerical experiments demonstrate excellent performance of the proposed algorithm when compared with the optimal explicit solution as a benchmark.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.00739&r=
  18. By: Kenji Kubo; Koichi Miyamoto; Kosuke Mitarai; Keisuke Fujii
    Abstract: Pricing a multi-asset derivative is an important problem in financial engineering, both theoretically and practically. Although it is suitable to numerically solve partial differential equations to calculate the prices of certain types of derivatives, the computational complexity increases exponentially as the number of underlying assets increases in some classical methods, such as the finite difference method. Therefore, there are efforts to reduce the computational complexity by using quantum computation. However, when solving with naive quantum algorithms, the target derivative price is embedded in the amplitude of one basis of the quantum state, and so an exponential complexity is required to obtain the solution. To avoid the bottleneck, the previous study [Miyamoto and Kubo, IEEE Transactions on Quantum Engineering, 3, 1-25 (2022)] utilizes the fact that the present price of a derivative can be obtained by its discounted expected value at any future point in time and shows that the quantum algorithm can reduce the complexity. In this paper, to make the algorithm feasible to run on a small quantum computer, we use variational quantum simulation to solve the Black-Scholes equation and compute the derivative price from the inner product between the solution and a probability distribution. This avoids the measurement bottleneck of the naive approach and would provide quantum speedup even in noisy quantum computers. We also conduct numerical experiments to validate our method. Our method will be an important breakthrough in derivative pricing using small-scale quantum computers.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.01277&r=
  19. By: In-Koo Cho; Jonathan Libgober
    Abstract: This paper examines whether one can learn to play an optimal action while knowing only part of the true specification of the environment. We choose the optimal pricing problem as our laboratory, where the monopolist is endowed with an underspecified model of market demand but can observe market outcomes. In contrast to conventional learning models, where the model specification is complete and exogenously fixed, the monopolist has to learn the specification and the parameters of the demand curve from the data. We formulate the learning dynamics as an algorithm that forecasts the optimal price based on the data, following the machine learning literature (Shalev-Shwartz and Ben-David (2014)). Inspired by PAC learnability, we develop a new notion of learnability by requiring that the algorithm must produce an accurate forecast with a reasonable amount of data uniformly over the class of models consistent with the known part of the true specification. In addition, we assume that the monopolist has a lexicographic preference over the payoff and the complexity cost of the algorithm, seeking an algorithm with a minimum number of parameters subject to PAC-guaranteeing the optimal solution (Rubinstein (1986)). We show that, for the set of demand curves with strictly decreasing uniformly Lipschitz continuous marginal revenue curves, the optimal algorithm recursively estimates the slope and the intercept of a linear demand curve, even if the actual demand curve is not linear. The monopolist chooses a misspecified model to save computational cost while learning the true optimal decision uniformly over the set of underspecified demand curves.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.10140&r=
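    A sketch of the learning dynamic on toy data: the firm fits a linear demand curve by least squares even though true demand is nonlinear, and prices at the model-implied optimum each period; the functional forms, noise and bounds are illustrative:
```python
import numpy as np

rng = np.random.default_rng(0)
true_demand = lambda p: 10.0 * p ** -1.5          # nonlinear; firm's model is linear

a, b = 1.0, -0.5                                  # beliefs about q = a + b p
prices, history = [], []
for t in range(500):
    p = np.clip(-a / (2 * b), 0.1, 20.0)          # argmax of p(a + b p), zero cost
    p += rng.normal(0, 0.05)                      # small experimentation noise
    q = true_demand(p) + rng.normal(0, 0.1)
    history.append((p, q))
    P = np.array([[1.0, pi] for pi, _ in history])
    Q = np.array([qi for _, qi in history])
    a, b = np.linalg.lstsq(P, Q, rcond=None)[0]   # re-estimate intercept and slope
    prices.append(p)
print("long-run price:", np.mean(prices[-50:]).round(3))
```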
  20. By: Sander Sõna; Jaan Masso; Shakshi Sharma; Priit Vahter; Rajesh Sharma
    Abstract: This paper investigates which of the core types of innovation can be best predicted from the website data of firms. In particular, we focus on four distinct key standard types of innovation – product, process, organisational, and marketing innovation in firms. Web-mining of textual data on the websites of firms from Estonia, combined with the application of artificial intelligence (AI) methods, turned out to be a suitable approach for predicting firm-level innovation indicators. The key novel addition to the existing literature is the finding that web-mining is more applicable to predicting marketing innovation than the other three core types of innovation. As AI-based models are often black-box in nature, for transparency we use an explainable AI approach (SHAP - SHapley Additive exPlanations), looking at the most important words predicting a particular type of innovation. Our models confirm that the marketing innovation indicator from survey data was clearly related to marketing-related terms on the firms' websites. In contrast, the results on the relevant words on websites for other innovation indicators were much less clear. Our analysis concludes that the effectiveness of web-scraping and web-text-based AI approaches in predicting cost-effective, granular and timely firm-level innovation indicators varies according to the type of innovation considered.
    Keywords: Innovation, Marketing Innovation, Community Innovation Survey (CIS), Machine learning, Neural network, Explainable AI, SHAP
    Date: 2022
    URL: http://d.repec.org/n?u=RePEc:mtk:febawb:143&r=
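    A sketch of the explanation step: for a linear text classifier, SHAP values under a feature-independence assumption reduce to coef * (x - mean(x)), surfacing the words behind a predicted label. The corpus below is a toy; the paper applies the shap library to its own models:
```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["new campaign brand advertising launch",
        "production process automation efficiency",
        "brand marketing promotion social media",
        "factory process robotics automation"]
labels = np.array([1, 0, 1, 0])              # 1 = marketing innovation (toy)

vec = TfidfVectorizer()
X = vec.fit_transform(docs).toarray()
clf = LogisticRegression().fit(X, labels)

phi = clf.coef_[0] * (X - X.mean(axis=0))    # linear-SHAP word contributions
words = np.array(vec.get_feature_names_out())
top = np.argsort(phi[0])[::-1][:3]
print("top words for doc 0:", list(zip(words[top], phi[0][top].round(3))))
```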
  21. By: Alessandro Danovi; Marzio Roma; Davide Meloni; Stefano Olgiati; Fernando Metelli
    Abstract: The goal of our 4-phase research project was to test whether a machine-learning-based loan screening application (5D) could detect bad loans subject to the following constraints: a) utilize a minimal-optimal number of features unrelated to the credit history, gender, race or ethnicity of the borrower (BiMOPT features); b) comply with the European Banking Authority and EU Commission principles on trustworthy Artificial Intelligence (AI). All datasets have been anonymized and pseudonymized. In Phase 0 we selected a subset of 10 BiMOPT features out of a total of 84; in Phase I we trained 5D to detect bad loans in a historical dataset extracted from a mandatory report to the Bank of Italy consisting of 7,289 non-performing loans (NPLs) closed in the period 2010-2021; in Phase II we assessed the baseline performance of 5D on a distinct validation dataset consisting of an active portfolio of 63,763 outstanding loans (performing and non-performing) with a total financed value of over EUR 11.5 billion as of December 31, 2021; in Phase III we will monitor the baseline performance for a period of 5 years (2023-27) to assess the prospective real-world bias mitigation and performance of the 5D system and its utility in credit and fintech institutions. At baseline, 5D correctly detected 1,461 bad loans out of a total of 1,613 (Sensitivity = 0.91, Prevalence = 0.0253, Positive Predictive Value = 0.19), and correctly classified 55,866 out of the other 62,150 exposures (Specificity = 0.90, Negative Predictive Value = 0.997). Our preliminary results support the hypothesis that Big Data & Advanced Analytics applications based on AI can mitigate bias and improve consumer protection in the loan screening process without compromising the efficacy of the credit risk assessment. Further validation is required to assess the prospective performance and utility of 5D in credit and fintech institutions.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2206.08938&r=
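    The reported baseline metrics can be reproduced from the stated counts, a quick consistency check:
```python
# 1,461 of 1,613 bad loans detected within 63,763 exposures;
# 55,866 of the remaining 62,150 correctly classified.
tp, fn = 1461, 1613 - 1461
tn, fp = 55866, 62150 - 55866

sensitivity = tp / (tp + fn)          # 0.91
specificity = tn / (tn + fp)          # 0.90
prevalence = (tp + fn) / 63763        # 0.0253
ppv = tp / (tp + fp)                  # 0.19
npv = tn / (tn + fn)                  # 0.997
print(f"{sensitivity:.2f} {specificity:.2f} {prevalence:.4f} {ppv:.2f} {npv:.3f}")
```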
  22. By: Francis X. Diebold; Maximilian Goebel; Philippe Goulet Coulombe
    Abstract: We use "glide charts" (plots of sequences of root mean squared forecast errors as the target date is approached) to evaluate and compare fixed-target forecasts of Arctic sea ice. We first use them to evaluate the simple feature-engineered linear regression (FELR) forecasts of Diebold and Goebel (2021), and to compare FELR forecasts to naive pure-trend benchmark forecasts. Then we introduce a much more sophisticated feature-engineered machine learning (FEML) model, and we use glide charts to evaluate FEML forecasts and compare them to a FELR benchmark. Our substantive results include the frequent appearance of predictability thresholds, which differ across months, meaning that accuracy initially fails to improve as the target date is approached but then increases progressively once a threshold lead time is crossed. Also, we find that FEML can improve appreciably over FELR when forecasting "turning point" months in the annual cycle at horizons of one to three months ahead.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2206.10721&r=
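    A glide chart is forecast RMSE plotted against lead time to a fixed target date; a predictability threshold appears where the curve starts to fall. The numbers below are simulated for illustration, not the paper's results:
```python
import matplotlib.pyplot as plt
import numpy as np

lead_times = np.arange(12, 0, -1)      # months before the fixed target date
rmse_naive = np.full(12, 0.60)         # flat benchmark accuracy
# Model RMSE flat until a 6-month threshold, then improving steadily.
rmse_model = np.where(lead_times > 6, 0.58, 0.58 - 0.07 * (6 - lead_times))

plt.plot(lead_times, rmse_naive, label="pure-trend benchmark")
plt.plot(lead_times, rmse_model, label="feature-engineered model")
plt.gca().invert_xaxis()               # read left to right as the date nears
plt.xlabel("lead time (months)"); plt.ylabel("RMSE"); plt.legend()
plt.title("Glide chart (simulated)")
plt.show()
```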
  23. By: Jimei Shen; Yihan Mo; Christopher Plimpton; Mustafa Kaan Basaran
    Abstract: On October 30th, 2021, Rebellion Research's CEO announced in a Q3 2021 Letter to Investors that Rebellion's AI Global Equity strategy returned +6.8% gross for the first three quarters of 2021. "It's no surprise," Alex told us. "Our Machine Learning global strategy has a history of outperforming the S&P 500 for 14 years." In 2021, Rebellion's brokerage accounts could be opened in over 70 countries, and Rebellion's research covered over 50 countries. Besides being an AI asset management company, Rebellion also defines itself as a top-tier, global machine learning think tank. Alex planned to build a Rebellion ML & AI ecosystem. Should Rebellion stay in the asset management area or jump into other areas? How could Rebellion strategically move towards a broader area? What were Rebellion's new or alternative business models?
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2206.14876&r=
  24. By: Yanwei Jia; Xun Yu Zhou
    Abstract: We study the continuous-time counterpart of Q-learning for reinforcement learning (RL) under the entropy-regularized, exploratory diffusion process formulation introduced by Wang et al. (2020). As the conventional (big) Q-function collapses in continuous time, we consider its first-order approximation and coin the term "(little) q-function". This function is related to the instantaneous advantage rate function as well as the Hamiltonian. We develop a "q-learning" theory around the q-function that is independent of time discretization. Given a stochastic policy, we jointly characterize the associated q-function and value function by martingale conditions of certain stochastic processes. We then apply the theory to devise different actor-critic algorithms for solving underlying RL problems, depending on whether or not the density function of the Gibbs measure generated from the q-function can be computed explicitly. One of our algorithms interprets the well-known Q-learning algorithm SARSA, and another recovers a policy gradient (PG) based continuous-time algorithm proposed in Jia and Zhou (2021). Finally, we conduct simulation experiments to compare the performance of our algorithms with those of PG-based algorithms in Jia and Zhou (2021) and time-discretized conventional Q-learning algorithms.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.00713&r=
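    The first-order relation behind the "(little) q-function" can be stated schematically; the display below paraphrases the abstract and is not the paper's exact notation:
```latex
% Schematic only: as the time step shrinks, the big Q-function collapses to
% the value function, and the "(little) q-function" is its first-order
% coefficient, playing the role of an instantaneous advantage rate.
Q_{\Delta t}(t, x, a) \;=\; V(t, x) \;+\; q(t, x, a)\,\Delta t \;+\; o(\Delta t),
\qquad \Delta t \to 0 .
```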
  25. By: Eleanor Loh; Jalaj Khandelwal; Brian Regan; Duncan A. Little
    Abstract: Managing discount promotional events ("markdown") is a significant part of running an e-commerce business, and inefficiencies here can significantly hamper a retailer's profitability. Traditional approaches for tackling this problem rely heavily on price elasticity modelling. However, the partial-information nature of price elasticity modelling, together with the non-negotiable responsibility for protecting profitability, means that machine learning practitioners must often go to great lengths to define strategies for measuring offline model quality. In the face of this, many retailers fall back on rule-based methods, thus forgoing significant gains in profitability that can be captured by machine learning. In this paper, we introduce two novel end-to-end markdown management systems for optimising markdown at different stages of a retailer's journey. The first system, "Ithax", enacts a rational supply-side pricing strategy without demand estimation, and can be usefully deployed as a "cold start" solution to collect markdown data while maintaining revenue control. The second system, "Promotheus", presents a full framework for markdown optimization with price elasticity. We describe in detail the specific modelling and validation procedures that, within our experience, have been crucial to building a system that performs robustly in the real world. Both markdown systems achieve superior profitability compared to decisions made by our experienced operations teams in a controlled online test, with improvements of 86% (Promotheus) and 79% (Ithax) relative to manual strategies. These systems have been deployed to manage markdown at ASOS.com, and both systems can be fruitfully deployed for price optimization across a wide variety of retail e-commerce settings.
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2207.01137&r=
  26. By: Andrew Binning (The Treasury)
    Abstract: Recent experience with interest rates hitting the effective lower bound and agents facing binding borrowing constraints has emphasised the importance of understanding the behaviour of an economy in which some variables may be restricted at times. The extended path algorithm is a commonly used and fairly general method for solving dynamic nonlinear models with rational expectations. This algorithm can be used for a wide range of cases, including for models with occasionally binding constraints, or for forecasting with models in which some variables must satisfy a certain path. In this paper I propose computational improvements to the algorithm that speed up the calculations via vectorisations of the Jacobian matrix and residual equations. I illustrate the advantages of the method with a number of policy relevant applications: conditional forecasting with both exactly identified and underidentified shocks, occasionally binding constraints on interest rates, anticipated shocks, calendar-based forward guidance, optimal monetary policy with a binding constraint and transition paths.
    Keywords: interest rates; monetary policy; shocks; Keynesian; stochastic
    JEL: C53 C61 C63 E37 E47
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:nzt:nztwps:22/02&r=
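    A sketch of the core computation: stack the equilibrium conditions over the horizon with future shocks set to zero and solve the whole path with a Newton-type solver on vectorised residuals; the one-equation toy model below stands in for a DSGE system (and the paper works in Matlab, not Python):
```python
import numpy as np
from scipy.optimize import root

T = 100
x0, x_term = 1.0, 0.0          # initial condition and terminal steady state
e = np.zeros(T); e[0] = 0.5    # one current shock; future shocks are zero

def residuals(x):
    lag = np.concatenate(([x0], x[:-1]))       # x_{t-1}
    lead = np.concatenate((x[1:], [x_term]))   # x_{t+1}
    # One equation per period, evaluated in a single vectorised pass:
    return x - (0.8 * lag + 0.1 * lead - 0.05 * x**3 + e)

sol = root(residuals, np.zeros(T), method="hybr")
print("converged:", sol.success, "| x_1..x_5:", sol.x[:5].round(4))
```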
  27. By: Thibaut Plassot (Universidad Iberoamericana, Mexico City: Department of Economics); Isidro Soloaga (Universidad Iberoamericana, Mexico City: Department of Economics); Pedro J. Torres (Universidad Iberoamericana, Mexico City: Department of Economics)
    Abstract: This work presents the trend of inequality of opportunity (IOp) and total inequality in wealth in Mexico for the years 2006, 2011 and 2017, and provides estimations using both ex-ante and ex-post compensation criteria. We rely on a data-driven approach, using supervised machine learning models to run regression trees and random forests that consider individuals' circumstances and effort. We find an intensification of both total inequality and IOp between 2006 and 2011, as well as a reduction of both between 2011 and 2017, with absolute IOp slightly higher in 2017 than in 2006. From an ex-ante perspective, the share of IOp within total inequality slightly decreased, although from an ex-post perspective the share remains stable across time. The most important variable in determining IOp is household wealth at age 14, followed by father's and mother's education. Other variables, such as the ability of the parents to speak an indigenous language, proved to have had a lower impact over time.
    Keywords: Inequality Of Opportunity, Mexico, Shapley Decomposition, Random Forests
    JEL: C14 C81 D31 D63
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:inq:inqwps:ecineq2022-614&r=
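    A sketch of the ex-ante, data-driven step on synthetic data: predict wealth from circumstances with a random forest and read inequality of opportunity off the fitted values; the variables and functional forms are invented, not the paper's:
```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def gini(x):
    x = np.sort(x - x.min() + 1e-9)
    n = len(x)
    return float((2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum()))

rng = np.random.default_rng(0)
n = 5000
parental_wealth = rng.lognormal(0, 0.5, n)   # circumstance: wealth at age 14
parent_edu = rng.integers(0, 3, n)           # circumstance: parents' education
effort = rng.normal(0, 1, n)                 # effort: not a circumstance
wealth = parental_wealth * np.exp(0.3 * parent_edu + 0.5 * effort)

C = np.column_stack([parental_wealth, parent_edu])
fitted = RandomForestRegressor(n_estimators=200, random_state=0).fit(C, wealth).predict(C)

print("total inequality (Gini):", round(gini(wealth), 3))
print("ex-ante IOp (Gini of fitted values):", round(gini(fitted), 3))
```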
  28. By: Jeonggil Song
    Abstract: Using images containing information on wealth, this research investigates whether pictures can reliably predict the economic prosperity of households. Without the wealth surveys and hand-crafted standards of wealth quality that the traditional wealth-based approach relies on, this novel approach uses only images posted on Dollar Street as input data on household wealth across 66 countries and predicts the consumption or income level of each household using the Convolutional Neural Network (CNN) method. The best result predicts the log of the consumption level with a root mean squared error of 0.66 and an R-squared of 0.80 in the CNN regression problem. In addition, this simple model also performs well in classifying extreme poverty, with an accuracy of 0.87 and an F-beta score of 0.86. Since the model shows higher performance in extreme poverty classification when different poverty-line thresholds are applied to countries according to their income group, this suggests that the World Bank's decision to define poverty lines differently by income group was valid.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2206.14810&r=
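    A sketch of the CNN regression step, with random tensors standing in for the Dollar Street photographs; the architecture is illustrative, not the paper's:
```python
import torch

torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1), torch.nn.ReLU(), torch.nn.MaxPool2d(2),
    torch.nn.Conv2d(16, 32, 3, padding=1), torch.nn.ReLU(), torch.nn.MaxPool2d(2),
    torch.nn.Flatten(),
    torch.nn.Linear(32 * 16 * 16, 1),    # 64x64 input -> 16x16 after two pools
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

images = torch.randn(64, 3, 64, 64)      # stand-in household photographs
log_consumption = torch.randn(64, 1)     # stand-in regression target

for epoch in range(20):
    opt.zero_grad()
    loss = loss_fn(model(images), log_consumption)
    loss.backward(); opt.step()
print("final MSE:", loss.item())
```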
  29. By: Nelson P. Rayl; Nitish R. Sinha
    Abstract: Recent modeling developments have created tradeoffs between attribution-based models, models that rely on causal relationships, and “pure prediction models†such as neural networks. While forecasters have historically favored one technology or the other based on comfort or loyalty to a particular paradigm, in domains with many observations and predictors such as textual analysis, the tradeoffs between attribution and prediction have become too large to ignore. We document these tradeoffs in the context of relabeling 27 million Thomson Reuters news articles published between 1996 and 2021 as debt-related or non-debt related. Articles in our dataset were labeled by journalists at the time of publication, but these labels may be inconsistent as labeling standards and the relation between text and label has changed over time. We propose a method for identifying and correcting inconsistent labeling that combines attribution and pure prediction methods and is applicable to any domain with human-labeled data. Implementing our proposed labeling solution returns a debt-related news dataset with 54% more observations than if the original journalist labels had been used and 31% more observation than if our solution had been implemented using attribution-based methods only.
    Keywords: News; Text Analysis; Debt; Labeling; Supervised Learning; DMR
    JEL: C40 C45 C55
    Date: 2022–07–01
    URL: http://d.repec.org/n?u=RePEc:fip:fedgfe:2022-42&r=
  30. By: Alcantara Mata, Antonio; Ruiz Mora, Carlos
    Abstract: When dealing with real-world optimization problems, decision-makers usually face high levels of uncertainty associated with partial information, unknown parameters, or complex relationships between these and the problem decision variables. In this work, we develop a novel Chance Constraint Learning (CCL) methodology, with a focus on mixed-integer linear optimization problems, which combines ideas from the chance constraint and constraint learning literatures. Chance constraints set a probabilistic confidence level for a single constraint or a set of constraints to be fulfilled, whereas the constraint learning methodology aims to model the functional relationship between the problem variables through predictive models. One of the main issues when establishing a learned constraint arises when we need to set further bounds on its response variable: the fulfillment of these is directly related to the accuracy of the predictive model and its probabilistic behaviour. In this sense, CCL makes use of linearizable machine learning models to estimate conditional quantiles of the learned variables, providing a data-driven solution for chance constraints. Open-access software has been developed for use by practitioners. Furthermore, the benefits of CCL have been tested in two real-world case studies, proving how robustness is added to optimal solutions when probabilistic bounds are set for learned constraints.
    Keywords: Chance Constraint; Constraint Learning; Data-Driven Optimization; Quantile Estimation; Machine Learning
    Date: 2022–07–07
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:35425&r=
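    A sketch of the quantile step: a linear quantile model estimates a 95% conditional quantile of the learned response and, being linear in x, its prediction can be written directly into a mixed-integer constraint; the data and model choice below are illustrative:
```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (2000, 2))                 # samples of decision variables
y = 2 * X[:, 0] + X[:, 1] + rng.normal(0, 0.3 * (1 + X[:, 0]), 2000)

q95 = QuantileRegressor(quantile=0.95, alpha=0.0).fit(X, y)

# P(y <= bound | x) >= 0.95 is enforced by bounding the predicted quantile.
x_new = np.array([[0.5, 0.5]])
print("95% conditional quantile at x_new:", q95.predict(x_new))
print("linear terms for the MILP constraint:", q95.coef_, q95.intercept_)
```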
  31. By: Rasoul Jamshidi; Mohammad Ebrahim Sadeghi
    Abstract: Purpose: In addition to playing an important role in creating economic security and developing investment, insurance companies are themselves investors. The insurance industry, as one of the country's financial institutions, has a special place in the investment process, and appropriate investment policies in the industry deserve particular attention so that its efficiency in allocating the available budget stimulates other economic sectors. This study seeks to model investment in the dynamic network performance of insurance companies. Methodology: In this paper, a new investment model is designed to examine the dynamic network performance of insurance companies in Iran. The model is implemented using GAMS software and its outputs are analyzed with regression methods. The required information was collected from the statistics of Iranian insurance companies between 1393 and 1398 (Iranian calendar). Findings: Of the 15 companies evaluated, 6 had unit efficiency and were identified as efficient. The average efficiency of the insurance companies is 0.78, with a standard deviation of 0.2. The results show that the increase in the value of investments is due to a large reduction in costs, and that in terms of capital and net profit the companies show clear and strong potential. Originality/Value: In this paper, investment modeling is performed to examine the dynamic network performance of insurance companies in Iran.
    Date: 2022–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2206.11850&r=
  32. By: Minesso, Massimo Ferrari; Pagliari, Maria Sole
    Abstract: This paper presents DSGE Nash, a toolkit to solve for pure-strategy Nash equilibria of global games in macro models. Although primarily designed to solve for Nash equilibria in DSGE models, the toolkit encompasses a broad range of options, including solutions up to the third order, multiple players/strategies, user-defined objective functions, and the possibility of matching empirical moments and IRFs. When only one player is selected, the problem is re-framed as a standard optimal policy problem. We apply the algorithm to an open-economy model where a commodity-importing country and a monopolistic commodity producer compete in the commodities market with limits to entry. If the commodity price becomes relevant in production, the central bank in the commodity-importing economy deviates from the first-best policy to act strategically. In particular, the monetary authority tolerates relatively higher commodity price volatility to ease barriers to entry in commodity production and to limit the market power of the dominant exporter.
    Keywords: computational economics, DSGE model, optimal policies
    JEL: C63 E32 E61
    Date: 2022–07
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20222678&r=
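    A sketch of the generic fixed-point idea on toy payoffs: iterate best responses on discrete strategy grids until neither player wants to deviate, yielding a pure-strategy Nash equilibrium; the toolkit's actual solution methods are far richer:
```python
import numpy as np

grid = np.linspace(0, 1, 101)                   # each player's strategy grid

def payoff_a(a, b):                             # toy stand-in objectives
    return -(a - 0.5 * b) ** 2

def payoff_b(a, b):
    return -(b - 0.6 * a - 0.2) ** 2

a, b = 0.0, 0.0
for it in range(100):
    a_new = grid[np.argmax(payoff_a(grid, b))]  # best response of player A
    b_new = grid[np.argmax(payoff_b(a_new, grid))]  # best response of player B
    if (a_new, b_new) == (a, b):                # fixed point: no deviation pays
        break
    a, b = a_new, b_new
print("pure-strategy Nash:", a, b)              # analytic solution ~ (1/7, 2/7)
```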

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.