nep-cmp New Economics Papers
on Computational Economics
Issue of 2021‒05‒10
nineteen papers chosen by



  1. Learning Bermudans By Riccardo Aiolfi; Nicola Moreni; Marco Bianchetti; Marco Scaringi; Filippo Fogliani
  2. Detecting bid-rigging coalitions in different countries and auction formats By David Imhof; Hannes Wallimann
  3. Automatic Debiased Machine Learning via Neural Nets for Generalized Linear Regression By Victor Chernozhukov; Whitney K. Newey; Victor Quintas-Martinez; Vasilis Syrgkanis
  4. Using machine learning and qualitative interviews to design a five-question women's agency index By Biradavolu, Monica; Cooper, Jan; Jayachandran, Seema
  5. Stock Price Forecasting in Presence of Covid-19 Pandemic and Evaluating Performances of Machine Learning Models for Time-Series Forecasting By Navid Mottaghi; Sara Farhangdoost
  6. Business analytics meets artificial intelligence: Assessing the demand effects of discounts on Swiss train tickets By Martin Huber; Jonas Meier; Hannes Wallimann
  7. Household Savings and Monetary Policy under Individual and Aggregate Stochastic Volatility By Gorodnichenko, Yuriy; Maliar, Lilia; Maliar, Serguei; Naubert, Christopher
  8. Trade sentiment and the stock market: new evidence based on big data textual analysis of Chinese media By Amstad, Marlene; Gambacorta, Leonardo; He, Chao; Xia, Fan Dora
  9. Can Machine Learning Catch the COVID-19 Recession? By Goulet Coulombe, Philippe; Marcellino, Massimiliano; Stevanovic, Dalibor
  10. Machine Collaboration By Qingfeng Liu; Yang Feng
  11. Artificial Intelligence, Globalization, and Strategies for Economic Development By Korinek, Anton; Stiglitz, Joseph E
  12. Optimal Targeting in Fundraising: A Machine-Learning Approach By Tobias Cagala; Ulrich Glogowsky; Johannes Rincke; Anthony Strittmatter
  13. MRC-LSTM: A Hybrid Approach of Multi-scale Residual CNN and LSTM to Predict Bitcoin Price By Qiutong Guo; Shun Lei; Qing Ye; Zhiyang Fang
  14. The Gender Pay Gap Revisited with Big Data: Do Methodological Choices Matter? By STRITTMATTER, Anthony; Wunsch, Conny
  15. Human Biographical Record (HBR) By Nekoei, Arash; Sinn, Fabian
  16. Deep Reinforcement Trading with Predictable Returns By Alessio Brini; Daniele Tantari
  17. Epidemics in modern economies By Heinrich, Torsten
  18. How effective is carbon pricing? A machine learning approach to policy evaluation By Abrell, Jan; Kosch, Mirjam; Rausch, Sebastian
  19. Artificial intelligence and Pricing: The Impact of Algorithm Design By Asker, John; Fershtman, Chaim; Pakes, Ariel

  1. By: Riccardo Aiolfi; Nicola Moreni; Marco Bianchetti; Marco Scaringi; Filippo Fogliani
    Abstract: American and Bermudan-type financial instruments are often priced with specific Monte Carlo techniques whose efficiency critically depends on the effective dimensionality of the problem and the available computational power. In our work we focus on Bermudan Swaptions, well-known interest rate derivatives embedded in callable debt instruments or traded in the OTC market for hedging or speculation purposes, and we adopt an original pricing approach based on Supervised Learning (SL) algorithms. In particular, we link the price of a Bermudan Swaption to its natural hedges, i.e. the underlying European Swaptions, and other sound financial quantities through SL non-parametric regressions. We test different algorithms, from linear models to decision tree-based models and Artificial Neural Networks (ANN), analyzing their predictive performance. All the SL algorithms prove reliable and fast, allowing us to overcome the computational bottleneck of standard Monte Carlo simulations; the best-performing algorithms for our problem are Ridge regression, ANN, and Gradient Boosted Regression Trees. Moreover, using feature importance techniques, we are able to rank the most important driving factors of a Bermudan Swaption price, confirming that the value of the maximum underlying European Swaption is the prevailing feature.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.00655&r=
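    For readers who want the gist in code: a minimal, hypothetical sketch of the supervised-learning idea, regressing a (here synthetic) Bermudan swaption price on the values of its underlying European swaptions with Ridge and gradient boosting. Feature names and data below are placeholders, not the authors' setup.

# Hedged sketch (not the authors' code): regress Bermudan swaption prices on
# the values of the underlying European swaptions plus other features, as the
# abstract describes. Data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: prices of the underlying European swaptions and the
# maximum among them (the abstract's dominant feature), plus a rate-level proxy.
eur_swaptions = rng.lognormal(mean=-3.0, sigma=0.5, size=(n, 4))
max_eur = eur_swaptions.max(axis=1, keepdims=True)
rate_level = rng.normal(0.01, 0.005, size=(n, 1))
X = np.hstack([eur_swaptions, max_eur, rate_level])
# Synthetic "Monte Carlo" Bermudan price: roughly the max European value plus noise.
y = 1.05 * max_eur.ravel() + 0.1 * eur_swaptions.sum(axis=1) + rng.normal(0, 0.002, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (Ridge(alpha=1.0), GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "R^2 on held-out prices:", round(model.score(X_te, y_te), 3))
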
  2. By: David Imhof; Hannes Wallimann
    Abstract: We propose an original application of screening methods using machine learning to detect collusive groups of firms in procurement auctions. As a methodological innovation, we calculate coalition-based screens by forming coalitions of bidders in tenders to flag bid-rigging cartels. Using Swiss, Japanese and Italian procurement data, we investigate the effectiveness of our method in different countries and auction settings, in our cases first-price sealed-bid and mean-price sealed-bid auctions. We correctly classify 90% of the collusive and competitive coalitions when applying four machine learning algorithms: lasso, support vector machine, random forest, and the super learner ensemble method. Finally, we find that the coalition-based screens for the variance and the uniformity of bids are in all cases the most important predictors according to the random forest.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.00337&r=
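    A rough sketch of the coalition-screen idea under stated assumptions: compute simple variance-type screens for every coalition of bidders in a tender and classify them with a random forest. The screen definitions and data below are illustrative guesses, not the paper's.

# Hedged sketch (not the authors' code): compute simple screens on coalitions of
# bids per tender and feed them to a random forest classifier. The exact screen
# definitions used in the paper may differ; the coefficient of variation below is
# just one common variance-based screen.
import numpy as np
from itertools import combinations
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def coalition_screens(bids, size=3):
    """Return variance-type screens for all coalitions of `size` bidders in a tender."""
    rows = []
    for coalition in combinations(range(len(bids)), size):
        b = bids[list(coalition)]
        cv = b.std() / b.mean()                      # variance screen
        spread = (b.max() - b.min()) / b.mean()      # crude uniformity/spread screen
        rows.append([cv, spread])
    return np.array(rows)

# Synthetic tenders: collusive coalitions have artificially compressed bids.
X, y = [], []
for _ in range(300):
    collusive = rng.random() < 0.5
    sigma = 0.01 if collusive else 0.08
    bids = rng.normal(100, 100 * sigma, size=5)
    for row in coalition_screens(bids):
        X.append(row)
        y.append(int(collusive))

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print("feature importances (cv, spread):", clf.feature_importances_.round(2))
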
  3. By: Victor Chernozhukov; Whitney K. Newey; Victor Quintas-Martinez; Vasilis Syrgkanis
    Abstract: We give debiased machine learners of parameters of interest that depend on generalized linear regressions, i.e. regressions that make a residual orthogonal to the regressors. The parameters of interest include many causal and policy effects. We give neural net learners of the bias correction that are automatic in depending only on the object of interest and the regression residual. Convergence rates are given for these neural nets and for more general learners of the bias correction. We also give conditions for asymptotic normality and consistent asymptotic variance estimation of the learner of the object of interest.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14737&r=
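    For orientation only, a generic form of the automatic debiased estimator from this literature (my paraphrase, not an equation quoted from the paper):

        \hat{\theta} = \frac{1}{n}\sum_{i=1}^{n}\left[ m(W_i,\hat{\gamma}) + \hat{\alpha}(X_i)\,\big(Y_i - \hat{\gamma}(X_i)\big) \right],

    where m(W, \gamma) identifies the parameter of interest, \hat{\gamma} is the (generalized linear) regression, and \hat{\alpha} is the bias-correction term (Riesz representer) that the authors learn with a neural net; cross-fitting is typically used so that the same observations do not both train and evaluate the learners.
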
  4. By: Biradavolu, Monica; Cooper, Jan; Jayachandran, Seema
    Abstract: We propose a new method to design a short survey measure of a complex concept such as women's agency. The approach combines mixed-methods data collection and machine learning. We select the best survey questions based on how strongly correlated they are with a "gold standard" measure of the concept derived from qualitative interviews. In our application, we measure agency for 209 women in Haryana, India, first, through a semi-structured interview and, second, through a large set of close-ended questions. We use qualitative coding methods to score each woman's agency based on the interview, which we treat as her true agency. To identify the close-ended questions most predictive of the "truth," we apply statistical algorithms that build on LASSO and random forest but constrain how many variables are selected for the model (five in our case). The resulting five-question index is as strongly correlated with the coded qualitative interview as is an index that uses all of the candidate questions. This approach of selecting survey questions based on their statistical correspondence to coded qualitative interviews could be used to design short survey modules for many other latent constructs.
    Keywords: feature selection; psychometrics; Survey Design; Women's Empowerment
    JEL: C83 D13 J16 O12
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15961&r=
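    A minimal sketch of the question-selection step, assuming a LASSO-based selector capped at five questions; the data, candidate questions, and cap implementation are synthetic placeholders rather than the authors' exact algorithm.

# Hedged sketch (not the authors' pipeline): pick the five closed-ended survey
# questions whose responses best predict a "gold standard" agency score coded
# from qualitative interviews, using a LASSO-based selector. Data are synthetic.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(2)
n_women, n_questions = 209, 40
answers = rng.integers(1, 6, size=(n_women, n_questions)).astype(float)   # Likert-type items
true_weights = np.zeros(n_questions)
true_weights[[3, 7, 12, 20, 33]] = [0.8, 0.6, 0.5, 0.4, 0.3]               # hypothetical "true" drivers
coded_agency = answers @ true_weights + rng.normal(0, 1.0, n_women)        # interview-coded score

# Keep at most five questions, ranked by LASSO coefficient magnitude.
selector = SelectFromModel(LassoCV(cv=5, random_state=0),
                           max_features=5, threshold=-np.inf).fit(answers, coded_agency)
chosen = np.flatnonzero(selector.get_support())
print("selected questions:", chosen)

# Correlation of the short index with the coded qualitative score.
short_index = LinearRegression().fit(answers[:, chosen], coded_agency).predict(answers[:, chosen])
print("correlation with coded score:", round(np.corrcoef(short_index, coded_agency)[0, 1], 2))
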
  5. By: Navid Mottaghi; Sara Farhangdoost
    Abstract: With the heightened volatility in stock prices during the Covid-19 pandemic, the need for price forecasting has become more critical. We investigated the forecast performance of four models, including Long Short-Term Memory, XGBoost, Autoregression, and Last Value, on the stock prices of Facebook, Amazon, Tesla, Google, and Apple during the COVID-19 pandemic to understand the models' accuracy and predictability in this highly volatile period. To train the models, the data of all stocks are split into train and test datasets. The test dataset spans January 2020 to April 2021, which covers the COVID-19 pandemic period. The results show that the Autoregression and Last Value models have higher accuracy in predicting the stock prices because of the strong correlation between the previous day's and the next day's price. Additionally, the results suggest that the machine learning models (Long Short-Term Memory and XGBoost) do not perform as well as the Autoregression models when the market experiences high volatility.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.02785&r=
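    Two of the simple baselines named in the abstract, sketched on a synthetic price series (the paper's split dates, tickers, and metrics are not reproduced here):

# Hedged sketch (not the authors' code): a last-value (naive) forecast and a
# simple AR(1) regression on the previous day's price, evaluated on a synthetic
# random-walk price series.
import numpy as np

rng = np.random.default_rng(3)
prices = 100 + np.cumsum(rng.normal(0, 2, size=500))   # synthetic daily closes
train, test = prices[:400], prices[400:]

# Last-value model: tomorrow's forecast is today's price.
naive_pred = test[:-1]

# AR(1): fit price_t = a + b * price_{t-1} on the training window by OLS.
b, a = np.polyfit(train[:-1], train[1:], deg=1)
ar_pred = a + b * test[:-1]

actual = test[1:]
rmse = lambda p: np.sqrt(np.mean((actual - p) ** 2))
print("RMSE last value:", round(rmse(naive_pred), 3))
print("RMSE AR(1):     ", round(rmse(ar_pred), 3))
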
  6. By: Martin Huber; Jonas Meier; Hannes Wallimann
    Abstract: We assess the demand effects of discounts on train tickets issued by the Swiss Federal Railways, the so-called 'supersaver tickets', based on machine learning, a subfield of artificial intelligence. Considering a survey-based sample of buyers of supersaver tickets, we investigate which customer- or trip-related characteristics (including the discount rate) predict buying behavior, namely: booking a trip otherwise not realized by train, buying a first- rather than second-class ticket, or rescheduling a trip (e.g. away from rush hours) when being offered a supersaver ticket. Predictive machine learning suggests that a customer's age, demand-related information for a specific connection (like departure time and utilization), and the discount level permit forecasting buying behavior to a certain extent. Furthermore, we use causal machine learning to assess the impact of the discount rate on rescheduling a trip, which seems relevant in light of capacity constraints at rush hours. Assuming that (i) the discount rate is quasi-random conditional on our rich set of characteristics and (ii) the buying decision increases weakly monotonically in the discount rate, we identify the discount rate's effect among 'always buyers', who would have traveled even without a discount, based on our survey that asks about customer behavior in the absence of discounts. We find that on average, increasing the discount rate by one percentage point increases the share of rescheduled trips by 0.16 percentage points among always buyers. Investigating effect heterogeneity across observables suggests that the effects are higher for leisure travelers and during peak hours when controlling for several other characteristics.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.01426&r=
  7. By: Gorodnichenko, Yuriy; Maliar, Lilia; Maliar, Serguei; Naubert, Christopher
    Abstract: In this paper, we study household consumption-saving and portfolio choices in a heterogeneous-agent economy with sticky prices and time-varying total factor productivity and idiosyncratic stochastic volatility. Agents can save through liquid bonds and illiquid capital and shares. With rich heterogeneity at the household level, we are able to quantify the impact of uncertainty across the income and wealth distribution. Our results help us identify who wins and who loses during periods of heightened individual and aggregate uncertainty. To study the importance of heterogeneity for understanding the transmission of economic shocks, we use a deep learning algorithm. Our method preserves non-linearities, which is essential for understanding the pricing decisions for illiquid assets.
    Keywords: deep learning; HANK; Heterogeneous Agents; Machine Learning; neural network
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15614&r=
  8. By: Amstad, Marlene; Gambacorta, Leonardo; He, Chao; Xia, Fan Dora
    Abstract: Trade tensions between China and the US have played an important role in swinging global stock markets, but their effects are difficult to quantify. We develop a novel trade sentiment index (TSI) based on textual analysis and machine learning applied to a big data pool that assesses the positive or negative tone of Chinese media coverage, and we evaluate its capacity to explain the behaviour of 60 global equity markets. We find that the TSI contributes around 10% of the model's capacity to explain stock price variability from January 2018 to June 2019 in countries that are more exposed to the China-US value chain. Most of the contribution is given by the tone extracted from social media (9%), while that obtained from traditional media explains only a modest part of stock price variability (1%). No equity market benefits from the China-US trade war, and Asian markets tend to be more negatively affected. In particular, we find that sectors most affected by tariffs, such as those related to information technology, are particularly sensitive to the tone of trade tensions.
    Keywords: Big Data; Machine Learning; neural network; sentiment; Stock returns; Trade
    JEL: C45 C55 D80 F13 F14 G15
    Date: 2021–01
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15682&r=
  9. By: Goulet Coulombe, Philippe; Marcellino, Massimiliano; Stevanovic, Dalibor
    Abstract: Based on evidence gathered from a newly built large macroeconomic data set for the UK, labeled UK-MD and comparable to similar datasets for the US and Canada, the most promising avenue for forecasting during the pandemic appears to be allowing for general forms of nonlinearity by using machine learning (ML) methods. But not all nonlinear ML methods are alike. For instance, some do not allow extrapolation (like regular trees and forests) and some do (when complemented with linear dynamic components). This and other crucial aspects of ML-based forecasting in unprecedented times are studied in an extensive pseudo-out-of-sample exercise.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15867&r=
  10. By: Qingfeng Liu; Yang Feng
    Abstract: We propose a new ensemble framework for supervised learning, named machine collaboration (MaC), based on a collection of base machines for prediction tasks. Unlike bagging/stacking (a parallel & independent framework) and boosting (a sequential & top-down framework), MaC is a circular & interactive learning framework. The circular & interactive feature helps the base machines transfer information circularly and update their own structures and parameters accordingly. The theoretical result on the risk bound of the estimator based on MaC shows that the circular & interactive feature can help MaC reduce risk via a parsimonious ensemble. We conduct extensive experiments on simulated data and 119 benchmark real data sets. The results show that in most cases, MaC performs much better than several state-of-the-art methods, including CART, neural networks, stacking, and boosting.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.02569&r=
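    A loose illustration of what a circular & interactive ensemble can look like, in which each base machine is repeatedly refit on the part of the target its collaborators have not yet explained. This is a guess at the spirit of MaC, not the paper's algorithm.

# Hedged sketch: one possible reading of a "circular & interactive" ensemble.
# Each base machine is refit, in turn and over several rounds, on the residual
# left by the other machines' predictions. NOT the paper's exact MaC procedure.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import Ridge
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(600, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 600)

machines = [DecisionTreeRegressor(max_depth=4, random_state=0),
            Ridge(alpha=1.0),
            MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)]
preds = [np.zeros_like(y) for _ in machines]

for round_ in range(3):                      # circulate information a few times
    for k, m in enumerate(machines):
        partial_residual = y - sum(p for j, p in enumerate(preds) if j != k)
        m.fit(X, partial_residual)           # update this machine given the others
        preds[k] = m.predict(X)

ensemble = sum(preds)
print("in-sample RMSE:", round(float(np.sqrt(np.mean((y - ensemble) ** 2))), 3))
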
  11. By: Korinek, Anton; Stiglitz, Joseph E
    Abstract: Progress in artificial intelligence and related forms of automation technologies threatens to reverse the gains that developing countries and emerging markets have experienced from integrating into the world economy over the past half century, aggravating poverty and inequality. The new technologies have the tendency to be labor-saving, resource-saving, and to give rise to winner-takes-all dynamics that advantage developed countries. We analyze the economic forces behind these developments and describe economic policies that would mitigate the adverse effects on developing and emerging economies while leveraging the potential gains from technological advances. We also describe reforms to our global system of economic governance that would share the benefits of AI more widely with developing countries.
    Keywords: artificial intelligence; inequality; labor-saving progress; terms-of-trade losses
    JEL: D63 F63 O25 O32
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15772&r=
  12. By: Tobias Cagala; Ulrich Glogowsky; Johannes Rincke; Anthony Strittmatter
    Abstract: Ineffective fundraising lowers the resources charities can use for goods provision. We combine a field experiment and a causal machine-learning approach to increase a charity's fundraising effectiveness. The approach optimally targets fundraising to individuals whose expected donations exceed solicitation costs. Among past donors, optimal targeting substantially increases donations (net of fundraising costs) relative to benchmarks that target everybody or no one. By contrast, individuals who were previously asked but never donated should not be targeted. Further, the charity requires only publicly available geospatial information to realize the gains from targeting. We conclude that charities not engaging in optimal targeting waste resources.
    Keywords: Fundraising; charitable giving; gift exchange; targeting; optimal policy learning; individualized treatment rules
    JEL: C93 D64 H41 L31 C21
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:jku:econwp:2021-08&r=
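    The targeting rule itself is simple to state in code: solicit only those whose predicted donation exceeds the solicitation cost. The sketch below uses synthetic data and an arbitrary cost, not the paper's estimates.

# Hedged sketch of the targeting rule described in the abstract: solicit an
# individual only if the predicted donation exceeds the solicitation cost.
# The model, features, and cost figure below are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n = 2000
past_donor = rng.integers(0, 2, n)                    # donated before?
distance_km = rng.exponential(5, n)                   # crude geospatial feature
X = np.column_stack([past_donor, distance_km])
donation = past_donor * rng.gamma(2, 6, n) * np.exp(-0.05 * distance_km)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, donation)
solicitation_cost = 2.0                                # hypothetical cost per letter
target = model.predict(X) > solicitation_cost
net_gain = donation[target].sum() - solicitation_cost * target.sum()
print(f"targeted {target.mean():.0%} of individuals, net revenue {net_gain:.0f}")
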
  13. By: Qiutong Guo; Shun Lei; Qing Ye; Zhiyang Fang
    Abstract: Bitcoin, one of the major cryptocurrencies, presents great opportunities and challenges, with its tremendous potential returns accompanied by high risks. The high volatility of Bitcoin and the complex factors affecting it make the study of effective price forecasting methods of great practical importance to financial investors and researchers worldwide. In this paper, we propose a novel approach called MRC-LSTM, which combines a Multi-scale Residual Convolutional neural network (MRC) and a Long Short-Term Memory (LSTM) network to predict the Bitcoin closing price. Specifically, the multi-scale residual module is based on one-dimensional convolution and is not only capable of adaptively detecting features at different time scales in multivariate time series, but also enables the fusion of these features. LSTMs can learn long-term dependencies in series and are widely used in financial time series forecasting. By combining these two methods, the model is able to obtain highly expressive features and efficiently learn the trends and interactions of multivariate time series. In the study, the impact of external factors such as macroeconomic variables and investor attention on the Bitcoin price is considered in addition to the trading information of the Bitcoin market. We perform experiments to predict the daily closing price of Bitcoin (USD), and the experimental results show that MRC-LSTM significantly outperforms a variety of other network structures. Furthermore, we conduct additional experiments on two other cryptocurrencies, Ethereum and Litecoin, to further confirm the effectiveness of MRC-LSTM in short-term forecasting for multivariate time series of cryptocurrencies.
    Date: 2021–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2105.00707&r=
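    A hypothetical Keras sketch of an MRC-LSTM-style network: parallel 1-D convolutions at several kernel sizes, a residual connection, and an LSTM head. Layer widths and other details are guesses for illustration, not the architecture reported in the paper.

# Hedged sketch of an MRC-LSTM-style network: multi-scale 1-D convolutions,
# a residual connection, and an LSTM on top. Sizes are illustrative guesses.
import numpy as np
from tensorflow.keras import layers, Model

timesteps, n_features = 30, 8                       # e.g. 30 days of market + macro inputs
inputs = layers.Input(shape=(timesteps, n_features))

# Multi-scale residual convolution block.
branches = [layers.Conv1D(16, kernel_size=k, padding="same", activation="relu")(inputs)
            for k in (3, 5, 7)]
merged = layers.Concatenate()(branches)             # fuse features from all scales
shortcut = layers.Conv1D(48, kernel_size=1, padding="same")(inputs)
mrc_out = layers.Add()([merged, shortcut])          # residual connection

# LSTM learns longer-range dependencies; a dense head predicts the next close.
x = layers.LSTM(32)(mrc_out)
outputs = layers.Dense(1)(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(256, timesteps, n_features), np.random.rand(256, 1),
          epochs=1, batch_size=32, verbose=0)       # synthetic data, just to show the fit call
print(model.count_params(), "trainable parameters")
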
  14. By: STRITTMATTER, Anthony; Wunsch, Conny
    Abstract: The vast majority of existing studies that estimate the average unexplained gender pay gap use unnecessarily restrictive linear versions of the Blinder-Oaxaca decomposition. Using a notably rich and large data set of 1.7 million employees in Switzerland, we investigate how the methodological improvements made possible by such big data affect estimates of the unexplained gender pay gap. We study the sensitivity of the estimates with regard to i) the availability of observationally comparable men and women, ii) model flexibility when controlling for wage determinants, and iii) the choice of different parametric and semi-parametric estimators, including variants that make use of machine learning methods. We find that these three factors matter greatly. Blinder-Oaxaca estimates of the unexplained gender pay gap decline by up to 39% when we enforce comparability between men and women and use a more flexible specification of the wage equation. Semi-parametric matching yields estimates that, when compared with the Blinder-Oaxaca estimates, are up to 50% smaller and also less sensitive to the way wage determinants are included.
    Keywords: Common Support; Gender Inequality; Gender pay gap; Machine Learning; Matching estimator; Model specification
    JEL: C21 J31
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15840&r=
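    For reference, the standard (linear, twofold) Blinder-Oaxaca decomposition that the paper relaxes, written here with the male coefficients as the reference group (other reference choices are common):

        \bar{w}_m - \bar{w}_f = \underbrace{(\bar{X}_m - \bar{X}_f)'\hat{\beta}_m}_{\text{explained}} + \underbrace{\bar{X}_f'(\hat{\beta}_m - \hat{\beta}_f)}_{\text{unexplained}},

    where \bar{w}_g and \bar{X}_g are group means of (log) wages and characteristics, and \hat{\beta}_g are group-specific wage-equation coefficients.
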
  15. By: Nekoei, Arash; Sinn, Fabian
    Abstract: We construct a new dataset of more than seven million notable individuals across recorded human history, the Human Biographical Record (HBR). With Wikidata as the backbone, HBR adds further information from various digital sources, including Wikipedia in all 292 languages. Machine learning and text analysis combine the sources and extract information on date and place of birth and death, gender, occupation, education, and family background. This paper discusses HBR's construction and its completeness, coverage, and accuracy, as well as its strengths and weaknesses relative to prior datasets. HBR is the first part of a larger project, the Human Record Project, which we briefly introduce.
    Keywords: Big data; economic history; Machine Learning
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15825&r=
  16. By: Alessio Brini; Daniele Tantari
    Abstract: Classical portfolio optimization often requires forecasting asset returns and their corresponding variances despite the low signal-to-noise ratio of financial markets. Deep reinforcement learning (DRL) offers a framework for optimizing sequential trader decisions through an objective that represents the trader's reward function penalized by risk and transaction costs. We investigate the performance of model-free DRL traders in a market environment with frictions and different mean-reverting factors driving the dynamics of the returns. Since this framework admits an exact dynamic programming solution, we can assess the limits and capabilities of different value-based algorithms to retrieve meaningful trading signals in a data-driven manner and to reach the benchmark performance. Moreover, extensive simulations show that this approach guarantees flexibility, outperforming the benchmark when the price dynamics are misspecified and some original assumptions on the market environment are violated by the presence of extreme events and volatility clustering.
    Date: 2021–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2104.14683&r=
  17. By: Heinrich, Torsten
    Abstract: How are economies in a modern age impacted by epidemics? In what ways is economic life disrupted? How can pandemics be modeled? What can be done to mitigate and manage the danger? Does the threat of pandemics increase or decrease in the modern world? The Covid-19 pandemic has demonstrated the importance of these questions and the potential of complex systems science to provide answers. This article offers a broad overview of the history of pandemics, of established facts, and of models of infection diffusion, mitigation strategies, and economic impact. The example of the Covid-19 pandemic is used to illustrate the theoretical aspects, but the article also includes considerations concerning other historic epidemics and the danger of more infectious and less controllable outbreaks in the future.
    Keywords: epidemics and economics; public health; complex systems; SIR models; Agent-based models; mean-field models; Covid-19
    JEL: C63 I10 N30
    Date: 2021–04–30
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:107578&r=
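    As a reminder of the SIR workhorse referenced in the abstract, a few lines of Euler-integrated dynamics with arbitrary example parameters (not Covid-19 estimates):

# Hedged illustration of the canonical SIR model mentioned in the abstract,
# integrated with a simple Euler scheme. Parameter values are arbitrary examples.
N = 1_000_000                     # population
beta, gamma = 0.30, 0.10          # transmission and recovery rates (R0 = beta/gamma = 3)
S, I, R = N - 10, 10, 0
dt = 1.0                          # one day

for day in range(200):
    new_inf = beta * S * I / N * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(f"after 200 days: {R / N:.0%} of the population has been infected and recovered")
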
  18. By: Abrell, Jan; Kosch, Mirjam; Rausch, Sebastian
    Abstract: While carbon taxes are generally seen as a rational policy response to climate change, knowledge about their performance from an ex-post perspective is still limited. This paper analyzes the emissions and cost impacts of the UK Carbon Price Support (CPS), a carbon tax levied on all fossil-fired power plants. To overcome the problem of a missing control group, we propose a policy evaluation approach which leverages economic theory and machine learning for counterfactual prediction. Our results indicate that in the period 2013-2016 the CPS lowered emissions by 6.2 percent at an average cost of €18 per ton. We find substantial temporal heterogeneity in tax-induced impacts which stems from variation in relative fuel prices. An important implication for climate policy is that in the short run a higher carbon tax does not necessarily lead to higher emissions reductions or higher costs.
    JEL: C54 Q48 Q52 Q58 L94
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:21039&r=
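    A stylized sketch of the counterfactual-prediction strategy: fit a model of emissions on pre-policy drivers, predict what emissions would have been post-policy, and take the gap to observed emissions as the policy effect. Everything below is synthetic and only illustrates the logic, not the paper's estimator or data.

# Hedged sketch of ML-based counterfactual prediction for policy evaluation.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
n_pre, n_post = 1000, 400

def drivers(n):
    demand = rng.normal(35, 5, n)                 # electricity demand (GW), synthetic
    gas_over_coal = rng.normal(1.0, 0.2, n)       # relative fuel price, synthetic
    return np.column_stack([demand, gas_over_coal])

X_pre, X_post = drivers(n_pre), drivers(n_post)
emis = lambda X: 0.4 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0, 0.5, len(X))
y_pre = emis(X_pre)
y_post_observed = emis(X_post) - 1.0              # the policy cuts emissions in this toy world

model = GradientBoostingRegressor(random_state=0).fit(X_pre, y_pre)
counterfactual = model.predict(X_post)            # emissions predicted absent the policy
effect = (y_post_observed - counterfactual).mean()
print(f"estimated policy effect on emissions: {effect:.2f} (synthetic units)")
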
  19. By: Asker, John; Fershtman, Chaim; Pakes, Ariel
    Abstract: The behavior of artificial intelligence algorithms (AIAs) is shaped by how they learn about their environment. We compare the prices generated by AIAs that use different learning protocols when there is market interaction. Asynchronous learning occurs when the AIA only learns about the return from the action it took. Synchronous learning occurs when the AIA conducts counterfactuals to learn about the returns it would have earned had it taken an alternative action. The two lead to markedly different market prices. When future profits are not given positive weight by the AIA, synchronous updating leads to competitive pricing, while asynchronous learning can lead to pricing close to monopoly levels. We investigate how this result varies when counterfactuals can only be calculated imperfectly and/or when the AIA places weight on future profits.
    Date: 2021–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:15880&r=
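    A toy contrast of the two protocols in a stripped-down repeated duopoly with myopic Q-values (no weight on future profits). The demand function, price grid, and learning parameters are arbitrary; this only illustrates asynchronous vs. synchronous updating, not the authors' environment.

# Hedged sketch: two price-setting learners that differ only in whether they
# update the value of the action taken (asynchronous) or of all actions via
# counterfactual profits (synchronous).
import numpy as np

rng = np.random.default_rng(7)
prices = np.linspace(0.5, 2.0, 7)                    # discrete price grid

def profit(p_own, p_rival):
    demand = max(0.0, 2.0 - p_own + 0.5 * p_rival)   # hypothetical linear demand
    return p_own * demand

def simulate(synchronous, periods=20000, eps=0.1, lr=0.1):
    Q = np.zeros((2, len(prices)))                   # one Q-vector per firm (single state)
    for _ in range(periods):
        acts = [np.argmax(Q[i]) if rng.random() > eps else rng.integers(len(prices))
                for i in range(2)]
        for i in range(2):
            rival = prices[acts[1 - i]]
            if synchronous:                          # counterfactual returns for every action
                Q[i] += lr * (np.array([profit(p, rival) for p in prices]) - Q[i])
            else:                                    # only the action actually taken
                a = acts[i]
                Q[i, a] += lr * (profit(prices[a], rival) - Q[i, a])
    return [float(prices[np.argmax(Q[i])]) for i in range(2)]

print("asynchronous learned prices:", simulate(False))
print("synchronous learned prices: ", simulate(True))
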

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.