nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒10‒21
sixteen papers chosen by



  1. Predicting Auction Price of Vehicle License Plate with Deep Residual Learning By Vinci Chow
  2. Branch-Price-and-Cut for the Soft-Clustered Capacitated Arc-Routing Problem By Stefan Irnich; Timo Hintsch; Lone Kiilerich
  3. An Inertial Newton Algorithm for Deep Learning By Bolte, Jérôme; Castera, Camille; Pauwels, Edouard; Févotte, Cédric
  4. Stock price formation: useful insights from a multi-agent reinforcement learning model By J. Lussange; S. Bourgeois-Gironde; S. Palminteri; B. Gutkin
  5. Weighted Monte Carlo with least squares and randomized extended Kaczmarz for option pricing By Damir Filipovi\'c; Kathrin Glau; Yuji Nakatsukasa; Francesco Statti
  6. Smart hedging against carbon leakage By Böhringer, Christoph; Rosendahl, Knut Einar; Briseid Storrøsten, Halvor
  7. Variation and adaptation: learning from success in patient safety-oriented simulation training By Dieckmann, Peter; Patterson, Mary; Lahlou, Saadi; Mesman, Jessica; Nyström, Patrik; Krage, Ralf
  8. Nowcasting and forecasting US recessions: Evidence from the Super Learner By Maas, Benedikt
  9. Conservative set valued fields, automatic differentiation, stochastic gradient methods and deep learning By Bolte, Jérôme; Pauwels, Edouard
  10. Principled estimation of regression discontinuity designs with covariates: a machine learning approach By Jason Anastasopoulos
  11. Does South African Affirmative Action Policy Reduce Poverty? A CGE Analysis By Helene Maisonnave; Bernard Decaluwé; Margaret Chitiga
  12. Predicting the Largest Revenue from Paint Product Sales Using the Monte Carlo Method By Geni, Bias Yulisa; Santony, Julius; Sumijan, Sumijan
  13. Incorporating Fine-grained Events in Stock Movement Prediction By Deli Chen; Yanyan Zou; Keiko Harimoto; Ruihan Bao; Xuancheng Ren; Xu Sun
  14. The Paradox of Big Data By Smith, Gary
  15. Residual Switching Network for Portfolio Optimization By Jifei Wang; Lingjing Wang
  16. Predicting Consumer Default: A Deep Learning Approach By Stefania Albanesi; Domonkos F. Vamossy

  1. By: Vinci Chow
    Abstract: Due to superstition, license plates with desirable combinations of characters are highly sought after in China, fetching prices that can reach into the millions in government-held auctions. Despite the high stakes involved, there has been essentially no attempt to provide price estimates for license plates. We present an end-to-end neural network model that simultaneously predicts the auction price, gives the distribution of prices, and produces latent feature vectors. While both types of neural network architectures we consider outperform simpler machine learning methods, convolutional networks outperform recurrent networks for comparable training time or model complexity. The resulting model powers our online price estimator and search engine.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.04879&r=all
  2. By: Stefan Irnich (Johannes Gutenberg University Mainz); Timo Hintsch (Johannes Gutenberg University Mainz); Lone Kiilerich (Aarhus University, Denmark)
    Abstract: The soft-clustered capacitated arc-routing problem (SoftCluCARP) is a restricted variant of the classical capacitated arc-routing problem. The only additional constraint is that the set of required edges, i.e., the streets to be serviced, is partitioned into clusters and feasible routes must respect the soft-cluster constraint, that is, all required edges of the same cluster must be served by the same vehicle. In this article, we design an effective branch-price-and-cut algorithm for the exact solution of the SoftCluCARP. Its new components are a metaheuristic and branch-and-cut-based solvers for the solution of the column-generation subproblem, which is a profitable rural clustered postman tour problem. Although postman problems with these characteristics have been studied before, there is one fundamental difference here: clusters are not necessarily vertex-disjoint, which prohibits many preprocessing and modeling approaches for clustered postman problems from the literature. We present an undirected and a windy formulation for the pricing subproblem and develop and computationally compare two corresponding branch-and-cut algorithms. Cutting is also performed at the master-program level using subset-row inequalities for subsets of size up to five. For the first time, these non-robust cuts are incorporated into MIP-based routing subproblem solvers using two different modeling approaches. In several computational studies, we calibrate the individual algorithmic components. The final computational experiments prove that the branch-price-and-cut algorithm equipped with these problem-tailored components is effective: the largest SoftCluCARP instances solved to optimality have more than 150 required edges or more than 50 clusters.
    JEL: J22 J61 R23
    Date: 2019–10–10
    URL: http://d.repec.org/n?u=RePEc:jgu:wpaper:1911&r=all
  3. By: Bolte, Jérôme; Castera, Camille; Pauwels, Edouard; Févotte, Cédric
    Abstract: We devise a learning algorithm for possibly nonsmooth deep neural networks featuring inertia and Newtonian directional intelligence only by means of a backpropagation oracle. Our algorithm, called INDIAN, has an appealing mechanical interpretation, making the role of its two hyperparameters transparent. An elementary phase space lifting allows both for its implementation and its theoretical study under very general assumptions. We handle in particular a stochastic version of our method (which encompasses usual mini-batch approaches) for nonsmooth activation functions (such as ReLU). Our algorithm shows high efficiency and reaches the state of the art on image classification problems.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:123630&r=all
  4. By: J. Lussange; S. Bourgeois-Gironde; S. Palminteri; B. Gutkin
    Abstract: In the past, financial stock markets have been studied with previous generations of multi-agent systems (MAS) that relied on zero-intelligence agents, often requiring so-called noise traders to sub-optimally emulate price formation processes. Recent advances in neuroscience and machine learning, however, have brought new tools for the bottom-up statistical inference of complex systems. Most importantly, such tools allow for studying new fields, such as agent learning, which in finance is central to information and stock price estimation. We present here the results of a new-generation MAS stock market simulator, where each agent autonomously learns to forecast prices and trade stocks via model-free reinforcement learning, and where the collective behaviour of all agents' trading decisions feeds a centralised double-auction limit order book, emulating price and volume microstructures. We study in detail what such agents learn and how heterogeneous the policies they develop over time are. We also show how the agents' learning rates and their propensity to be chartist or fundamentalist impact overall market stability and individual agent performance. We conclude with a study of the impact of agent information via random trading.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.05137&r=all
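A minimal sketch (hypothetical, not the authors' simulator) of the centralised double-auction matching that agents' orders feed into: the best bid is matched against the best ask for as long as the two sides of the book cross.

```python
import heapq

def match_orders(bids, asks):
    """bids/asks: lists of (price, qty). Returns the list of (price, qty) trades."""
    # max-heap on bid price (negated), min-heap on ask price
    bid_heap = [(-p, q) for p, q in bids]
    ask_heap = [(p, q) for p, q in asks]
    heapq.heapify(bid_heap)
    heapq.heapify(ask_heap)
    trades = []
    # trade while the best bid is at or above the best ask
    while bid_heap and ask_heap and -bid_heap[0][0] >= ask_heap[0][0]:
        bp, bq = heapq.heappop(bid_heap)
        ap, aq = heapq.heappop(ask_heap)
        qty = min(bq, aq)
        trades.append((ap, qty))                 # execute at the resting ask price
        if bq > qty:                             # push back any unfilled remainder
            heapq.heappush(bid_heap, (bp, bq - qty))
        if aq > qty:
            heapq.heappush(ask_heap, (ap, aq - qty))
    return trades

trades = match_orders(bids=[(101, 5), (100, 3)], asks=[(99, 4), (102, 6)])
```

Real limit order books also track time priority and per-order partial fills, which this sketch compresses into price-quantity pairs.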
  5. By: Damir Filipovi\'c; Kathrin Glau; Yuji Nakatsukasa; Francesco Statti
    Abstract: We propose a methodology for computing single and multi-asset European option prices, and more generally expectations of scalar functions of (multivariate) random variables. This new approach combines the ability of Monte Carlo simulation to handle high-dimensional problems with the efficiency of function approximation. Specifically, we first generalize the recently developed method for multivariate integration in [arXiv:1806.05492] to integration with respect to probability measures. The method is based on the principle "approximate and integrate" in three steps: (i) sample the integrand at points in the integration domain; (ii) approximate the integrand by solving a least-squares problem; (iii) integrate the approximate function. In high-dimensional applications we face memory limitations due to large storage requirements in step (ii). Combining weighted sampling and the randomized extended Kaczmarz algorithm, we obtain a new efficient approach to solving large-scale least-squares problems. Our convergence and cost analysis, along with numerical experiments, show the effectiveness of the method in both low and high dimensions, and under the assumption of a limited number of available simulations.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.07241&r=all
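The three-step "approximate and integrate" principle can be illustrated on a toy one-dimensional example (an assumed call-style payoff, not the paper's multi-asset setting), with a plain polynomial least-squares fit standing in for the paper's randomized extended Kaczmarz solver:

```python
import numpy as np

rng = np.random.default_rng(0)

def approximate_and_integrate(f, degree=6, n_fit=2000, n_mc=100_000):
    # (i) sample the integrand at draws from the measure (here N(0,1))
    x_fit = rng.standard_normal(n_fit)
    y_fit = f(x_fit)
    # (ii) approximate the integrand by a least-squares polynomial fit
    coeffs = np.polyfit(x_fit, y_fit, degree)
    # (iii) integrate the cheap approximation against a large fresh MC sample
    x_mc = rng.standard_normal(n_mc)
    return np.polyval(coeffs, x_mc).mean()

# call-style payoff (strike 1) on a lognormal price exp(X), X ~ N(0,1);
# the exact expectation is e^{1/2}*Phi(1) - 1/2, roughly 0.887
payoff = lambda x: np.maximum(np.exp(x) - 1.0, 0.0)
estimate = approximate_and_integrate(payoff)
```

The kink in the payoff limits polynomial accuracy here; the paper's weighted-sampling and Kaczmarz machinery addresses the large-scale version of step (ii).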
  6. By: Böhringer, Christoph (University of Oldenburg); Rosendahl, Knut Einar (School of Economics and Business, Norwegian University of Life Sciences); Briseid Storrøsten, Halvor (Statistics Norway)
    Abstract: Policy makers in the EU and elsewhere are concerned that unilateral carbon pricing induces carbon leakage through relocation of emission-intensive and trade-exposed industries to other regions. A common measure to mitigate such leakage is to combine an emission trading system (ETS) with output-based allocation (OBA) of allowances to exposed industries. We first show analytically that in a situation with an ETS combined with OBA, it is optimal to impose a consumption tax on the goods that are entitled to OBA, where the tax is equivalent in value to the OBA-rate. Then, using a multi-region, multi-sector computable general equilibrium (CGE) model calibrated to empirical data, we quantify the welfare gains for the EU to impose such a consumption tax on top of its existing ETS with OBA. We run Monte Carlo simulations to account for uncertain leakage exposure of goods entitled to OBA. The consumption tax increases welfare whether the goods are highly exposed to leakage or not. Thus, policy makers in regions with OBA can only gain by introducing the consumption tax. It can hence be regarded as smart hedging against carbon leakage.
    Keywords: Carbon leakage; output-based allocation; consumption tax
    JEL: D61 F18 H23 Q54
    Date: 2019–10–10
    URL: http://d.repec.org/n?u=RePEc:hhs:nlsseb:2019_004&r=all
  7. By: Dieckmann, Peter; Patterson, Mary; Lahlou, Saadi; Mesman, Jessica; Nyström, Patrik; Krage, Ralf
    Abstract: Simulation is traditionally used to reduce errors and their negative consequences. But according to modern safety theories, this focus overlooks the learning potential of the positive performance, which is much more common than errors. Therefore, a supplementary approach to simulation is needed to unfold its full potential. In our commentary, we describe the learning from success (LFS) approach to simulation and debriefing. Drawing on several theoretical frameworks, we suggest supplementing the widespread deficit-oriented, corrective approach to simulation with an approach that focuses on systematically understanding how good performance is produced in frequent (mundane) simulation scenarios. We advocate investigating and optimizing human activity based on the connected layers of any setting: the embodied competences of the healthcare professionals, the social and organizational rules that guide their actions, and the material aspects of the setting. We discuss implications of these theoretical perspectives for the design and conduct of simulation scenarios, post-simulation debriefings, and faculty development programs.
    JEL: G32
    Date: 2017–10–31
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:101889&r=all
  8. By: Maas, Benedikt
    Abstract: This paper introduces the Super Learner to nowcast and forecast the probability of a US economy recession in the current quarter and future quarters. The Super Learner is an algorithm that selects an optimal weighted average from several machine learning algorithms. In this paper, elastic net, random forests, gradient boosting machines and kernel support vector machines are used as underlying base learners of the Super Learner, which is trained with real-time vintages of the FRED-MD database as input data. The Super Learner’s ability to categorise future time periods into recessions versus expansions is compared with eight different alternatives based on probit models. The relative model performance is evaluated based on receiver operating characteristic (ROC) curves. In summary, the Super Learner predicts a recession very reliably across all forecast horizons, although it is defeated by different individual benchmark models on each horizon.
    Keywords: Machine Learning; Nowcasting; Forecasting; Business cycle analysis
    JEL: C32 C53 E32
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:96408&r=all
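The Super Learner's core idea — choosing a weighted average of base learners by cross-validated loss — can be sketched on toy data (two deliberately simple base learners, not the paper's elastic net / random forest / boosting / SVM ensemble):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 2.0, 200)
y = 1.5 * x + rng.normal(0.0, 0.5, 200)      # the true signal is linear

def fit_mean(xtr, ytr):
    # base learner 1: predict the global training mean everywhere
    m = ytr.mean()
    return lambda xs: np.full_like(xs, m)

def fit_linear(xtr, ytr):
    # base learner 2: ordinary least-squares line
    a, b = np.polyfit(xtr, ytr, 1)
    return lambda xs: a * xs + b

def cv_predictions(fit, x, y, k=5):
    # out-of-fold predictions: the ingredient the Super Learner weights
    preds = np.empty_like(y)
    for idx in np.array_split(np.arange(len(y)), k):
        mask = np.ones(len(y), dtype=bool)
        mask[idx] = False
        preds[idx] = fit(x[mask], y[mask])(x[idx])
    return preds

z = np.column_stack([cv_predictions(f, x, y) for f in (fit_mean, fit_linear)])
# grid-search the convex weight on the mean learner minimizing CV squared error
grid = np.linspace(0.0, 1.0, 101)
losses = [np.mean((y - (w * z[:, 0] + (1 - w) * z[:, 1])) ** 2) for w in grid]
best_w = grid[int(np.argmin(losses))]
```

Because the data are truly linear, nearly all weight should land on the OLS learner; with richer base learners the optimal weights typically mix several of them.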
  9. By: Bolte, Jérôme; Pauwels, Edouard
    Abstract: Modern problems in AI or in numerical analysis require nonsmooth approaches with a flexible calculus. We introduce generalized derivatives called conservative fields for which we develop a calculus and provide representation formulas. Functions having a conservative field are called path differentiable: convex, concave, Clarke regular and any semialgebraic Lipschitz continuous functions are path differentiable. Using Whitney stratification techniques for semialgebraic and definable sets, our model provides variational formulas for nonsmooth automatic differentiation oracles, as for instance the famous backpropagation algorithm in deep learning. Our differential model is applied to establish the convergence in values of nonsmooth stochastic gradient methods as they are implemented in practice.
    Keywords: Deep Learning, Automatic differentiation, Backpropagation algorithm, Nonsmooth stochastic optimization, Definable sets, o-minimal structures, Stochastic gradient, Clarke subdifferential, First order methods
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:123631&r=all
  10. By: Jason Anastasopoulos
    Abstract: The regression discontinuity design (RDD) has become the "gold standard" for causal inference with observational data. Local average treatment effects (LATE) for RDDs are often estimated using local linear regressions, with pre-treatment covariates typically added to increase the efficiency of treatment effect estimates; but their inclusion can have large impacts on LATE point estimates and standard errors, particularly in small samples. In this paper, I propose a principled, efficiency-maximizing approach for covariate adjustment of LATE in RDDs. This approach allows researchers to combine context-specific, substantive insights with automated model selection via a novel adaptive lasso algorithm. When combined with currently existing robust estimation methods, this approach improves the efficiency of LATE estimates in RDDs with pre-treatment covariates. The approach will be implemented in a forthcoming R package, AdaptiveRDD, which can be used to estimate and compare treatment effects generated by this approach with those from extant approaches.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.06381&r=all
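The adaptive-lasso ingredient can be sketched in a toy orthonormal-design setting (illustrative only, not the forthcoming AdaptiveRDD implementation): there the lasso reduces to soft-thresholding the OLS coefficients, and the adaptive version scales each penalty by the inverse magnitude of the first-stage estimate, so coefficients that look large in stage one are shrunk less.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 6
q, _ = np.linalg.qr(rng.normal(size=(n, p)))     # orthonormal design columns
beta_true = np.array([5.0, -4.0, 0.0, 0.0, 0.0, 0.0])
y = q @ beta_true + rng.normal(0.0, 0.1, n)

b_ols = q.T @ y                                  # first-stage OLS estimates

def adaptive_lasso(b_ols, lam):
    # adaptive weights: large first-stage coefficients get small penalties
    w = 1.0 / np.abs(b_ols)
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam * w, 0.0)

b_ada = adaptive_lasso(b_ols, lam=0.2)
selected = np.nonzero(b_ada)[0]                  # indices kept by the selector
```

The data-driven weights are what give the adaptive lasso its oracle-style selection: the two genuinely nonzero coefficients survive almost unshrunk while the noise coefficients are zeroed out.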
  11. By: Helene Maisonnave (ULH - Université Le Havre Normandie - NU - Normandie Université); Bernard Decaluwé (Université Laval); Margaret Chitiga
    Date: 2019–10–11
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02314221&r=all
  12. By: Geni, Bias Yulisa; Santony, Julius; Sumijan, Sumijan
    Abstract: Keeping paint products in stock to meet consumer demand is an issue that must be addressed. Sales are central to the business: as demand for goods increases, so does revenue. The purpose of this study is to predict the sales revenue of paint products at UD. Masdi Related, making it easy for the company's leadership to determine expected revenue quickly. The research also makes it easy for the company to adopt business strategies quickly and optimally. The data used in this research are paint product sales from January 2016 to December 2018, processed using the Monte Carlo method. Revenue is predicted for each year; in addition, the sales data are used to predict demand for each product every year. The results show that the Monte Carlo method predicts the sales revenue of paint products very well: in tests conducted on the system, predictions of paint product sales revenue reached an average accuracy of 89%. With this fairly high degree of accuracy, the Monte Carlo method can be applied to estimate the revenue from, and demand for, each paint product every year, which will help the leadership choose the right business strategy to increase paint product sales.
    Keywords: Modeling and Simulation, Monte Carlo, Revenue Prediction, Paint Products, Building-Supply Stores
    JEL: G0
    Date: 2019–10–14
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:96524&r=all
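The Monte Carlo approach described above can be sketched with invented numbers (the store's actual sales data are not reproduced here): draw monthly unit sales from the empirical historical distribution and average the simulated annual revenue.

```python
import random

random.seed(42)

# hypothetical monthly unit sales of one paint product over a past year
monthly_units = [120, 135, 110, 150, 140, 125, 130, 145, 115, 138, 128, 142]
price_per_unit = 50_000            # assumed price per can (e.g. rupiah)

def simulate_annual_revenue(history, price, n_runs=10_000):
    totals = []
    for _ in range(n_runs):
        # one simulated year: 12 monthly draws from the empirical distribution
        year_units = sum(random.choice(history) for _ in range(12))
        totals.append(year_units * price)
    return sum(totals) / len(totals)   # average simulated revenue

expected_revenue = simulate_annual_revenue(monthly_units, price_per_unit)
```

The same simulated draws also give a demand forecast per product; papers in this vein often map uniform random numbers onto cumulative frequency intervals, which `random.choice` on the raw history approximates.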
  13. By: Deli Chen; Yanyan Zou; Keiko Harimoto; Ruihan Bao; Xuancheng Ren; Xu Sun
    Abstract: Considering event structure information has proven helpful in text-based stock movement prediction. However, existing works mainly adopt coarse-grained events, which discards the specific semantic information of diverse event types. In this work, we propose to incorporate fine-grained events in stock movement prediction. First, we build a professional finance event dictionary with domain experts and use it to extract fine-grained events automatically from finance news. We then design a neural model that combines finance news, fine-grained event structure, and stock trade data to predict stock movement. In addition, to improve the generalizability of the proposed method, we design an advanced model that uses the extracted fine-grained events as distant-supervision labels to train a multi-task framework of event extraction and stock prediction. The experimental results show that our method outperforms all baselines and generalizes well.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.05078&r=all
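The dictionary-driven extraction step can be sketched with a toy keyword-to-event mapping (invented for illustration; the paper's dictionary is expert-built and far richer):

```python
# hypothetical finance event dictionary: keyword -> fine-grained event type
EVENT_DICT = {
    "acquire": "Acquisition",
    "merger": "Merger",
    "dividend": "Dividend",
    "lawsuit": "Litigation",
}

def extract_events(headline):
    # match dictionary keywords against whole lowercase tokens
    words = headline.lower().split()
    return [etype for kw, etype in EVENT_DICT.items() if kw in words]

events = extract_events("Company A to acquire rival in record merger deal")
```

In the paper this extractor both feeds the event-aware predictor directly and supplies distant-supervision labels for the multi-task model.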
  14. By: Smith, Gary (Pomona College)
    Abstract: Data mining is often used to discover patterns in Big Data. It is tempting to believe that because an unearthed pattern is unusual it must be meaningful, but patterns are inevitable in Big Data and usually meaningless. The paradox of Big Data is that data mining is most seductive when there are a large number of variables, but a large number of variables exacerbates the perils of data mining.
    Keywords: data mining, big data, machine learning
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:clm:pomwps:1003&r=all
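The claim that patterns are inevitable when many variables are screened is easy to demonstrate with a synthetic illustration (not the author's data): correlate one random target with many unrelated random series and watch the best in-sample correlation grow with the number of candidates, even though every true correlation is zero.

```python
import numpy as np

rng = np.random.default_rng(3)
n_obs = 50
target = rng.standard_normal(n_obs)      # pure noise "outcome"

def best_abs_correlation(n_vars):
    # screen n_vars unrelated random series and keep the best |correlation|
    x = rng.standard_normal((n_vars, n_obs))
    return max(abs(np.corrcoef(target, row)[0, 1]) for row in x)

few = best_abs_correlation(10)           # modest best correlation
many = best_abs_correlation(10_000)      # a "discovery" by chance alone
```

With 10,000 candidate variables the best spurious correlation typically exceeds 0.5 on 50 observations — exactly the seduction-and-peril trade-off the abstract describes.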
  15. By: Jifei Wang; Lingjing Wang
    Abstract: This paper studies deep learning methodologies for portfolio optimization in the US equities market. We present a novel residual switching network that can automatically sense changes in market regimes and switch between momentum and reversal predictors accordingly. The residual switching network architecture combines two separate residual networks (ResNets): a switching module that learns stock market conditions, and a main module that learns momentum and reversal predictors. We demonstrate that over-fitting to noisy financial data can be controlled with stacked residual blocks, and that further incorporating an attention mechanism enhances predictive power. Over the period 2008 to H1 2017, the residual switching network (Switching-ResNet) strategy delivered superior out-of-sample performance, with an average annual Sharpe ratio of 2.22, compared with 0.81 for an ANN-based strategy and 0.69 for a linear model.
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1910.07564&r=all
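The evaluation metric quoted above, the annualized Sharpe ratio, can be computed as follows (illustrative random returns, not the paper's strategy returns; 252 trading days per year assumed):

```python
import numpy as np

rng = np.random.default_rng(4)
# two years of fake daily returns: small positive drift, 1% daily volatility
daily_returns = rng.normal(0.0008, 0.01, 252 * 2)

def annualized_sharpe(returns, risk_free_daily=0.0):
    # mean/std of daily excess returns, scaled to annual by sqrt(252)
    excess = np.asarray(returns) - risk_free_daily
    return np.sqrt(252) * excess.mean() / excess.std(ddof=1)

sharpe = annualized_sharpe(daily_returns)
```

A drift of 0.08% per day against 1% daily volatility corresponds to an annualized Sharpe ratio near 1.3 in expectation, well below the 2.22 the paper reports for Switching-ResNet.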
  16. By: Stefania Albanesi; Domonkos F. Vamossy
    Abstract: We develop a model to predict consumer default based on deep learning. We show that the model consistently outperforms standard credit scoring models, even though it uses the same data. Our model is interpretable and is able to provide a score to a larger class of borrowers relative to standard credit scoring models while accurately tracking variations in systemic risk. We argue that these properties can provide valuable insights for the design of policies targeted at reducing consumer default and alleviating its burden on borrowers and lenders, as well as macroprudential regulation.
    Keywords: consumer default, credit scores, deep learning, macroprudential policy
    JEL: D14 E44 G21
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:hka:wpaper:2019-056&r=all

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.