nep-cmp New Economics Papers
on Computational Economics
Issue of 2023‒05‒22
23 papers chosen by



  1. The cross-sectional stock return predictions via quantum neural network and tensor network By Nozomu Kobayashi; Yoshiyuki Suimon; Koichi Miyamoto; Kosuke Mitarai
  2. Application of Tensor Neural Networks to Pricing Bermudan Swaptions By Raj G. Patel; Tomas Dominguez; Mohammad Dib; Samuel Palmer; Andrea Cadarso; Fernando De Lope Contreras; Abdelkader Ratnani; Francisco Gomez Casanova; Senaida Hernández-Santana; Álvaro Díaz-Fernández; Eva Andrés; Jorge Luis-Hita; Escolástico Sánchez-Martínez; Samuel Mugel; Roman Orus
  3. Enhanced multilayer perceptron with feature selection and grid search for travel mode choice prediction By Li Tang; Chuanli Tang; Qi Fu
  4. Machine Learning for Economics Research: When, What and How? By Ajit Desai
  5. An innovative Deep Learning Based Approach for Accurate Agricultural Crop Price Prediction By Mayank Ratan Bhardwaj; Jaydeep Pawar; Abhijnya Bhat; Deepanshu; Inavamsi Enaganti; Kartik Sagar; Y. Narahari
  6. Application of Machine Learning to a Credit Rating Classification Model: Techniques for Improving the Explainability of Machine Learning By Ryuichiro Hashimoto; Kakeru Miura; Yasunori Yoshizaki
  7. Deep parametric portfolio policies By Simon, Frederik; Weibels, Sebastian; Zimmermann, Tom
  8. Recurrent neural network based parameter estimation of Hawkes model on high-frequency financial data By Kyungsub Lee
  9. On suspicious tracks: machine-learning based approaches to detect cartels in railway-infrastructure procurement By Hannes Wallimann; Silvio Sticher
  10. RCTs Against the Machine: Can Machine Learning Prediction Methods Recover Experimental Treatment Effects? By Prest, Brian C.; Wichman, Casey; Palmer, Karen
  11. Improving the effectiveness of financial education programs. A targeting approach By Ginevra Buratti; Alessio D'Ignazio
  12. Error Spotting with Gradient Boosting: A Machine Learning-Based Application for Central Bank Data Quality By Csaba Burger; Mihály Berndt
  13. Measuring Human Capital with Social Media Data and Machine Learning By Martina Jakob; Sebastian Heinrich
  14. One Threshold Doesn’t Fit All: Tailoring Machine Learning Predictions of Consumer Default for Lower-Income Areas By Vitaly Meursault; Daniel Moulton; Larry Santucci; Nathan Schor
  15. The inflation attention cycle: Updating the Inflation Perception Indicator (IPI) up to February 2023. A research note By Müller, Henrik; Schmidt, Tobias; Rieger, Jonas; Hornig, Nico; Hufnagel, Lena Marie
  16. Smiles in Profiles: Improving Fairness and Efficiency Using Estimates of User Preferences in Online Marketplaces By Athey, Susan; Karlan, Dean; Palikot, Emil; Yuan, Yuan
  17. Early Warning System for Currency Crises using Long Short-Term Memory and Gated Recurrent Unit Neural Networks By Sylvain Barthélémy; Fabien Rondeau; Virginie Gautier
  18. Energy efficiency policies in an agent-based macroeconomic model By Marco Amendola; Francesco Lamperti; Andrea Roventini; Alessandro Sapio
  19. Portfolio Optimization using Predictive Auxiliary Classifier Generative Adversarial Networks with Measuring Uncertainty By Jiwook Kim; Minhyeok Lee
  20. The Unpredictability of Individual-Level Longevity By Breen, Casey; Seltzer, Nathan
  21. Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus? By John J. Horton
  22. The Political Economy of AI: Towards Democratic Control of the Means of Prediction By Kasy, Maximilian
  23. The Heterogeneous Effects of Lockdown Policies on Air Pollution By Simon Briole; Augustin Colette; Emmanuelle Lavaine

  1. By: Nozomu Kobayashi; Yoshiyuki Suimon; Koichi Miyamoto; Kosuke Mitarai
    Abstract: In this paper we investigate the application of quantum and quantum-inspired machine learning algorithms to stock return prediction. Specifically, we evaluate the performance of a quantum neural network, an algorithm suited for noisy intermediate-scale quantum computers, and a tensor network, a quantum-inspired machine learning algorithm, against classical models such as linear regression and neural networks. To evaluate their abilities, we construct portfolios based on their predictions and measure investment performance. The empirical study on the Japanese stock market shows that the tensor network model achieves superior performance compared to classical benchmark models, including linear and neural network models. Although the quantum neural network model attains a lower risk-adjusted excess return than the classical neural network models over the whole period, both the quantum neural network and tensor network models perform better in the most recent market environment, which suggests that these models are able to capture non-linear relationships between the input features.
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2304.12501&r=cmp
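    Sketch: A minimal illustration of the backtesting step described in the abstract above: turning cross-sectional return predictions into an equal-weighted long-short portfolio and measuring its performance. The prediction model here is a plain ridge regression on synthetic data, standing in for the paper's quantum neural network and tensor network models; all numbers are illustrative assumptions.
      # Long-short portfolio built from cross-sectional return predictions.
      # Ridge regression is a stand-in for the paper's quantum/tensor-network models.
      import numpy as np
      from sklearn.linear_model import Ridge

      rng = np.random.default_rng(0)
      n_months, n_stocks, n_features = 120, 200, 10

      # Synthetic panel: one row per (month, stock) with features and next-month return.
      X = rng.normal(size=(n_months * n_stocks, n_features))
      beta = rng.normal(scale=0.05, size=n_features)
      y = X @ beta + rng.normal(scale=0.1, size=len(X))
      month = np.repeat(np.arange(n_months), n_stocks)

      portfolio_returns = []
      for t in range(60, n_months):             # expanding-window backtest
          train, test = month < t, month == t
          pred = Ridge(alpha=1.0).fit(X[train], y[train]).predict(X[test])
          k = n_stocks // 5                     # long top quintile, short bottom quintile
          order = np.argsort(pred)
          portfolio_returns.append(y[test][order[-k:]].mean() - y[test][order[:k]].mean())

      portfolio_returns = np.array(portfolio_returns)
      sharpe = np.sqrt(12) * portfolio_returns.mean() / portfolio_returns.std()
      print(f"annualised Sharpe ratio of the long-short portfolio: {sharpe:.2f}")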
  2. By: Raj G. Patel; Tomas Dominguez; Mohammad Dib; Samuel Palmer; Andrea Cadarso; Fernando De Lope Contreras; Abdelkader Ratnani; Francisco Gomez Casanova; Senaida Hernández-Santana; Álvaro Díaz-Fernández; Eva Andrés; Jorge Luis-Hita; Escolástico Sánchez-Martínez; Samuel Mugel; Roman Orus
    Abstract: The Cheyette model is a quasi-Gaussian volatility interest rate model widely used to price interest rate derivatives such as European and Bermudan Swaptions for which Monte Carlo simulation has become the industry standard. In low dimensions, these approaches provide accurate and robust prices for European Swaptions but, even in this computationally simple setting, they are known to underestimate the value of Bermudan Swaptions when using the state variables as regressors. This is mainly due to the use of a finite number of predetermined basis functions in the regression. Moreover, in high-dimensional settings, these approaches succumb to the Curse of Dimensionality. To address these issues, Deep-learning techniques have been used to solve the backward Stochastic Differential Equation associated with the value process for European and Bermudan Swaptions; however, these methods are constrained by training time and memory. To overcome these limitations, we propose leveraging Tensor Neural Networks as they can provide significant parameter savings while attaining the same accuracy as classical Dense Neural Networks. In this paper we rigorously benchmark the performance of Tensor Neural Networks and Dense Neural Networks for pricing European and Bermudan Swaptions, and we show that Tensor Neural Networks can be trained faster than Dense Neural Networks and provide more accurate and robust prices than their Dense counterparts.
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2304.09750&r=cmp
  3. By: Li Tang; Chuanli Tang; Qi Fu
    Abstract: Accurate and reliable prediction of individual travel mode choices is crucial for developing multi-mode urban transportation systems, conducting transportation planning and formulating traffic demand management strategies. Traditional discrete choice models have dominated the modelling methods for decades, yet they suffer from strict model assumptions and low prediction accuracy. In recent years, machine learning (ML) models, such as neural networks and boosting models, have been widely used by researchers for travel mode choice prediction and have yielded promising results. However, despite their superior prediction performance, a large body of ML methods, especially the branch of neural network models, is also limited by overfitting and a tedious model-structure determination process. To bridge this gap, this study proposes an enhanced multilayer perceptron (MLP; a neural network) with two hidden layers for travel mode choice prediction; the MLP is enhanced by XGBoost (a boosting method) for feature selection and by a grid search method for determining the optimal number of neurons in each hidden layer. The proposed method was trained and tested on a real resident travel diary dataset collected in Chengdu, China.
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2304.12698&r=cmp
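    Sketch: A minimal version of the pipeline described above: XGBoost feature importances for feature selection, followed by a grid search over the number of neurons in each of the MLP's two hidden layers. The dataset, the number of selected features and the grids are illustrative placeholders, not those of the Chengdu travel-diary study.
      # XGBoost-based feature selection + grid search over two hidden-layer sizes.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV, train_test_split
      from sklearn.neural_network import MLPClassifier
      from xgboost import XGBClassifier

      X, y = make_classification(n_samples=2000, n_features=30, n_informative=8,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

      # Step 1: rank features with XGBoost and keep the ten most important ones.
      xgb = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                          eval_metric="mlogloss").fit(X_tr, y_tr)
      top = np.argsort(xgb.feature_importances_)[-10:]

      # Step 2: grid-search the number of neurons in each of the two hidden layers.
      grid = {"hidden_layer_sizes": [(h1, h2) for h1 in (16, 32, 64) for h2 in (8, 16, 32)]}
      search = GridSearchCV(MLPClassifier(max_iter=1000, random_state=0),
                            grid, cv=5, n_jobs=-1).fit(X_tr[:, top], y_tr)

      print("best hidden layer sizes:", search.best_params_["hidden_layer_sizes"])
      print("test accuracy:", search.best_estimator_.score(X_te[:, top], y_te))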
  4. By: Ajit Desai
    Abstract: This article provides a curated review of selected papers published in prominent economics journals that use machine learning (ML) tools for research and policy analysis. The review focuses on three key questions: (1) when ML is used in economics, (2) what ML models are commonly preferred, and (3) how they are used for economic applications. The review highlights that ML is particularly used in processing nontraditional and unstructured data, capturing strong nonlinearity, and improving prediction accuracy. Deep learning models are suitable for nontraditional data, whereas ensemble learning models are preferred for traditional datasets. While traditional econometric models may suffice for analyzing low-complexity data, the increasing complexity of economic data due to rapid digitalization and the growing literature suggest that ML is becoming an essential addition to the econometrician's toolbox.
    Date: 2023–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2304.00086&r=cmp
  5. By: Mayank Ratan Bhardwaj (Indian Institute of Science); Jaydeep Pawar (Indian Institute of Science); Abhijnya Bhat (PES University); Deepanshu (Indian Institute of Science); Inavamsi Enaganti (Indian Institute of Science); Kartik Sagar (Indian Institute of Science); Y. Narahari (Indian Institute of Science)
    Abstract: Accurate prediction of agricultural crop prices is a crucial input for decision-making by various stakeholders in agriculture: farmers, consumers, retailers, wholesalers, and the Government. These decisions have significant implications including, most importantly, the economic well-being of the farmers. In this paper, our objective is to accurately predict crop prices using historical price information, climate conditions, soil type, location, and other key determinants of crop prices. This is a technically challenging problem, which has been attempted before. In this paper, we propose an innovative deep learning based approach to achieve increased accuracy in price prediction. The proposed approach uses graph neural networks (GNNs) in conjunction with a standard convolutional neural network (CNN) model to exploit geospatial dependencies in prices. Our approach works well with noisy legacy data and produces a performance that is at least 20% better than the results available in the literature. We are able to predict prices up to 30 days ahead. We choose two vegetables, potato (stable price behavior) and tomato (volatile price behavior) and work with noisy public data available from Indian agricultural markets.
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2304.09761&r=cmp
  6. By: Ryuichiro Hashimoto (Bank of Japan); Kakeru Miura (Bank of Japan); Yasunori Yoshizaki (Bank of Japan)
    Abstract: Machine learning (ML) has been used increasingly in a wide range of operations at financial institutions. In the field of credit risk management, many financial institutions are starting to apply ML to credit scoring models and default models. In this paper we apply ML to a credit rating classification model. First, we estimate classification models based on both ML and ordinal logistic regression using the same dataset to see how model structure affects the prediction accuracy of models. In addition, we measure variable importance and decompose model predictions using so-called eXplainable AI (XAI) techniques that have been widely used in recent years. The results of our analysis are twofold. First, ML captures more accurately than ordinal logit regression the nonlinear relationships between financial indicators and credit ratings, leading to a significant improvement in prediction accuracy. Second, SHAP (Shapley Additive exPlanations) and PDP (Partial Dependence Plot) show that several financial indicators such as total revenue, total assets turnover, and ICR have a significant impact on firms’ credit quality. Nonlinear relationships between financial indicators and credit rating are also observed: a decrease in ICR below about 2 lowers firms’ credit quality sharply. Our analysis suggests that using XAI while understanding its underlying assumptions improves the low explainability of ML.
    Keywords: Credit risk management; Machine learning; Explainability; eXplainable AI (XAI)
    JEL: C49 C55 G32
    Date: 2023–04–21
    URL: http://d.repec.org/n?u=RePEc:boj:bojwps:wp23e06&r=cmp
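    Sketch: A minimal illustration of the XAI tools named above (SHAP values and a partial dependence curve) applied to a stand-in gradient-boosting model fitted on synthetic data. The model, the feature names (icr, total_revenue, asset_turnover) and the data-generating process are assumptions for illustration only, not the Bank of Japan's rating model or data.
      # SHAP and partial dependence for a stand-in gradient-boosting "rating" model.
      import numpy as np
      import pandas as pd
      import shap
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.inspection import partial_dependence

      rng = np.random.default_rng(0)
      n = 5000
      df = pd.DataFrame({
          "icr": rng.lognormal(mean=1.0, sigma=0.8, size=n),        # interest coverage ratio
          "total_revenue": rng.lognormal(mean=10.0, sigma=1.0, size=n),
          "asset_turnover": rng.normal(1.0, 0.3, size=n),
      })
      # Synthetic rating score that deteriorates sharply once ICR falls below ~2.
      score = (2.0 * np.minimum(df["icr"], 2.0) + 0.3 * df["asset_turnover"]
               + 0.1 * np.log(df["total_revenue"]) + rng.normal(0, 0.5, n))

      model = GradientBoostingRegressor(random_state=0).fit(df, score)

      # SHAP: per-observation contribution of each financial indicator.
      shap_values = shap.TreeExplainer(model).shap_values(df)
      importance = dict(zip(df.columns, np.abs(shap_values).mean(axis=0).round(3)))
      print("mean |SHAP| per feature:", importance)

      # PDP: average predicted score as a function of ICR alone.
      pdp = partial_dependence(model, df, features=["icr"], grid_resolution=20)
      print("partial dependence of the predicted score on ICR:", np.round(pdp["average"][0], 2))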
  7. By: Simon, Frederik; Weibels, Sebastian; Zimmermann, Tom
    Abstract: We directly optimize portfolio weights as a function of firm characteristics via deep neural networks by generalizing the parametric portfolio policy framework. Our results show that network-based portfolio policies result in an increase of investor utility of between 30 and 100 percent over a comparable linear portfolio policy, depending on whether portfolio restrictions on individual stock weights, short-selling or transaction costs are imposed, and depending on an investor's utility function. We provide extensive model interpretation and show that network-based policies better capture the non-linear relationship between investor utility and firm characteristics. Improvements can be traced to both variable interactions and non-linearity in functional form. Both the linear and the network-based approach agree on the same dominant predictors, namely past return-based firm characteristics.
    Keywords: Portfolio Choice, Machine Learning, Expected Utility
    JEL: G11 G12 C58 C45
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:cfrwps:2301&r=cmp
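    Sketch: A minimal network-based parametric portfolio policy in the spirit described above: portfolio weights are a benchmark (equal) weight plus a tilt produced by a small neural network of firm characteristics, and the network is trained to maximise average CRRA utility of the portfolio return. The data, network size and risk-aversion parameter are illustrative assumptions, not the paper's.
      # Deep parametric portfolio policy: maximise CRRA utility over synthetic data.
      import torch

      torch.manual_seed(0)
      T, N, K, gamma = 240, 100, 3, 5.0          # months, stocks, characteristics, risk aversion

      x = torch.randn(T, N, K)                   # standardised firm characteristics
      r = 0.01 + 0.02 * x[..., 0] + 0.03 * torch.randn(T, N)   # next-month returns

      policy = torch.nn.Sequential(torch.nn.Linear(K, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
      opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

      for epoch in range(500):
          tilt = policy(x).squeeze(-1)                                 # (T, N) deviations
          w = 1.0 / N + (tilt - tilt.mean(dim=1, keepdim=True)) / N    # weights sum to one
          rp = (w * r).sum(dim=1)                                      # monthly portfolio return
          utility = (1 + rp).clamp(min=1e-6) ** (1 - gamma) / (1 - gamma)
          loss = -utility.mean()                                       # maximise expected utility
          opt.zero_grad(); loss.backward(); opt.step()

      print("average realised CRRA utility:", -loss.item())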
  8. By: Kyungsub Lee
    Abstract: This study examines the use of a recurrent neural network for estimating the parameters of a Hawkes model based on high-frequency financial data, and subsequently, for computing volatility. Neural networks have shown promising results in various fields, and interest in financial applications is also growing. Our approach demonstrates significantly faster computational performance compared to traditional maximum likelihood estimation methods while yielding comparable accuracy in both simulation and empirical studies. Furthermore, we demonstrate the application of this method for real-time volatility measurement, enabling the continuous estimation of financial volatility as new price data arrive from the market.
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2304.11883&r=cmp
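    Sketch: A compact version of the general idea: simulate Hawkes paths with known parameters, summarise each path as binned event counts, and train a GRU to regress the parameters. The parameter ranges, bin width and network size are illustrative assumptions, not the paper's specification.
      # Train a GRU to recover Hawkes parameters from simulated event sequences.
      import numpy as np
      import torch

      def simulate_hawkes(mu, alpha, beta, horizon=100.0, rng=None):
          """Ogata thinning for lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
          rng = rng or np.random.default_rng()
          t, events = 0.0, []
          while True:
              past = np.array(events)
              lam_bar = mu + (alpha * np.exp(-beta * (t - past))).sum()  # bound until next event
              t += rng.exponential(1.0 / lam_bar)
              if t > horizon:
                  return past
              lam_t = mu + (alpha * np.exp(-beta * (t - past))).sum()
              if rng.uniform() < lam_t / lam_bar:
                  events.append(t)

      rng = np.random.default_rng(0)
      n_paths, n_bins, horizon = 500, 100, 100.0
      X, Y = [], []
      for _ in range(n_paths):
          mu, alpha, beta = rng.uniform(0.5, 1.5), rng.uniform(0.2, 0.8), 1.0
          counts, _ = np.histogram(simulate_hawkes(mu, alpha, beta, horizon, rng),
                                   bins=n_bins, range=(0.0, horizon))
          X.append(counts); Y.append([mu, alpha])
      X = torch.tensor(np.array(X), dtype=torch.float32).unsqueeze(-1)   # (paths, bins, 1)
      Y = torch.tensor(np.array(Y), dtype=torch.float32)

      gru = torch.nn.GRU(input_size=1, hidden_size=32, batch_first=True)
      head = torch.nn.Linear(32, 2)
      opt = torch.optim.Adam(list(gru.parameters()) + list(head.parameters()), lr=1e-3)
      for epoch in range(200):
          _, h = gru(X)                                    # final hidden state per path
          loss = torch.nn.functional.mse_loss(head(h.squeeze(0)), Y)
          opt.zero_grad(); loss.backward(); opt.step()
      print("training MSE on (mu, alpha):", round(loss.item(), 4))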
  9. By: Hannes Wallimann; Silvio Sticher
    Abstract: In railway infrastructure, construction and maintenance is typically procured using competitive procedures such as auctions. However, these procedures only fulfill their purpose - using (taxpayers') money efficiently - if bidders do not collude. Employing a unique dataset of the Swiss Federal Railways, we present two methods to detect potential collusion: First, we apply machine learning to screen tender databases for suspicious patterns. Second, we establish a novel category-managers' tool, which allows for sequential and decentralized screening. To the best of our knowledge, we are the first to illustrate the adaptation and application of machine-learning-based price screens to a railway-infrastructure market.
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2304.11888&r=cmp
  10. By: Prest, Brian C. (Resources for the Future); Wichman, Casey (Resources for the Future); Palmer, Karen (Resources for the Future)
    Abstract: We investigate how well machine learning counterfactual prediction tools can estimate causal treatment effects. We use three prediction algorithms—XGBoost, random forests, and LASSO—to estimate treatment effects using observational data. We compare those results to causal effects from a randomized experiment for electricity customers who faced critical-peak pricing and information treatments. Our results show that each algorithm replicates the true treatment effects, even when using data from treated households only. Additionally, when using both treatment households and nonexperimental comparison households, simpler difference-in-differences methods replicate the experimental benchmark, suggesting little benefit from ML approaches over standard program evaluation methods.
    Date: 2021–09–29
    URL: http://d.repec.org/n?u=RePEc:rff:dpaper:dp-21-30&r=cmp
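    Sketch: The counterfactual-prediction idea in miniature: fit prediction models on pre-treatment data for treated households, predict what post-treatment outcomes would have been absent treatment, and take actual minus predicted as the treatment-effect estimate. The data are synthetic with a true effect of -1.0 by construction; sklearn's GradientBoostingRegressor stands in for XGBoost.
      # ML counterfactual prediction of a treatment effect on synthetic electricity data.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(0)
      n, true_effect = 3000, -1.0

      temp = rng.normal(25, 5, size=n)               # covariates: temperature, peak-hour dummy
      peak_hour = rng.integers(0, 2, size=n)
      post = rng.integers(0, 2, size=n)              # post-treatment period indicator
      usage = (5 + 0.2 * temp + 2.0 * peak_hour + rng.normal(0, 1, n)
               + true_effect * post)                 # electricity use falls under treatment

      X = np.column_stack([temp, peak_hour])
      pre, post_idx = post == 0, post == 1

      for name, model in [("gradient boosting (XGBoost stand-in)", GradientBoostingRegressor(random_state=0)),
                          ("random forest", RandomForestRegressor(random_state=0)),
                          ("LASSO", LassoCV())]:
          model.fit(X[pre], usage[pre])              # learn pre-treatment usage patterns
          counterfactual = model.predict(X[post_idx])
          effect = (usage[post_idx] - counterfactual).mean()
          print(f"{name}: estimated treatment effect = {effect:.2f} (true {true_effect})")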
  11. By: Ginevra Buratti (Bank of Italy); Alessio D'Ignazio (Bank of Italy)
    Abstract: We investigate whether targeting algorithms can improve the effectiveness of financial education programs by identifying the most appropriate recipients in advance. To this end, we use micro-data from approximately 3,800 individuals who recently participated in a financial education campaign conducted in Italy. Firstly, we employ machine learning (ML) tools to devise a targeting rule that identifies the individuals who should be targeted primarily by a financial education campaign based on easily observable characteristics. Secondly, we simulate a policy scenario and show that pairing a financial education campaign with an ML-based targeting rule enhances its effectiveness. Finally, we discuss a number of conditions that must be met for ML-based targeting to be effectively implemented by policymakers.
    Keywords: financial education, machine learning, policy targeting, randomized controlled trials
    JEL: C38 I21 G5
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:bdi:opques:qef_765_23&r=cmp
  12. By: Csaba Burger (Magyar Nemzeti Bank (the Central Bank of Hungary)); Mihály Berndt (Clarity Consulting Kft)
    Abstract: Supervised machine learning methods applied in settings where no error labels are present are an increasingly popular way of identifying potential data errors. Such algorithms rely on the tenet of a ‘ground truth’ in the data, that is, they assume that the majority of observations are correct. Points deviating from the relationships implied by that majority, outliers, are flagged as potential data errors. This paper implements an outlier-based error-spotting algorithm using gradient boosting and presents a blueprint for the modelling pipeline. More specifically, it underpins three main modelling hypotheses with empirical evidence, related to (1) missing-value imputation, (2) the choice of loss function and (3) the location of the error. To do so, it takes a cross-sectional view of the loan-to-value ratio and its related columns in the Credit Registry (Hitelregiszter) of the Central Bank of Hungary (MNB) and introduces a set of synthetic error types to test its hypotheses. The paper shows, first, that gradient boosting is not materially affected by the choice of imputation method; replacement with a constant, the computationally cheapest option, is therefore recommended. Second, the Huber loss function, which is quadratic up to the Huber-slope parameter and linear above it, is better suited to coping with outlier values and is therefore better at capturing data errors. Finally, errors in the target variable are captured best, while errors in the predictors are hardly found at all. These empirical results may generalize to other cases, depending on data specificities, and the modelling pipeline described highlights the significant modelling decisions involved.
    Keywords: data quality, machine learning, gradient boosting, central banking, loss functions, missing values
    JEL: C5 C81 E58
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:mnb:opaper:2023/148&r=cmp
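    Sketch: A miniature version of the error-spotting pipeline described above: impute missing predictors with a constant, fit gradient boosting with a Huber loss on the target column (here a synthetic loan-to-value ratio), and flag the observations with the largest residuals as potential data errors. Column names, error rates and thresholds are illustrative assumptions, not those of the MNB Credit Registry.
      # Outlier-based error spotting with gradient boosting and a Huber loss.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.impute import SimpleImputer
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      n = 10000
      loan = rng.lognormal(mean=16.0, sigma=0.5, size=n)
      collateral = loan / rng.uniform(0.4, 0.9, size=n)
      ltv = loan / collateral                                   # true LTV between 0.4 and 0.9

      # Inject synthetic errors into the target column (e.g. a misplaced decimal point).
      is_error = rng.uniform(size=n) < 0.01
      ltv[is_error] *= 10

      X = np.column_stack([np.log(loan), np.log(collateral)])
      X[rng.uniform(size=n) < 0.05, 0] = np.nan                 # some missing predictor values

      model = make_pipeline(
          SimpleImputer(strategy="constant", fill_value=0.0),   # cheapest imputation choice
          GradientBoostingRegressor(loss="huber", alpha=0.9, random_state=0),
      )
      model.fit(X, ltv)
      residuals = np.abs(ltv - model.predict(X))

      flagged = residuals > np.quantile(residuals, 0.99)        # flag the top 1% of residuals
      recall = (flagged & is_error).sum() / is_error.sum()
      print(f"share of injected errors caught by the flag: {recall:.0%}")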
  13. By: Martina Jakob; Sebastian Heinrich
    Abstract: In response to persistent gaps in the availability of survey data, a new strand of research leverages alternative data sources through machine learning to track global development. While previous applications have been successful at predicting outcomes such as wealth, poverty or population density, we show that educational outcomes can be accurately estimated using geo-coded Twitter data and machine learning. Based on various input features, including user and tweet characteristics, topics, spelling mistakes, and network indicators, we can account for ~70 percent of the variation in educational attainment in Mexican municipalities and US counties.
    Keywords: machine learning, social media data, education, human capital, indicators, natural language processing
    JEL: C53 C80 O11 O15 I21 I25
    Date: 2023–05–05
    URL: http://d.repec.org/n?u=RePEc:bss:wpaper:46&r=cmp
  14. By: Vitaly Meursault; Daniel Moulton; Larry Santucci; Nathan Schor
    Abstract: Modeling advances create credit scores that predict default better overall, but raise concerns about their effect on protected groups. Focusing on low- and moderate-income (LMI) areas, we use an approach from the Fairness in Machine Learning literature — fairness constraints via group-specific prediction thresholds — and show that gaps in true positive rates (% of non-defaulters identified by the model as such) can be significantly reduced if separate thresholds can be chosen for non-LMI and LMI tracts. However, the reduction isn’t free as more defaulters are classified as good risks, potentially affecting both consumers’ welfare and lenders’ profits. The trade-offs become more favorable if the introduction of fairness constraints is paired with the introduction of more sophisticated models, suggesting a way forward. Overall, our results highlight the potential benefits of explicitly considering sensitive attributes in the design of loan approval policies and the potential benefits of output-based approaches to fairness in lending.
    Keywords: Credit Scores; Group Disparities; Machine Learning; Fairness
    JEL: G51 C38 C53
    Date: 2022–11–21
    URL: http://d.repec.org/n?u=RePEc:fip:fedpwp:95158&r=cmp
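    Sketch: The group-specific threshold idea in miniature: with predicted credit scores and an LMI indicator in hand, choose a separate approval cutoff for each group so that the true positive rate (share of non-defaulters approved) is equalised. The data-generating process and the target rate of 85% are illustrative assumptions.
      # Group-specific approval thresholds that equalise true positive rates.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 20000
      lmi = rng.uniform(size=n) < 0.3                           # indicator: LMI tract
      default = rng.uniform(size=n) < np.where(lmi, 0.15, 0.08)
      # A noisier score in LMI tracts mimics weaker predictive performance there.
      score = (1 - default) + rng.normal(0, np.where(lmi, 0.35, 0.15), size=n)

      def tpr_and_cut(scores, good, target_tpr):
          """Cutoff approving target_tpr of actual non-defaulters, and the realised TPR."""
          cut = np.quantile(scores[good], 1 - target_tpr)
          return cut, ((scores >= cut) & good).sum() / good.sum()

      target = 0.85
      single_cut = np.quantile(score[~default], 1 - target)     # one threshold for everyone
      for group, mask in [("non-LMI", ~lmi), ("LMI", lmi)]:
          g_score, g_good = score[mask], ~default[mask]
          tpr_single = ((g_score >= single_cut) & g_good).sum() / g_good.sum()
          _, tpr_group = tpr_and_cut(g_score, g_good, target)
          print(f"{group}: TPR with one threshold = {tpr_single:.2f}, "
                f"with a group-specific threshold = {tpr_group:.2f}")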
  15. By: Müller, Henrik; Schmidt, Tobias; Rieger, Jonas; Hornig, Nico; Hufnagel, Lena Marie
    Keywords: Inflation, expectations, narratives, latent Dirichlet allocation, text mining, computational methods
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:docmaw:13&r=cmp
  16. By: Athey, Susan (Stanford U); Karlan, Dean (Northwestern U); Palikot, Emil (Stanford U); Yuan, Yuan (Carnegie Mellon U)
    Abstract: Online platforms often face challenges being both fair (i.e., non-discriminatory) and efficient (i.e., maximizing revenue). Using computer vision algorithms and observational data from a microlending marketplace, we find that choices made by borrowers creating online profiles impact both of these objectives. We further support this conclusion with a web-based randomized survey experiment. In the experiment, we create profile images using Generative Adversarial Networks that differ in a specific feature and estimate its impact on lender demand. We then counterfactually evaluate alternative platform policies and identify particular approaches to influencing the changeable profile photo features that can ameliorate the fairness-efficiency tension.
    JEL: D0 D41 J0 O1
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:ecl:stabus:4071&r=cmp
  17. By: Sylvain Barthélémy (TAC Economics, Saint-Hilaire-des-Landes, France); Fabien Rondeau (Univ Rennes, CNRS, CREM – UMR6211, F-35000 Rennes France); Virginie Gautier (TAC Economics and University of Rennes, France.)
    Abstract: Currency crises, recurrent events in economic history for developing, emerging and developed countries, generate disastrous economic consequences. This paper proposes an early warning system for currency crises using sophisticated recurrent neural networks, namely Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). These models were initially used in language processing, where they performed well. Such models are increasingly used in forecasting financial asset prices, including exchange rates, but they have not yet been applied to the prediction of currency crises. As with all recurrent neural networks, they take into account non-linear interactions between variables and the influence of past data in a dynamic form. For a set of 68 countries including developed, emerging and developing economies over the period 1995-2020, LSTM and GRU outperformed our benchmark models. LSTM and GRU correctly sent continuous signals within a two-year warning window ahead of 91% of the crises. For LSTM, false signals represent only 14% of the emitted signals, compared to 23% for the logistic regression, making them efficient early warning systems for policymakers.
    Keywords: currency crises, early warning system, neural network, long short-term memory, gated recurrent unit
    JEL: F14 F31 F47
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:tut:cremwp:2023-05&r=cmp
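    Sketch: A minimal LSTM early-warning classifier in the spirit described above: rolling windows of macroeconomic indicators map to the probability of a crisis within the warning window. The synthetic data, window length, architecture and 0.5 signal threshold are illustrative assumptions, not the paper's 68-country specification.
      # LSTM early-warning model: window of indicators -> probability of a crisis.
      import torch

      torch.manual_seed(0)
      n_windows, window_len, n_indicators = 2000, 36, 8        # 36 months of 8 indicators

      X = torch.randn(n_windows, window_len, n_indicators)
      # Synthetic label: a crisis follows sustained deterioration of the first indicator.
      y = (X[:, -12:, 0].mean(dim=1) < -0.3).float()

      class EarlyWarning(torch.nn.Module):
          def __init__(self):
              super().__init__()
              self.lstm = torch.nn.LSTM(n_indicators, 32, batch_first=True)
              self.out = torch.nn.Linear(32, 1)
          def forward(self, x):
              _, (h, _) = self.lstm(x)                         # final hidden state
              return self.out(h.squeeze(0)).squeeze(-1)        # crisis logit

      model = EarlyWarning()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = torch.nn.BCEWithLogitsLoss()
      for epoch in range(300):
          loss = loss_fn(model(X), y)
          opt.zero_grad(); loss.backward(); opt.step()

      signal = torch.sigmoid(model(X)) > 0.5                   # emitted warning signals
      tpr = ((signal & (y == 1)).float().sum() / (y == 1).sum()).item()
      print(f"share of crisis windows correctly signalled: {tpr:.0%}")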
  18. By: Marco Amendola; Francesco Lamperti; Andrea Roventini; Alessandro Sapio
    Abstract: Improvements in energy efficiency can help to address the ongoing climate and energy crises, yet the energy intensity of economic activities at the global level has in recent years decreased more slowly than is required to achieve climate goals. Based on this premise, the paper builds a macroeconomic agent-based K+S model to study the effects of different policies on energy efficiency. In the model, the energy efficiency of capital goods improves as the outcome of endogenous, bottom-up technical change. The public policies analysed range from indirect policies based on taxes, incentives and subsidies, rooted in the traditional role of the State as fixing market failures, to direct technological policies, akin to the entrepreneurial-state approach, in which a public research laboratory invests in R&D with the aim of establishing a new technological paradigm for energy efficiency. Simulation results show that while most of the policies tested are effective in reducing energy intensity, the public research lab is extremely effective in promoting energy efficiency without deteriorating macroeconomic and public finance conditions. The superiority of the national lab policy, however, emerges only over a relatively long time horizon, highlighting the importance of governments that are patient enough to wait for the returns of that policy and the need to complement this strategy with more "ready to use" indirect measures. Additionally, results indicate that the macroeconomic rebound effect induced by most of the policies is rather small. Concerns about macroeconomic rebound effects are therefore most likely often overstated.
    Keywords: Energy efficiency policies; Sustainability; Rebound effect; Agent-based modelling.
    Date: 2023–05–09
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2023/20&r=cmp
  19. By: Jiwook Kim; Minhyeok Lee
    Abstract: In financial engineering, portfolio optimization has been of consistent interest. Portfolio optimization is the process of adjusting asset allocations to maximize expected returns and minimize risk. To obtain the expected returns, deep learning models have been explored in recent years. However, due to the deterministic nature of these models, it is difficult to account for portfolio risk, because conventional deep learning models cannot express how reliable their predictions are. To address this limitation, this paper proposes a probabilistic model, namely predictive auxiliary classifier generative adversarial networks (PredACGAN). The proposed PredACGAN exploits the characteristic of the ACGAN framework that the output of the generator forms a distribution. While ACGAN has not previously been employed for predictive models and is generally used for image sample generation, this paper proposes a method to use the ACGAN structure for a probabilistic and predictive model. Additionally, an algorithm that uses the risk measurement obtained from PredACGAN is proposed. In the algorithm, assets that are predicted to be at high risk are eliminated from the investment universe at the rebalancing moment, so PredACGAN considers both return and risk when optimizing portfolios. The proposed algorithm and PredACGAN are evaluated with daily close price data of the S&P 500 from 1990 to 2020, with portfolios rebalanced monthly according to PredACGAN's predictions and risk measures. A portfolio using PredACGAN exhibits 9.123% yearly returns and a Sharpe ratio of 1.054, while a portfolio that ignores the risk measure shows 1.024% yearly returns and a Sharpe ratio of 0.236 in the same scenario. The maximum drawdown of the proposed portfolio is also lower than that of the portfolio without PredACGAN.
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2304.11856&r=cmp
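    Sketch: The rebalancing rule only, in miniature: given draws from a predictive return distribution for each asset (here plain normal samples standing in for the PredACGAN generator's output), use the spread of the draws as the risk measure, drop the riskiest assets from the investment universe, and equal-weight the rest. All numbers are illustrative assumptions.
      # Risk-filtered rebalancing using samples from a predictive return distribution.
      import numpy as np

      rng = np.random.default_rng(0)
      n_assets, n_samples = 50, 1000

      true_mean = rng.normal(0.005, 0.01, size=n_assets)        # monthly expected returns
      true_vol = rng.uniform(0.02, 0.15, size=n_assets)
      # Stand-in for generator output: draws from each asset's predictive distribution.
      pred_samples = rng.normal(true_mean, true_vol, size=(n_samples, n_assets))

      pred_mean = pred_samples.mean(axis=0)
      pred_risk = pred_samples.std(axis=0)                      # uncertainty of the prediction

      keep = pred_risk < np.quantile(pred_risk, 0.8)            # eliminate the riskiest 20%
      chosen = np.where(keep & (pred_mean > 0))[0]              # hold only positive forecasts
      weights = np.zeros(n_assets)
      weights[chosen] = 1.0 / len(chosen)

      realised = rng.normal(true_mean, true_vol)                # one realised month of returns
      print(f"portfolio return with risk filter: {weights @ realised:.3%}")
      print(f"equal-weight benchmark return:     {realised.mean():.3%}")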
  20. By: Breen, Casey; Seltzer, Nathan (University of California, Berkeley)
    Abstract: How accurately can age of death be predicted using basic sociodemographic characteristics? We test this question using a large-scale administrative dataset combining the complete count 1940 Census with Social Security death records. We fit eight machine learning algorithms using 35 sociodemographic predictors to generate individual-level predictions of age of death for birth cohorts born at the beginning of the 20th century. We find that none of these algorithms are able to explain more than 1.5% of the variation in age of death. Our results suggest mortality is inherently unpredictable and underscore the challenges of using algorithms to predict major life outcomes.
    Date: 2023–04–08
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:znsqg&r=cmp
  21. By: John J. Horton
    Abstract: Newly-developed large language models (LLM)—because of how they are trained and designed—are implicit computational models of humans—a homo silicus. LLMs can be used like economists use homo economicus: they can be given endowments, information, preferences, and so on, and then their behavior can be explored in scenarios via simulation. Experiments using this approach, derived from Charness and Rabin (2002), Kahneman, Knetsch and Thaler (1986), and Samuelson and Zeckhauser (1988) show qualitatively similar results to the original, but it is also easy to try variations for fresh insights. LLMs could allow researchers to pilot studies via simulation first, searching for novel social science insights to test in the real world.
    JEL: D0
    Date: 2023–04
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31122&r=cmp
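    Sketch: The "homo silicus" recipe in miniature: endow a simulated agent with preferences through its prompt, present a simple allocation choice in the spirit of the Charness and Rabin games cited above, and record the answer across endowed preference types. The function query_llm is a hypothetical placeholder for whatever chat-completion API one actually uses, since the abstract does not prescribe a specific client, and the payoffs shown are illustrative.
      # Endow simulated agents with preferences and observe their choices.
      def query_llm(system_prompt: str, user_prompt: str) -> str:
          """Hypothetical stand-in for a call to an LLM chat-completion endpoint."""
          raise NotImplementedError("plug in your preferred LLM client here")

      def simulate_agent(social_preference: str) -> str:
          system_prompt = (
              "You are a participant in an economics experiment. "
              f"Your attitude toward others: {social_preference}. "
              "Answer with exactly 'Left' or 'Right'."
          )
          user_prompt = (
              "Choose an allocation of money between you and another person.\n"
              "Left: you get 400, the other person gets 400.\n"
              "Right: you get 750, the other person gets 375.\n"
              "Which do you choose?"
          )
          return query_llm(system_prompt, user_prompt).strip()

      # Vary the endowed preferences, as one would vary treatments across subjects.
      for preference in ["you care only about your own payoff",
                         "you dislike inequality between players",
                         "you want to maximise the total payoff"]:
          try:
              print(preference, "->", simulate_agent(preference))
          except NotImplementedError:
              print(preference, "-> (no LLM client configured)")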
  22. By: Kasy, Maximilian
    Abstract: This chapter discusses the regulation of artificial intelligence (AI) from the vantage point of political economy, based on the following premises: (i) AI systems maximize a single, measurable objective. (ii) In society, different individuals have different objectives. AI systems generate winners and losers. (iii) Society-level assessments of AI require trading off individual gains and losses. (iv) AI requires democratic control of algorithms, data, and computational infrastructure, to align algorithm objectives and social welfare. I address several debates regarding the ethics and social impact of AI, including (i) fairness, discrimination, and inequality, (ii) privacy, data property rights, and data governance, (iii) value alignment and the impending robot apocalypse, (iv) explainability and accountability for automated decision-making, and (v) automation and the impact of AI on the labor market and on wage inequality. (Stone Center on Socio-Economic Inequality Working Paper)
    Date: 2023–04–19
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:x7pcy&r=cmp
  23. By: Simon Briole (CEE-M - Centre d'Economie de l'Environnement - Montpellier - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - Institut Agro Montpellier - Institut Agro - Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement - UM - Université de Montpellier); Augustin Colette (INERIS - Institut National de l'Environnement Industriel et des Risques); Emmanuelle Lavaine (CEE-M - Centre d'Economie de l'Environnement - Montpellier - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement - Institut Agro Montpellier - Institut Agro - Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement - UM - Université de Montpellier)
    Abstract: While a sharp decline in air pollution has been documented during early Covid-19 lockdown periods, the stability and homogeneity of this effect are still under debate. Building on pollution data with a very high level of resolution, this paper estimates the impact of lockdown policies on PM2.5 exposure in France over the whole year 2020. Our analyses highlight a surprising and undocumented increase in exposure to particulate pollution during lockdown periods. This result is observed during both lockdown periods, in early spring and late fall, and is robust to several identification strategies and model specifications. Combining administrative datasets with machine learning techniques, this paper also highlights strong spatial heterogeneity in lockdown effects, especially according to long-term pollution exposure.
    Keywords: air pollution, PM2.5, lockdown, spatial heterogeneity, machine learning, Covid-19
    Date: 2023–04–28
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-04084912&r=cmp

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.