nep-cmp New Economics Papers
on Computational Economics
Issue of 2019‒09‒23
thirteen papers chosen by



  1. An Alternative Solution Method for Continuous-Time Heterogeneous Agent Models with Aggregate Shocks By Nobuhide Okahata
  2. Construct Validation for a Nonlinear Measurement Model in Marketing and Consumer Behavior Research By Toshikuni Sato
  3. Geographic Clustering of Firms in China By Douglas Hanley; Chengying Luo; Mingqin Wu
  4. Endogenous segregation dynamics and housing market interactions: An ABM approach By Bonakdar, Said Benjamin
  5. Real level of public investment: how to manage the inflation? By Ngouhouo, Ibrahim; Tchoffo, Rodrigue
  6. Developing a model for energy retrofit in large building portfolios: energy assessment, optimization and uncertainty By Laura Gabrielli; Aurora Ruggeri
  7. The Globe as a Network: Geography and the Origins of the World Income Distribution By Matthew Delventhal
  8. Impact of Fiscal Consolidation on the Mongolian Economy By Ragchaasuren Galindev; Tsolmon Baatarzorig; Nyambaatar Batbayar; Delgermaa Begz; Unurjargal Davaa; Oyunzul Tserendorj
  9. Application of machine learning in real estate transactions – automation of due diligence processes based on digital building documentation By Philipp Maximilian Müller
  10. Improving Regulatory Effectiveness through Better Targeting: Evidence from OSHA By Johnson, Matthew S; Levine, David I; Toffel, Michael W
  11. A Structural Model of a Multitasking Salesforce: Job Task Allocation and Incentive Plan Design By Minkyung Kim; K. Sudhir; Kosuke Uetake
  12. Challenges in Machine Learning for Document Classification in the Real Estate Industry By Mario Bodenbender; Björn-Martin Kurzrock
  13. MD&A Disclosure and Performance of U.S. REITs: The Information Content of Textual Tone By Marina Koelbl

  1. By: Nobuhide Okahata (Ohio State University)
    Abstract: The increasing availability of micro data has led researchers to develop increasingly rich heterogeneous agent models. Solving these models involves nontrivial computational costs. The continuous-time solution method proposed by Ahn, Kaplan, Moll, Winberry, and Wolf (NBER Macroeconomics Annual 2017, volume 32) is dramatically fast, making feasible the solution of heterogeneous agent models with aggregate shocks by applying local perturbation and dimension reduction. While this computational innovation contributes enormously to expanding the research frontier, the essential reliance on local linearization limits the class of problems researchers can investigate to those where certainty equivalence with respect to aggregate shocks holds. This implies that it may be unsuitable for analyzing models where large aggregate shocks exist or nonlinearity matters. To resolve this issue, I propose an alternative solution method for continuous-time heterogeneous agent models with aggregate shocks by extending the Backward Induction method originally developed for discrete-time models by Reiter (2010). The proposed method is nonlinear and global with respect to both idiosyncratic and aggregate shocks. I apply this method to solve a Krusell and Smith (1998) economy and evaluate its performance along two dimensions: accuracy and computation speed. I find that the proposed method is accurate even with large aggregate shocks and high curvature without sacrificing computation speed (the baseline economy is solved within a few seconds). The new method is also applied to a model with recursive utility and an Overlapping Generations (OLG) model, and it solves both models quickly and accurately.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:red:sed019:1470&r=all
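    Illustrative sketch (not from the paper): the backward-induction logic can be conveyed by a toy discrete-time consumption-savings problem with an aggregate productivity shock, solved globally on a grid; all parameters, grids and names below are assumptions for illustration only.

      import numpy as np

      # Toy consumption-savings problem solved by backward induction:
      # idiosyncratic assets a, aggregate productivity z on a 2-state Markov chain.
      beta, gamma, r, w = 0.96, 2.0, 0.03, 1.0            # illustrative parameters
      a_grid = np.linspace(0.0, 20.0, 200)                # asset grid
      z_grid = np.array([0.9, 1.1])                       # aggregate states
      P = np.array([[0.9, 0.1], [0.1, 0.9]])              # transition matrix for z
      T = 200                                             # horizon long enough to approximate stationarity

      def u(c):
          return c**(1 - gamma) / (1 - gamma)

      V = np.zeros((len(a_grid), len(z_grid)))            # terminal value
      for t in range(T):
          EV = V @ P.T                                    # expected continuation value over z'
          V_new = np.empty_like(V)
          for iz, z in enumerate(z_grid):
              # consumption for every (current a, next a') combination
              c = (1 + r) * a_grid[:, None] + w * z - a_grid[None, :]
              cont = EV[:, iz][None, :]                   # continuation value at a', given current z
              val = np.where(c > 0, u(np.maximum(c, 1e-12)) + beta * cont, -np.inf)
              V_new[:, iz] = val.max(axis=1)              # global maximization over the a' grid
          V = V_new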
  2. By: Toshikuni Sato
    Abstract: This study proposes a method to evaluate the construct validity of a nonlinear measurement model. Construct validation is required when applying measurement and structural equation models to questionnaire data from consumer and related social science research. However, previous studies have not sufficiently discussed the nonlinear measurement model and its construct validation. This study focuses on convergent and discriminant validation as important processes for checking whether estimated latent variables represent the defined constructs. To assess convergent and discriminant validity in the nonlinear measurement model, previous methods are extended and new indexes are investigated through simulation studies. An empirical analysis is also provided, which shows that a nonlinear measurement model is better than a linear model in both fit and validity. Moreover, a new concept of construct validation is discussed for future research: it considers the interpretability of machine learning (such as neural networks), because construct validation plays an important role in interpreting latent variables.
    Date: 2019–08
    URL: http://d.repec.org/n?u=RePEc:toh:dssraa:101&r=all
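    Illustrative sketch (assumed numbers): the conventional linear-model indices that the paper extends, namely average variance extracted (AVE) and composite reliability, can be computed from standardized loadings as below; the loadings and the latent correlation are made up.

      import numpy as np

      def ave(loadings):
          """Average variance extracted from standardized loadings (Fornell & Larcker, 1981)."""
          lam = np.asarray(loadings)
          return np.mean(lam**2)

      def composite_reliability(loadings):
          """Composite reliability from standardized loadings."""
          lam = np.asarray(loadings)
          return lam.sum()**2 / (lam.sum()**2 + np.sum(1 - lam**2))

      # Hypothetical standardized loadings for two constructs and their latent correlation
      construct_a = [0.82, 0.75, 0.68]
      construct_b = [0.71, 0.79, 0.64]
      phi_ab = 0.55

      # Convergent validity: AVE > 0.5; discriminant validity: AVE exceeds the squared correlation
      print(ave(construct_a), ave(construct_b), composite_reliability(construct_a), phi_ab**2)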
  3. By: Douglas Hanley (University of Pittsburgh); Chengying Luo (University of Pittsburgh); Mingqin Wu (South China Normal University)
    Abstract: The spatial arrangement of firms is known to be a critical factor influencing a variety of firm level outcomes. Numerous existing studies have investigated the importance of firm density and localization at various spatial scales, as well as agglomeration by industry. In this paper, we bring relatively new data and techniques to bear on the issue. Regarding the data, we use a comprehensive census of firms conducted by the National Bureau of Statistics of China (NBS). This covers firms in all industries and localities, and we have waves from both 2004 and 2008 available. Past studies have largely relied on manufacturing firms. This additional data allows us to look more closely at clustering within services, as well as potential spillovers between services and manufacturing. Further, by looking at the case of China, we get a snapshot of a country (especially in the early 2000s) in a period of rapid transition, but one that has already industrialized to a considerable degree. Additionally, this is an environment shaped by far more aggressive industrial policies than those seen in much of Western Europe and North America. In terms of techniques, we take a machine learning approach to understanding firm clustering and agglomeration. Specifically, we use images generated by density maps of firm location data (from the NBS data) as well as linked satellite imagery from the Landsat 7 spacecraft. This allows us to frame the issue as one of prediction. By predicting firm outcomes such as profitability, productivity, and growth using these images, we can understand their relationship to firm clustering. By turning this into a prediction problem using images as inputs, we can tap into the rich and rapidly evolving literature in computer science and machine learning on deep convolutional neural networks (CNNs). Additionally, we can utilize software and hardware tools developed for these purposes.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:red:sed019:1522&r=all
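    Illustrative sketch (not the authors' code): a deep convolutional network of the kind referred to in the abstract can be set up in a few lines; the two input channels stand in for a firm-density map and a Landsat band, and the architecture, sizes and synthetic data are assumptions.

      import torch
      import torch.nn as nn

      # Minimal CNN mapping a 2-channel image (firm-density map + satellite band)
      # to a continuous firm outcome such as log productivity.
      class FirmOutcomeCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.head = nn.Linear(32, 1)

          def forward(self, x):
              return self.head(self.features(x).flatten(1))

      model = FirmOutcomeCNN()
      x = torch.randn(8, 2, 64, 64)                # batch of 8 synthetic 64x64 two-channel images
      loss = nn.functional.mse_loss(model(x).squeeze(1), torch.randn(8))
      loss.backward()                              # gradients for one illustrative training step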
  4. By: Bonakdar, Said Benjamin
    Abstract: In contrast to previous research, I hypothesize that residential segregation patterns do not result only from individuals' perceptions of different ethnicities but are also affected by housing market interactions and socioeconomic endowments, such as income and education. I implement a theoretical agent-based model which contains three main features: agents' socioeconomic endowment, the quantification of their Willingness-to-Stay within a neighborhood, and housing market interactions if an agent decides to move. The results indicate that housing market interactions, the valuation of socioeconomic factors, and also the increasing share of minority groups diminish the absolute level of racial segregation. The analysis shows that house price clusters dominate urban areas, since individuals have an incentive to stay in more expensive neighborhoods in which they secured a bargain. An increase in house price segregation can be observed if individuals strongly undervalue their own house and if individuals have greater access to credit. I show that these market interactions lead to lock-in effects for low-income individuals, since they lack the necessary budget and suffer from negative equity. Thus, residential segregation depends strongly on housing market interactions and is more complex than presumed by Schelling's Spatial Model or the White Flight Hypothesis.
    Keywords: agent-based modelling,residential choice,housing demand,neighborhood characteristics,segregation
    JEL: C63 R21 R23
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:rwirep:819&r=all
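    Illustrative sketch (not the author's model): a minimal agent-based loop in which a Willingness-to-Stay score combines neighborhood similarity with affordability, and moves are constrained by house prices; all weights, thresholds and distributions are assumed for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      N = 30
      grid = rng.choice([0, 1, -1], size=(N, N), p=[0.45, 0.45, 0.10])   # two groups, -1 = vacant
      income = rng.lognormal(mean=0.0, sigma=0.5, size=(N, N))
      price = rng.uniform(0.5, 1.5, size=(N, N))

      def willingness_to_stay(i, j):
          """Share of same-group neighbours, blended with local affordability."""
          g = grid[i, j]
          nb = grid[max(i-1, 0):i+2, max(j-1, 0):j+2]
          same = (nb == g).sum() - 1
          occupied = (nb != -1).sum() - 1
          similarity = same / occupied if occupied > 0 else 1.0
          affordability = min(income[i, j] / price[i, j], 1.0)
          return 0.7 * similarity + 0.3 * affordability

      for _ in range(10_000):                      # random sequential updating
          i, j = rng.integers(N, size=2)
          if grid[i, j] == -1:
              continue
          if willingness_to_stay(i, j) < 0.5:      # unhappy agent tries to move
              vi, vj = np.argwhere(grid == -1)[rng.integers((grid == -1).sum())]
              if income[i, j] >= price[vi, vj]:    # moving only if the vacancy is affordable
                  grid[vi, vj], grid[i, j] = grid[i, j], -1
                  income[vi, vj], income[i, j] = income[i, j], income[vi, vj]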
  5. By: Ngouhouo, Ibrahim; Tchoffo, Rodrigue
    Abstract: When the government collects a supplementary indirect tax on an output, the price of that output increases as a consequence. Using the resulting revenue for public investment then leads to under-consumption of the total revenue invested, because the mechanism itself creates inflation. This paper investigates how to determine the net amount of investment projects once the effect of inflation is taken into account. We use a computable general equilibrium model to test our hypothesis and show that several simulations are needed in order to reach the equilibrium.
    Keywords: Government spending; inflation; taxes; investment; computable general equilibrium
    JEL: C68 E62 H50
    Date: 2019–09–15
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:95914&r=all
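    Illustrative sketch (stylized, not the authors' CGE model): the iterative adjustment described in the abstract, where the tax raises the price level and thereby erodes the real value of the investment it finances, can be mimicked by a simple fixed-point loop; every number and functional form below is assumed.

      tax_rate, base_output = 0.05, 100.0
      price, real_investment = 1.0, 0.0
      for step in range(100):
          revenue = tax_rate * base_output * price                                  # nominal indirect-tax revenue
          new_price = 1.0 + 0.6 * tax_rate + 0.1 * real_investment / base_output    # stylized inflation response
          new_real_investment = revenue / new_price                                 # revenue deflated by the new price level
          if abs(new_real_investment - real_investment) < 1e-10:                    # stop once the simulation settles
              break
          price, real_investment = new_price, new_real_investment
      print(round(real_investment, 4))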
  6. By: Laura Gabrielli; Aurora Ruggeri
    Abstract: In recent years, a growing interest has developed around strategies and methodologies for energy efficiency in buildings. Nevertheless, the focus has usually been on single properties, and the scientific literature still lacks solutions for building portfolios. Asset owners, however, require reliable decision tools to select the most effective retrofit solutions. This study develops a model capable of identifying the optimal allocation of financial resources for energy enhancements in large building portfolios. The core idea is to assist and strongly orient the decision-making process through a comprehensive new methodology. Several novelties characterize this research. First, the approach covers every aspect of energy retrofits, from preliminary analysis to construction and management. Second, the level of detail required is not excessively burdensome, while still ensuring good reliability. Third, the approach is interdisciplinary, connecting statistical techniques (regression analysis), economic feasibility (life cycle costing, discounted cash flow analysis), optimization modelling (multi-attribute linear programming), and risk simulation (Monte Carlo simulation). The method was applied to a portfolio of 25 buildings in northern Italy for testing and validation. It was possible to compare several design alternatives and identify the best outcome, demonstrating how the model can be used in real applications. The most significant achievement of this study lies in its flexibility, which allows countless design scenarios to be compared until the optimum is attained. Another significant result is the synergic integration of traditional financial techniques with operational research. A further novelty is the use of a two-dimensional Monte Carlo simulation to measure risk, treating uncertainty as a structural part of the study. The methodology helps identify the best options from both an energy and an economic point of view, setting priorities, time-distributing interventions and optimizing cash flows. The research could therefore be useful for portfolio managers, asset holders, private investors or public administrations who have to plan and manage a series of energy efficiency actions.
    Keywords: building portfolios; Energy Efficiency; life cycle costing; Linear Regression; Monte Carlo Simulation
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_203&r=all
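    Illustrative sketch (assumed figures and measures): the risk-simulation ingredient of the methodology, a Monte Carlo over uncertain energy prices and retrofit performance feeding a discounted cash flow, might look as follows; the paper's two-dimensional simulation and linear-programming steps are not shown.

      import numpy as np

      rng = np.random.default_rng(42)
      n_sims, years, discount_rate = 10_000, 20, 0.04

      def npv_of_measure(capex, annual_savings_kwh):
          """Monte Carlo NPV of one retrofit measure with uncertain prices and performance."""
          energy_price = rng.normal(0.22, 0.03, size=(n_sims, 1))      # EUR/kWh, uncertain
          performance = rng.uniform(0.85, 1.05, size=(n_sims, 1))      # realized vs. predicted savings
          t = np.arange(1, years + 1)
          cash_flows = annual_savings_kwh * energy_price * performance / (1 + discount_rate) ** t
          return cash_flows.sum(axis=1) - capex

      npv_a = npv_of_measure(capex=120_000, annual_savings_kwh=60_000)   # e.g. facade insulation
      npv_b = npv_of_measure(capex=45_000, annual_savings_kwh=28_000)    # e.g. heating system upgrade
      print(np.mean(npv_a), np.percentile(npv_a, 5), np.mean(npv_b), np.percentile(npv_b, 5))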
  7. By: Matthew Delventhal (Claremont McKenna College)
    Abstract: In this paper I develop a quantitative dynamic spatial model of global economic development over the long run. There is an agricultural (ancient) sector and a non-agricultural (modern) sector. Innovation, technology diffusion, and population growth are endogenous. A set of plausible parameter restrictions makes this model amenable to analysis using classic network theory concepts. Aggregate connectivity is summarized by the largest eigenvalue of the matrix of inverse iceberg transport costs, and the long-run path of the world economy displays threshold behavior. If transport costs are high enough, the world remains in a stagnant, Malthusian steady state; if they are low enough, an endogenous process of sustained growth in population and income is set off. Taking the model to the data, I divide the world into 16,000 1-degree-by-1-degree quadrangles. I infer bilateral transport costs by calculating the cheapest route between each pair of locations given the placement of rivers, oceans and mountains. I infer a series of global transport networks using historical estimates of the costs of transport over land and water and their evolution over time. I then simulate the evolution of population and income from the year 1000 until the year 2000 CE. I use the model to calculate two sets of location-specific efficiency parameters, one for the ancient sector and one for the modern sector, that rationalize both the year 1000 population distribution and the year 2000 distribution of income per capita. I then calculate the relative contributions of each set of efficiency wedges, and of key historical shifts in transport costs, to the year 2000 variance of per-capita real income.
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:red:sed019:840&r=all
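    Illustrative sketch (not from the paper): the connectivity statistic described in the abstract, the largest eigenvalue of the matrix of inverse iceberg transport costs, is straightforward to compute for a synthetic network; the cost draws and the threshold value below are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 50
      # Symmetric iceberg transport costs tau_ij >= 1 between n locations (illustrative).
      tau = 1.0 + rng.uniform(0.2, 2.0, size=(n, n))
      tau = (tau + tau.T) / 2
      np.fill_diagonal(tau, 1.0)

      A = 1.0 / tau                               # matrix of inverse transport costs
      lam_max = np.max(np.linalg.eigvalsh(A))     # largest eigenvalue summarizes aggregate connectivity

      threshold = 20.0                            # hypothetical threshold implied by model parameters
      regime = "sustained growth" if lam_max > threshold else "Malthusian stagnation"
      print(round(lam_max, 2), regime)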
  8. By: Ragchaasuren Galindev; Tsolmon Baatarzorig; Nyambaatar Batbayar; Delgermaa Begz; Unurjargal Davaa; Oyunzul Tserendorj
    Abstract: The Government of Mongolia began implementing an IMF program under the Extended Fund Facility (EFF) agreement in May 2017. Under the program, the government has decreased expenditures and increased taxes to achieve debt sustainability via fiscal consolidation and stable growth. At the same time, the government has faced challenges stemming from its commitment to fiscal consolidation under the IMF program: the rising price of fuel and its own fuel-subsidy policies. We used the PEP standard static CGE model to examine the impact of fiscal consolidation on the Mongolian economy under various conditions. Moreover, we used a poverty (microsimulation) model to analyze those impacts at the household level. Our analysis of the impact of fiscal consolidation under pessimistic and optimistic mineral-commodity-price scenarios showed that Mongolia’s economy was closely tied to international commodity prices. Our examination of the government’s alternative fuel-subsidy policies in an environment of fiscal consolidation demonstrated that the effect of increased fuel prices on the economy depended on government fuel-subsidy policy.
    Keywords: CGE model, Mongolian economy, Mining, Fiscal consolidation
    JEL: D58 E62 I32 Q33
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:lvl:mpiacr:2019-20&r=all
  9. By: Philipp Maximilian Müller
    Abstract: To minimize risks and increase transparency, every company needs reliable information. The quality and completeness of digital building documentation is increasingly a “deal maker” or “deal breaker” in real estate transactions. However, there is a fundamental lack of instruments for leveraging internal data, and a risk of overlooking the essentials. In real estate transactions, the parties generally have just a few weeks for due diligence (DD). A large variety of documents needs to be elaborately prepared and made available in data rooms. As a result, gaps in the documentation may remain hidden and can only be identified with great effort. Missing documents may result in high purchase price discounts. Therefore, investors are increasingly using a data-driven approach to gain essential knowledge in transaction processes. Digital technologies in due diligence processes should help to reduce existing information asymmetries and support data-driven decisions. The paper describes an approach to automating due diligence processes, with a focus on Technical Due Diligence (TDD), using machine learning (ML), especially information extraction. The overall aim is to extract relevant information from building-related documents in order to generate a semi-automated report on the structural (and environmental) condition of properties. The contribution examines due diligence reports on more than twenty office and retail properties. More than ten different companies generated the reports between 2006 and 2016. The research provides a standardized TDD reporting structure which will be of relevance for both research and practice. To define the information relevant for the report, document classes are reviewed and the data they contain is prioritized. Based on this, various document classes are analyzed and relevant text passages are segmented. A framework is developed to extract data from the documents, store it and provide it in a standardized form. Moreover, the current use of machine learning in DD processes, the research method and framework used for the automation of TDD, and its potential benefits for transactions and risk management are presented.
    Keywords: Artificial Intelligence; digital building documentation; Due diligence; Machine Learning; Real estate transactions
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_208&r=all
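    Illustrative sketch (hypothetical patterns and field names): a rule-based fragment of the information-extraction step, pulling a few standardized TDD fields from raw document text; a production system would rely on trained extraction models rather than hand-written patterns.

      import re

      # Toy information extraction for a TDD report from raw building-document text.
      FIELD_PATTERNS = {
          "construction_year": re.compile(r"(?:constructed|built|Baujahr)\D{0,10}(\d{4})", re.I),
          "gross_floor_area_m2": re.compile(r"(\d[\d.,]*)\s*(?:m2|m²|sqm)", re.I),
          "last_roof_inspection": re.compile(r"roof inspection[^\d]{0,20}(\d{4})", re.I),
      }

      def extract_fields(text):
          record = {}
          for field, pattern in FIELD_PATTERNS.items():
              match = pattern.search(text)
              record[field] = match.group(1) if match else None
          return record

      sample = "The property was built in 1987 with a gross floor area of 12,500 m2. Last roof inspection: 2015."
      print(extract_fields(sample))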
  10. By: Johnson, Matthew S; Levine, David I; Toffel, Michael W
    Abstract: We study how a regulator can best allocate its limited inspection resources. We direct our analysis to a US Occupational Safety and Health Administration (OSHA) inspection program that targeted dangerous establishments and allocated some inspections via random assignment. We find that inspections reduced serious injuries by an average of 9% over the following five years. We use new machine learning methods to estimate the effects of counterfactual targeting rules OSHA could have deployed. OSHA could have averted over twice as many injuries if its inspections had targeted the establishments where we predict inspections would avert the most injuries. The agency could have averted nearly as many additional injuries by targeting the establishments predicted to have the most injuries. Both of these targeting regimes would have generated over $1 billion in social value over the decade we examine. Our results demonstrate the promise, and limitations, of using machine learning to improve resource allocation.
    Keywords: Social and Behavioral Sciences, Public Policy
    JEL: I18 L51 J38 J8
    Date: 2019–09–01
    URL: http://d.repec.org/n?u=RePEc:cdl:indrel:qt1gq7z4j3&r=all
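    Illustrative sketch (synthetic data, not the authors' estimator): one of the counterfactual targeting rules, inspecting the establishments with the most predicted injuries, reduces to fitting a prediction model and ranking establishments by its output; the paper's other rule targets predicted treatment effects instead, which is not shown here.

      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n = 5_000
      X = rng.normal(size=(n, 6))                                   # establishment characteristics (synthetic)
      injuries = np.maximum(0, 2 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)).round()

      X_tr, X_te, y_tr, y_te = train_test_split(X, injuries, random_state=0)
      model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

      budget = 200                                                  # number of inspections the agency can afford
      predicted = model.predict(X_te)
      target_idx = np.argsort(predicted)[::-1][:budget]             # inspect establishments with most predicted injuries
      print("mean predicted injuries among targeted:", predicted[target_idx].mean())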
  11. By: Minkyung Kim (School of Management, Yale University); K. Sudhir (Cowles Foundation & School of Management, Yale University); Kosuke Uetake (School of Management, Yale University)
    Abstract: We develop the first structural model of a multitasking salesforce to address questions of job design and incentive compensation design. The model incorporates three novel features: (i) multitasking effort choice given a multidimensional incentive plan; (ii) the salesperson’s private information about customers; and (iii) dynamic intertemporal tradeoffs in effort choice across the tasks. The empirical application uses data from a microfinance bank where loan officers are jointly responsible and incentivized for both loan acquisition and repayment, but it has broad relevance for salesforce management in CRM settings involving customer acquisition and retention. We extend two-step estimation methods used for unidimensional compensation plans to the multitasking model with private information and intertemporal incentives by combining flexible machine learning (random forests) for the inference of private information with the first-stage multitasking policy function estimation. Estimates reveal two latent segments of salespeople: a “hunter” segment that is more efficient in loan acquisition and a “farmer” segment that is more efficient in loan collection. We use counterfactuals to assess how (i) multitasking versus specialization in job design, (ii) performance combination across tasks (multiplicative versus additive), and (iii) job transfers that affect private information impact firm profits and segment-specific behaviors.
    Keywords: Salesforce compensation, Multitasking, Multi-dimensional incentives, Private information, Adverse selection, Moral hazard
    JEL: C61 J33 L11 L23 L14 M31 M52 M55
    Date: 2019–09
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:2199&r=all
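    Illustrative sketch (stylized stand-in for the two-step logic): a flexible first step infers a proxy for the salesperson's private information from outcomes and observables, and the proxy then enters a first-stage policy-function regression; the data-generating process, functional forms and variable names are all assumptions.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(3)
      n = 4_000
      customer_obs = rng.normal(size=(n, 4))          # observable customer characteristics (synthetic)
      private_info = rng.normal(size=n)               # salesperson's private signal (unobserved by the firm)
      repayment = customer_obs[:, 0] + 0.8 * private_info + rng.normal(scale=0.5, size=n)

      # Step 1: flexible ML inference of private information from outcomes and observables.
      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(customer_obs, repayment)
      inferred_private = repayment - rf.predict(customer_obs)   # residual proxies the private signal

      # Step 2: first-stage policy function, effort allocated to collection as a function of the state.
      collection_effort = 0.5 + 0.3 * inferred_private + rng.normal(scale=0.2, size=n)
      state = np.column_stack([customer_obs, inferred_private])
      policy = LinearRegression().fit(state, collection_effort)
      print(policy.coef_.round(2))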
  12. By: Mario Bodenbender; Björn-Martin Kurzrock
    Abstract: Data rooms are becoming more and more important for the real estate industry. They permit the creation of protected areas in which a variety of relevant documents are typically made available to interested parties. In addition to supporting purchase and sales processes, they are used primarily in larger construction projects. The structures and index designations of data rooms have not yet been uniformly regulated on an international basis. Data room indices are created using different types of approaches, and thus the indices also diverge in their depth of detail as well as in their range of topics. In practice, rules already exist for structuring documentation for individual phases, as well as for transferring data between these phases. Since all of the documentation must be transferable when changing to another life cycle phase or participant, the information must always be clearly identified and structured in order to enable the protection, access and administration of this information at all times. This poses a challenge for companies because the documents are subject to several rounds of restructuring during their life cycle, which are not only costly, but also always entail the risk of data loss. The goal of current research is therefore seamless storage as well as a permanent and unambiguous classification of the documents over the individual life cycle phases. In the field of text classification, machine learning offers considerable potential in terms of reduced workload, process acceleration and quality improvement. In data rooms, machine learning (in particular document classification) is used to automatically classify the documents contained in the data room, or the documents to be imported, and assign them to a suitable index point. In this manner, a document is always assigned to the class to which it belongs with the greatest probability (for example, based on word frequency). An essential prerequisite for the success of machine learning for document classification is the quality of the document classes as well as of the training data. When defining the document classes, it must be guaranteed on the one hand that they do not overlap in content, so that documents can be clearly allocated thematically. On the other hand, it must also be possible to accommodate documents that may appear later and to scale the model according to the requirements. For the training and test sets, as well as for the documents to be analyzed later, the quality of the respective documents and their readability are also decisive factors. In order to analyze the documents effectively, the content must be standardized and it must be possible to remove non-relevant content in advance. Based on the empirical analysis of 8,965 digital documents of fourteen properties from eight different owners, the paper presents a model with more than 1,300 document classes as a basis for an automated structuring and migration of documents over the life cycle of real estate. To validate these classes, machine learning algorithms were trained and analyzed to determine under which conditions, and how, the highest possible classification accuracy can be achieved. Stemmer and stop-word lists tailored to these analyses were also developed for this purpose. Using these lists, the accuracy of the machine learning classification is further increased, since they are specifically aligned to terms used in the real estate industry. The paper also shows which aspects have to be taken into account at an early stage when digitizing extensive data/document inventories, since automation using machine learning can only be as good as the quality, legibility and interpretability of the data allow.
    Keywords: data room; Digitization; document classification; Machine Learning; real estate data
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_370&r=all
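    Illustrative sketch (toy corpus, invented classes and stop words): the core of the document-classification step, a text vectorizer with a domain-specific stop-word list feeding a standard classifier, can be expressed compactly as below.

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Tiny illustration of data-room document classification with a domain stop-word list.
      documents = [
          "tenancy agreement between landlord and tenant for office space",
          "annual maintenance report for the heating ventilation system",
          "fire safety inspection certificate issued by the authority",
          "lease contract amendment rent adjustment office floor",
          "boiler service protocol maintenance technician report",
          "certificate of fire protection inspection sprinkler system",
      ]
      labels = ["lease", "maintenance", "fire_safety", "lease", "maintenance", "fire_safety"]

      domain_stop_words = ["office", "report", "system"]   # illustrative real-estate-specific stop words

      clf = make_pipeline(
          TfidfVectorizer(stop_words=domain_stop_words, lowercase=True),
          LogisticRegression(max_iter=1000),
      )
      clf.fit(documents, labels)
      print(clf.predict(["rent increase letter for the tenant of the office building"]))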
  13. By: Marina Koelbl
    Abstract: Textual sentiment analysis provides an increasingly important approach to addressing many pivotal questions in behavioral finance, not least because in today’s world a huge amount of information is stored as text rather than numeric data (Nasukawa and Nagano, 2001). For example, Chen et al. (2014) analyze articles published on Seeking Alpha and find the fraction of negative words to be correlated with contemporaneous and subsequent stock returns. Tetlock (2007) emphasizes that high values of media pessimism induce downward pressure on market prices. Moreover, Li (2010) and Davis et al. (2012) investigate corporate disclosures such as earnings press releases or annual and quarterly reports and find disclosure tone to be associated with future firm performance. Sentiment analysis has also garnered increased attention in related real estate research in recent years. For example, Ruscheinsky et al. (forthcoming) extract sentiment from newspaper articles and analyze the relationship between measures of sentiment and US REIT prices. However, sentiment analysis in real estate still lags behind. Whereas related research in accounting and finance investigates multiple disclosure outlets such as news media, public corporate disclosures, analyst reports, and internet postings, the real estate literature covers only a limited spectrum. Although corporate disclosures are a natural source of textual sentiment for researchers, since they are official releases from insiders who have better knowledge of the firm than outsiders (e.g., media persons), they have not yet been analyzed in a real estate context (Kearney and Liu, 2014). By observing annual and quarterly reports of U.S. REITs present in the NAREIT over a 15-year timespan (2003-2017), this study examines whether the information disclosed in the Management’s Discussion and Analysis (MD&A) of U.S. REITs is associated with future firm performance and generates a market response. The MD&A is particularly suitable for the analysis because the U.S. Securities and Exchange Commission (SEC) mandates publicly traded firms to signal expectations regarding future firm performance in this section (SEC, 2003). To assess the tone of the MD&A, the Loughran and McDonald (2011) financial dictionary as well as a machine learning approach are employed. To allow a deeper understanding of disclosure practices, the study also examines the readability of the MD&A and the topics discussed in this section, to determine whether those aspects are linked to either disclosure tone or future firm performance. To the best of my knowledge, this is the first study to analyze exclusively for REITs whether the language in the MD&A is associated with future firm performance and whether the market responds to unexpected levels of sentiment.
    Keywords: 10K; REITs; Sentiment; Textual Analysis
    JEL: R3
    Date: 2019–01–01
    URL: http://d.repec.org/n?u=RePEc:arz:wpaper:eres2019_281&r=all
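    Illustrative sketch (abbreviated word lists): dictionary-based tone scoring of an MD&A passage; the handful of words below merely stands in for the full Loughran and McDonald (2011) lists, and the sample sentence is invented.

      import re

      # Toy dictionary-based net-tone measure for a disclosure passage.
      NEGATIVE = {"loss", "losses", "impairment", "decline", "adverse", "litigation"}
      POSITIVE = {"gain", "gains", "improved", "strong", "achieve", "favorable"}

      def net_tone(text):
          words = re.findall(r"[a-z]+", text.lower())
          neg = sum(w in NEGATIVE for w in words)
          pos = sum(w in POSITIVE for w in words)
          return (pos - neg) / max(pos + neg, 1)

      mdna = "Occupancy improved and same-store NOI showed strong gains, despite an impairment charge."
      print(net_tone(mdna))   # positive net tone in this toy passage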

General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.