Heterogeneity, fraud, and the design of a crowdsourced review platform

Bibliographic Details
Author / Creator: Vongsathorn, Xan Alexander
Imprint: Ann Arbor: ProQuest Dissertations & Theses, 2015
Description: 1 electronic resource (90 pages)
Language: English
Format: E-Resource; Dissertations
Local Note: School code: 0330
URL for this record: http://pi.lib.uchicago.edu/1001/cat/bib/10773091
Additional Bibliographic Details
Other authors / contributors: University of Chicago (degree granting institution)
ISBN: 9781321884418
Notes: Advisor: Steven D. Levitt. Committee members: Eric B. Budish; Matthew Gentzkow.
This item must not be sold to any third party vendors.
This item must not be added to any third party search indexes.
Dissertation Abstracts International, Volume: 76-11(E), Section: A.
Summary: Crowdsourced reviews are becoming a dominant source of information across a wide range of products, suggesting that crowdsourcing has strong advantages over traditional sources of information such as expert reviews and brand reputation. When preferences are heterogeneous, crowdsourcing can be more informative by enabling consumers to locate the opinions of like-minded consumers. At the same time, it introduces the possibility that a seller may write fraudulent reviews that imitate genuine consumer reviews. I develop a model in which the design of the crowdsourcing platform determines the likelihood of consumers finding reviews written by like-minded consumers, which in turn determines the incentives for fraud on the part of the seller. Because heterogeneity may largely be reflected by multidimensional review text rather than numerical ratings, my model does not allow the platform itself to comprehend and aggregate individual reviews. Instead, it determines the likelihood that type-t consumers will find type-t reviews; in turn, consumers read and react to the reviews delivered by the platform.
If reviews are well-targeted, they convey more information, but also increase equilibrium fraud, which dilutes the quality of the information. Despite the latter effect, I find that consumers always prefer a design with better-targeted reviews, while the seller is made weakly worse off. The profit-maximizing choice for the platform in turn depends on whether its revenue is consumer-based (e.g. subscriptions or ads), seller-based (e.g. sales commissions or flat participation fee), or some combination of the two. Furthermore, a crowdsourcing platform outperforms an expert review platform only if preferences are sufficiently heterogeneous and the cost of fraud is sufficiently high.
Turning to empirics, I construct a new dataset that includes daily hotel revenue data for 426 Chicago-area hotels together with dated TripAdvisor ratings and review characteristics. This enables me to test whether consumers are significantly influenced by the arrival of individual reviews (rather than simply average ratings). I find that the arrival of a 1-star review results in an expected decrease of $1.37 in revenue per room per day in the following week, relative to a 5-star review. For a typical hotel, this is equivalent in revenue terms to losing 2.7 customers per night. The effect is stronger for reviews that spend longer at the top of the review page, and does not appear to be driven by endogenous unobserved quality or reverse causality between ratings and revenue. In addition, I find evidence that consumer heterogeneity plays a significant role in the impact of reviews. In particular, reviews written by business travelers are more influential than reviews written by leisure travelers, with a much stronger effect during non-peak months when business travel makes up a larger fraction of overall travel. This suggests that consumers respond more strongly to reviews written by like-minded consumers, and thus that heterogeneous preferences contribute to the success of crowdsourcing.
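The conversion from the per-room estimate to "customers lost per night" can be checked with back-of-envelope arithmetic. A minimal sketch in Python: the $1.37 per room per day figure is from the abstract, while the hotel size (300 rooms) and average daily rate ($152) are hypothetical values chosen only to illustrate how such a conversion works; the abstract does not report what "typical" means here.

```python
# Back-of-envelope check of the revenue effect described above.
# LOSS_PER_ROOM_PER_DAY is the abstract's estimate; ROOMS and
# AVG_DAILY_RATE are HYPOTHETICAL illustrative values, not data
# from the dissertation.

LOSS_PER_ROOM_PER_DAY = 1.37   # $ drop per room per day after a 1-star (vs 5-star) review
ROOMS = 300                    # hypothetical hotel size
AVG_DAILY_RATE = 152.0         # hypothetical average price per room-night, in $

daily_loss = LOSS_PER_ROOM_PER_DAY * ROOMS      # total revenue lost per day
customers_lost = daily_loss / AVG_DAILY_RATE    # equivalent room-nights forgone

print(f"daily revenue loss: ${daily_loss:.0f}")            # → daily revenue loss: $411
print(f"equivalent customers per night: {customers_lost:.1f}")  # → equivalent customers per night: 2.7
```

Under these assumed values the per-room estimate scales to roughly $411 of lost revenue per day, or about 2.7 room-nights; a hotel with a different size or rate would translate the same per-room effect into a different customer count.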