Three simple steps to get your research peer-certified and published
Upload your paper and get instant AI-powered journal recommendations and preliminary review feedback.
Request human peer review from verified experts in your field for comprehensive feedback and certification.
Get your work published as a peer-certified preprint with a DOI, making it citable and discoverable.
Large Language Models (LLMs) exhibit diverse and stable risk preferences in economic decision tasks, yet the drivers of this variation are unclear. Studying 50 LLMs, we show that alignment tuning for harmlessness, helpfulness, and honesty systematically increases risk aversion. A ten percent increase in ethics scores reduces risk appetite by two to eight percent. This induced caution persists across prompting strategies and carries over into economic forecasts. Alignment therefore promotes safety but can dampen valuable risk taking, revealing a tradeoff that risks suboptimal economic outcomes. Our framework provides an adaptable and enduring benchmark for tracking model risk preferences and this emerging tradeoff.
This review article synthesizes the burgeoning literature on the intersection of (generative) artificial intelligence (AI) and financial economics. We organize our review around six key areas: (1) the emergent role of generative AI as analytic tools, external shocks to the economy, and autonomous economic agents; (2) corporate finance, focusing on how firms respond to and benefit from AI; (3) asset pricing, examining how AI brings novel methodologies for return predictability, stochastic discount factor estimation, and investment; (4) household finance, investigating how AI promotes financial inclusion and improves financial services; (5) labor economics, analyzing AI's impact on labor market dynamics; and (6) the risks and challenges associated with AI in financial markets. We conclude by identifying unanswered questions and discussing promising avenues for future research.
This paper explores novel applications of quantum computing in cryptography and optimization problems. We present a comprehensive analysis of quantum algorithms and their potential impact on modern computational challenges. Our findings demonstrate significant improvements in processing efficiency and security protocols.