AI or Die — How America’s Next President Will Shape Our Technological Future
As Americans make a final push to the polls in the 2024 presidential election, we face a decision that could redefine the trajectory of artificial intelligence (AI) in the United States. AI has become more than a technological advancement; it is a transformative force reshaping industries, governance, and daily life. The incoming administration will bear the responsibility of advancing AI innovation while implementing a governance framework that addresses ethical concerns, societal impacts, and global competitiveness. With nations like China and the European Union accelerating their AI strategies, America stands at a crossroads. Will the U.S. assert itself as a leader in ethical, innovative AI, or will it risk falling behind? Let’s explore the current landscape of AI in America, the differing AI policy approaches of Vice President Kamala Harris and former President Donald Trump, and the potential implications for U.S. leadership in AI.
The Current AI Landscape in the United States
The Biden-Harris administration has set foundational policies for AI governance, focusing on transparency and ethical integration. The National AI Research Resource pilot and the AI Safety Institute are notable efforts aimed at making AI safer and more accessible (Finance & Commerce, 2024). A U.S. Census Bureau survey reveals that business adoption of AI has been steadily increasing, with usage rising from 3.7% in late 2023 to 5.4% by early 2024, and expected to reach 6.6% by year-end (Finance & Commerce, 2024).
However, AI adoption varies widely across industries and organization sizes. Small and Medium Businesses (SMBs) are increasingly turning to AI to improve efficiency and reduce costs. According to the U.S. Chamber of Commerce, 98% of SMBs now use AI-enabled tools, with 40% utilizing generative AI for content creation and customer engagement (U.S. Chamber of Commerce, 2024). Furthermore, Lifewire reports that 64% of small business owners are either using or plan to use AI within the next two years, motivated by economic pressures and competitive necessity (Lifewire, 2024). In many cases, small business employees are leading this charge, with 80% bringing their own AI tools to work, reflecting a grassroots level of adoption (Microsoft, 2024).
Large enterprises have also embraced AI, but at a different scale and level of complexity. A recent IBM survey found that 85% of large companies are using AI to some extent, with many focusing on advanced analytics, customer service, and automation of back-office tasks (IBM Newsroom, 2024). Specifically, 35% of these companies leverage AI for real-time business intelligence and predictive analytics, enabling them to anticipate market shifts and adapt strategies accordingly (IBM Newsroom, 2024). Notably, IBM highlights that 47% of business owners use AI to create marketing content and digital advertisements, reflecting AI’s growing role in influencing consumer behavior.
Federal Government agencies are no exception in AI adoption. As of fiscal year 2022, the Government Accountability Office reported that 20 out of 23 federal agencies had identified approximately 1,200 AI use cases, spanning border security, data analysis, and public health (Government Accountability Office, 2024). These efforts are far from superficial; nearly 200 AI projects are currently in production, indicating a significant commitment to deploying AI for real-time operational use. Federal AI investments are also rising, with $1.9 billion allocated for AI research and development in 2024 alone, signaling an intensifying focus on integrating AI to enhance public sector efficiency and security (Government Accountability Office, 2024).
Diverging Approaches to AI Governance
The 2024 election presents two distinctly different paths for America’s AI future. Vice President Kamala Harris’s approach, inspired by the European Union’s AI Act, advocates for increased transparency and accountability. Her strategy is to introduce a risk-based framework for AI applications, imposing greater scrutiny on high-impact sectors like healthcare and criminal justice (Finance & Commerce, 2024). In her view, building public trust through transparency is essential for AI to achieve widespread adoption without infringing on civil liberties.
Former President Trump, on the other hand, proposes a minimal-regulation, innovation-first approach that prioritizes competitiveness. His platform posits that a free-market model will enable American companies to innovate faster, positioning the U.S. to outpace global competitors, particularly China (Finance & Commerce, 2024). While Trump’s model is appealing to business leaders eager for fewer restrictions, it raises concerns about potential ethical oversights, especially in areas where biased algorithms and privacy violations could harm vulnerable populations.
Addressing Racial and Gender Bias in AI
One of the most urgent issues surrounding AI today is the risk of perpetuating and amplifying societal biases. Studies consistently reveal that AI systems can inherit and magnify biases present in their training data. A 2018 MIT study demonstrated that facial recognition systems misidentified people of color, particularly women, at alarmingly high rates, with a 35% error rate for darker-skinned women compared to less than 1% for lighter-skinned men (MIT, 2018). This discrepancy in accuracy has troubling implications for sectors like law enforcement, where biased systems could lead to wrongful arrests, and healthcare, where misdiagnoses could disproportionately affect certain demographic groups.
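The disparity the MIT study documented is, at its core, a difference in per-group error rates. A minimal sketch of how such rates are computed is below; the data is entirely hypothetical (chosen only to mirror the 35% vs. under-1% figures cited above) and the function name `group_error_rates` is illustrative, not from the study itself.

```python
# Illustrative sketch: computing per-group error rates, the kind of
# disparity the 2018 MIT facial-analysis study measured.
# The sample data below is hypothetical, NOT the study's dataset.
from collections import defaultdict

def group_error_rates(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns each group's fraction of incorrect predictions."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit sample of 100 records per group.
sample = (
    [("darker_female", "wrong", "right")] * 35
    + [("darker_female", "right", "right")] * 65
    + [("lighter_male", "wrong", "right")] * 1
    + [("lighter_male", "right", "right")] * 99
)
rates = group_error_rates(sample)
print(rates)  # {'darker_female': 0.35, 'lighter_male': 0.01}
```

An audit that reports only an aggregate error rate would average these two numbers together and hide the disparity entirely, which is why per-group breakdowns are central to the bias findings discussed here.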
Bias also extends into hiring algorithms, where AI tools have favored male candidates in technical fields, often excluding qualified female candidates. Healthcare algorithms have likewise deprioritized Black patients because of biases embedded in historical data, highlighting the need for rigorous data diversity standards (Finance & Commerce, 2024). These issues underline the importance of inclusive data practices and regular audits, especially as AI becomes more entrenched in public life.
Harris has proposed regulatory measures that include mandatory bias audits for high-stakes applications, particularly in hiring and law enforcement, to counteract these disparities. Trump’s self-regulation approach, however, leaves such measures at the discretion of private companies, which could limit the effectiveness of bias mitigation efforts (Finance & Commerce, 2024).
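What would a mandatory hiring audit actually check? One existing benchmark is the "four-fifths rule" from U.S. employment guidelines, under which a group's selection rate below 80% of the highest group's rate can signal adverse impact. The sketch below applies that benchmark to hypothetical screening outcomes; the function names and data are illustrative assumptions, not part of any proposed regulation.

```python
# Minimal sketch of one component of a hiring-algorithm bias audit:
# comparing per-group selection rates against the four-fifths (80%) rule.
# All data below is hypothetical, for illustration only.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of booleans (True = selected)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def four_fifths_check(rates, threshold=0.8):
    """Return True for groups whose selection rate is at least `threshold`
    times the highest group's rate; False flags possible adverse impact."""
    best = max(rates.values())
    return {g: r / best >= threshold for g, r in rates.items()}

outcomes = {
    "group_a": [True] * 60 + [False] * 40,  # 60% selected
    "group_b": [True] * 30 + [False] * 70,  # 30% selected
}
rates = selection_rates(outcomes)
print(four_fifths_check(rates))  # group_b flagged: 0.30 / 0.60 = 0.5 < 0.8
```

A check this simple is only a starting point; real audits also examine error rates, feature proxies for protected attributes, and outcomes over time, but it illustrates why audits are straightforward to mandate once selection data is recorded by group.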
Learning from Global Examples: The EU and China’s Regulatory Models
Globally, other major economies offer instructive AI governance models. The European Union’s AI Act, implemented to regulate AI based on risk levels, mandates rigorous oversight for high-risk applications, such as facial recognition and healthcare diagnostics (Finance & Commerce, 2024). Although the EU’s approach promotes accountability, it has led to concerns that compliance costs may stifle innovation, particularly among startups.
China’s AI model, in contrast, is highly centralized, allowing the government to enforce strict control over AI applications, especially those related to content regulation and public surveillance. This authoritarian approach has enabled rapid deployment of AI technologies that align with state priorities, but it often compromises individual freedoms and stifles innovation outside of government-approved initiatives (Finance & Commerce, 2024). These contrasting models highlight the challenges the U.S. faces in balancing innovation with democratic values.
The U.S. Strategy to Dominate the Global AI Market
Both Harris and Trump recognize the strategic importance of AI leadership. The Biden-Harris administration has pursued initiatives like the AI for National Interest program, which allocated billions to research in defense, healthcare, and education, solidifying AI’s role in America’s national priorities (Finance & Commerce, 2024). The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework further emphasizes sector-specific regulation, encouraging innovation while adhering to ethical guidelines.
Internationally, the U.S. has engaged with allies through the AI Safety Summits to establish common standards for responsible AI. The upcoming 2025 AI Action Summit in France will offer an important platform for the U.S. to advocate for enforceable, democratic AI principles, positioning itself as a counterweight to China’s authoritarian influence (Finance & Commerce, 2024).
Building a Blueprint for Ethical AI Governance: A Sector-Specific Model
A sector-specific regulatory model is increasingly viewed as essential for promoting responsible AI development in the U.S. The Framework for Identifying Highly Consequential AI Use Cases developed by the SCSP prioritizes high-risk applications, allowing innovation to flourish in lower-risk areas while addressing significant societal risks (Finance & Commerce, 2024). This approach provides flexibility for companies to grow while holding them accountable for ethical issues in high-stakes applications like healthcare and finance.
Additional steps include bias audits, data diversity standards, and support for a more diverse tech workforce. Together, these measures could create an equitable and transparent AI landscape that aligns with American values.
The Path Forward: Indicators of Progress and Future Implications
Progress in responsible AI governance can be tracked by key indicators:
- Bipartisan Support for a Dedicated AI Regulatory Body: Establishing an agency to oversee AI policies would provide consistency across administrations, ensuring that AI governance evolves with technological advances.
- Increased Federal Funding for AI Research: Expanding investment in AI research, especially in high-stakes sectors like healthcare and national security, will help maintain U.S. competitiveness.
- Closer Private-Public Partnerships: Collaborations with private industry can align ethical considerations with business practices, fostering responsible AI growth.
- International Cooperation at AI Summits: Engagement in global summits, like the upcoming AI Action Summit, will allow the U.S. to influence international AI standards and safeguard democratic values.
- Widespread Adoption of NIST’s AI Risk Management Framework: Adoption across critical industries will indicate the success of a sector-specific model that balances ethical and innovation priorities.
Empowering the Public in the AI Age
As the U.S. heads toward a new AI-driven future, the incoming administration’s approach to AI will have far-reaching impacts on society, ethics, and the global economy. By blending ethical governance with competitive innovation, the U.S. can lead in shaping an AI landscape that respects individual freedoms and advances societal good. The stakes are high, but with informed policies and international partnerships, America has the opportunity to secure a leadership role that benefits not just the nation, but the world.
References
- Finance & Commerce. (2024). Statistics on AI Use Across Sectors. Retrieved from [Finance & Commerce].
- Government Accountability Office. (2024). AI Use in Federal Government. Retrieved from [gao.gov].
- IBM Newsroom. (2024). Enterprise AI Adoption Report. Retrieved from [ibm.com].
- Lifewire. (2024). Small Business AI Use Study. Retrieved from [lifewire.com].
- Microsoft. (2024). SMB AI Adoption Trends. Retrieved from [microsoft.com].
- MIT. (2018). Facial Recognition Bias Study. Retrieved from [MIT].
- Pew Research Center. (2023). Survey on Individual AI Engagement. Retrieved from [pewresearch.org].
- PwC. (2023). Business AI Adoption Report. Retrieved from [pwc.com].
- U.S. Chamber of Commerce. (2024). AI Adoption in Small Businesses. Retrieved from [uschamber.com].
About the Author
Dr. Kimberly N. West is a renowned AI and Microsoft 365 consultant, author of AI or Die: How Smart Business Owners Use AI to Grow and Scale, and the founder of the Facebook group The Bu$ine$$ of Artificial Intelligence, where she helps business owners use the power of AI to grow and scale. With over 25 years of experience in technology and management consulting, Dr. Kim empowers clients to transform their businesses through innovative AI strategies and Microsoft solutions. Her work bridges the gap between complex technology and practical, growth-oriented applications, making AI accessible and impactful for companies of all sizes.