Encourage AI’s healthcare potential
Dec 02, 2024. American Habits first published this piece.
Artificial intelligence—or AI, as it is commonly called—is a burgeoning technology with scores of potential applications that researchers and developers have only begun to unlock. Healthcare is one sector in which AI already shows great promise for helping physicians, hospital systems, and their patients. Demand for healthcare services continues to rise as the American population ages, but the supply of doctors, nurses, healthcare technicians, and medical support staff has not kept pace.
AI is no replacement for doctors and nurses, but it can perform some data-related and diagnostic functions more efficiently than humans, which means it can reduce some administrative costs and improve care. For example, AI-powered software can analyze vast amounts of data much faster than humans or standard computer programs; it can diagnose some conditions, such as sepsis, and read X-rays more accurately than physicians; and it can improve, as it already has, our knowledge of certain diseases and their treatments. These capabilities augment healthcare staff, offering more efficient diagnoses and freeing time for doctors and nurses to treat patients as only human care providers can.
To achieve these advantages, however, AI developers require access to large, accurate, and highly sensitive pools of patient data that must be safeguarded with effective security protocols. As AI improves, the understandable calls for enhanced data privacy measures grow louder—and how those calls are answered will prove increasingly important.
Unfortunately, federal and state policymakers have signaled their interest in pursuing heavy-handed regulatory rules that put AI's use and development at risk. Regulatory barriers already have proven fatal to AI in Europe, leaving the nascent field almost entirely to America and China. But the U.S. looks ready to repeat, rather than learn from, Europe's mistake: President Biden's 2023 executive order directed practically every federal agency to explore new rules and regulations for AI technology. At least one federal agency is already collaborating with universities and large technology companies to create new requirements that will likely disadvantage smaller firms and reduce sector competition. The incoming Trump administration, with its focus on private-sector innovation, will likely rescind the Biden order and allow creative AI development to resume.
Regulatory moats are rarely a good idea. State and local lawmakers are actively drafting an inconsistent patchwork of AI-stunting rules that will favor “Big Tech” by making it more expensive, confusing, and legally treacherous for smaller developers to test and bring their products to market. A better approach would allow AI developers to partner with and work under state overseers. For example, Ohio can expand its financial regulatory “sandbox”—which allows businesses to develop and test products under agency oversight, temporarily free from many regulatory restrictions—to include healthcare-related AI innovation. Cultivating an AI-friendly regulatory environment will attract more developers, improve the fledgling technology’s healthcare applications, and offer Ohio’s premier hospital systems more cutting-edge software that can save physicians and patients time and money.
States can create multi-state compacts or collaborate with agencies like the National Institute of Standards and Technology (NIST) to develop standards and guidelines instead of more restrictive regulations. NIST could set voluntary standards for AI developers to meet, much as the Motion Picture Association—not governments—works with movie studios to rate movies.
Another potential threat to AI use and development lies with energy. AI programs typically run all day and all night, consuming significant quantities of power and requiring plenty of affordable, reliable electricity. States must ease permitting restrictions on power generators and electricity transmitters so that AI systems and digital data storage centers can operate reliably. Electricity grid operators have warned policymakers about the rising demand for electricity and the need to consider all forms of energy supply to meet it.
Artificial intelligence programs can help doctors and hospital systems meet many medical needs today and tomorrow. Policymakers should recognize AI’s risks and rewards, and encourage its safe use and development rather than pursuing knee-jerk regulations that may deny us the full potential of new treatments and technologies.
Rea S. Hederman Jr. is the vice president of policy at The Buckeye Institute.