When you think about the events and technologies that have defined the past decade, what comes to mind? If the COVID-19 pandemic and artificial intelligence don’t, I’m going to assume you’ve been living under a rock. This policy gets with the times and acknowledges that with great power comes great responsibility. There is no doubt that artificial intelligence has the potential to transform our way of life, and healthcare is one of the fields where this is especially true. Imagine being able to predict the spread of a disease before it happens, like a weather forecast. Or being able to generate tens of thousands of words of public health programming in the time it takes to bake a cake. These possibilities are exciting, but the reality is not nearly as compelling. The fact of the matter is that AI technology as it stands is not up to par: its data is not generalizable, it is hard to use, and it can be just as inaccurate as it is accurate. However, the policy recognizes all of these issues and lays out a framework for tackling them. First, it suggests drawing insights from a field of study that, in hindsight, seems obvious: neuroscience.
Neuroscience can inform AI policy because the two fields share fundamental concepts and face similar concerns and challenges. Previous regulatory efforts have overlooked the significant role consumers, and consequently their brains, play. Not only that, but AI is meant to mimic the human concept of intelligence. Thus, a neuroscientific approach is ideal for gauging the safety and efficacy of AI technology, because the technology is best interpreted within that context. Doing so will require mobilizing NIH funding to encourage researchers to explore how neuroscience insights can inform AI development and resolve the pressing public health concerns the technology is currently raising. Taking a human-informed approach to building AI will make it more responsive to our needs, which should be the goal.
The policy also presents concerning statistics about AI’s current capacity to enable misinformation and disinformation about health behaviors, and it stresses the importance of creating regulatory guardrails around what AI can draw on to generate content. A 2023 study using publicly available AI tools was able to create 102 disinformation articles about vaccination and vaping in 65 minutes. The researchers could find no safeguards that prevented the AI from taking existing information and catering to the harmful prompts it was given. AI technology as it exists, then, is not regulated tightly enough to prevent it from promoting negative health behaviors, and part of the solution lies in how and what these models are able to draw upon. It is imperative that a permanent council of health experts be created within the National AI Advisory Committee (NAIAC) to define explicit standards for the unbiased, accurate health information that AI is trained on.
There is no doubt that this policy speaks to pertinent issues society faces today and takes a refreshingly unique stance. It can be easy to see AI as a technology akin to the assembly line or the steam engine, when at its very core it intends to model a system that has only ever existed within a human. We should update this view of AI and require human considerations in its design, which will let it realize its full potential in public health and healthcare contexts.