Attorneys propose framework for monitoring AI developers and protecting whistleblowers and consumers.
“AI will continue to have profound effects on our society and the world. We cannot leave it to tech giants alone to shape this future.”
WASHINGTON, DC, UNITED STATES, March 17, 2026 /EINPresswire.com/ -- It is well established that artificial intelligence ("AI") is becoming more ubiquitous as companies ramp up investment in infrastructure.
As a result of unequal access to information, the public, including those tasked with oversight, knows little about the development and deployment of AI.
Currently, AI companies also have little incentive to share any information for purposes of oversight and accountability, and their employees have limited legal protections to report risk-related concerns.
From a consumer standpoint, there is also inadequate information about the use of AI in the marketplace, affecting consumers’ ability to make informed choices in line with their preferences.
These information asymmetries create fundamental challenges for moderating the negative effects of AI, such as safety risks, job displacement, and consumer deception.
According to Litteral LLP partners Sean L. Litteral and Elvia M. Lopez, the first step is to adopt policy measures targeted toward developers, employees, and consumers to enhance access to information.
In their latest article, “Recoding A(I) to A(We): Addressing Information Asymmetries for Shared Prosperity,” the authors outline steps to closely monitor AI’s growth, to facilitate dialogue by protecting employees with non-public information, and to promote consumer choice through labels to distinguish between AI and human creations.
The article is available in the 2026 volume of the George Washington Journal of Law and Technology.