ChatGPT’s Consumer Safety
With the rise of artificial intelligence on track to define this decade and beyond, federal regulators are scrambling to ensure the industry does not harm consumers or broader society. On Thursday, the Federal Trade Commission announced an investigation into ChatGPT maker OpenAI in a bid to tighten oversight of unfamiliar technologies. The move comes as the federal government works to rein in the tech sector, and specifically the growth of, and uncertainty surrounding, artificial intelligence. Even while launching his newest company, xAI, Tesla CEO Elon Musk has famously warned of the harm advanced AI could inflict on society, calling for a six-month moratorium on new tools and applications. Although there have been no updates on that potential pause, regulators continue to find other ways to scrutinize the safety of these models.
The probe into OpenAI’s operations aims to uncover exactly how its generative models are trained and whether the training datasets contain any private customer data. The investigation seeks to verify whether OpenAI has violated consumer protection laws that have been in place for decades. The FTC will begin by combing through the companies with direct access to OpenAI’s large language models to understand exactly how they obtain and use the consumer data within those datasets. Given that OpenAI CEO Sam Altman has been open about his company’s technology and cooperative with Washington in the past, some believe the FTC probe is merely an attempt to demonstrate a proactive stance toward all new AI applications.